
Table of Contents

Chapter 1: Principles of Operating Systems
    Operating System Introduction
    Two Views of the Operating System
        Operating System as an Extended Machine or Virtual Machine (or as a User/Computer Interface)
        Operating System as a Resource Manager
    Computer System Organization
    Files
    System Call
    Shell
    Kernel
    Operating System Structure
        Monolithic System
        Layered Operating System
        Virtual Machines
        Client-Server or Microkernel
    Functions of Operating System
    Evolution of Operating Systems
        Serial Processing
        Simple Batch Processing
        Multiprogrammed Batch System
        Multitasking or Time Sharing System
    Distributed System
    Distributed Operating System
    Real Time Operating System
Chapter 2: Processes and Threads
    Introduction to Process
    The Process Model
        Process Creation
        Process Control Block
        Process Termination
        Process States
    Implementation of Process
    Context Switching
    Threads
        Multithreading
        Benefits of Multithreading
        Process vs Thread
    Multithreading Models
    Interprocess Communication
        Shared Memory
        Message Passing
    Race Condition
        Avoiding Race Conditions
    Techniques for Avoiding Race Conditions
        1. Disabling Interrupts
        2. Lock Variables
        3. Strict Alternation
        4. Peterson's Solution
        5. The TSL Instruction
        Problems with Mutual Exclusion
        Priority Inversion Problem
        Sleep and Wakeup
    Examples Using Sleep and Wakeup Primitives
    Semaphore
    Monitors
    Message Passing
    Classical IPC Problems
        Readers-Writers Problem
        Sleeping Barber Problem
    Deadlock
        Resources
        What is Deadlock?
        Starvation vs. Deadlock
        Conditions for Deadlock
        Deadlock Modeling
        Methods for Handling Deadlock
        Deadlock Prevention
        Deadlock Avoidance
        Banker's Algorithm
        Banker's Algorithm for Multiple Resources
        Detection and Recovery
        The Ostrich Algorithm
Chapter 3: Kernel
    Introduction
    Context Switch
    Types of Kernels
        1. Monolithic Kernels
        2. Microkernels
        3. Exo-kernel
    Interrupt Handler
        First Level Interrupt Handler (FLIH)
Chapter 4: Scheduling
    Scheduling Criteria
    Types of Scheduling
    Scheduling Algorithms
        1. First Come First Serve
        2. Shortest Job First
        3. Round-Robin Scheduling
        4. Priority Scheduling
        Multilevel Queue Scheduling
        Guaranteed Scheduling
        Lottery Scheduling
        Two-Level Scheduling
    Scheduling in Real Time Systems
    Policy vs Mechanism
Chapter 5: Memory Management
    Types of Memory
    Memory Management
    Two Major Schemes for Memory Management
        Contiguous Allocation
        Non-contiguous Allocation
    Memory Partitioning
        1. Fixed Partitioning
        2. Dynamic/Variable Partitioning
        Memory Management with Bitmaps
        Memory Management with Linked Lists
    Swapping
        Logical Address vs Physical Address
        Non-contiguous Memory Allocation
    Virtual Memory
    Paging
    Page Fault
    Paging Hardware
    Page Replacement Algorithms
        The Optimal Page Replacement Algorithm
        FIFO (First In First Out)
        LRU (Least Recently Used)
        The Second Chance Page Replacement Algorithm
        The Clock Page Replacement Algorithm
        Paging vs Segmentation
Chapter 6: Input/Output
    What about I/O?
        Some Operational Parameters
    Device Controllers
    Memory-mapped Input/Output
    Port-mapped I/O
    DMA (Direct Memory Access)
    Device Driver
    Ways to do Input/Output
        Programmed I/O
        Interrupt-driven I/O
        DMA
    Disks
        Terminals
    Clock
    RAID
Chapter 7: File Systems
    What is a File System?
    File Naming
    File Attributes
    File Operations
    File Structure
    File Organization and Access Mechanisms
    File Allocation Methods
        Contiguous Allocation
        Linked List Allocation
        Indexed Allocation (I-Nodes)
    File System Layout
    Directories
    Access Control Matrix
    Access Control List
Chapter 8: Distributed Operating Systems
    Architecture
    Bus-based Multiprocessors
    Switched Multiprocessors
        Crossbar Switch
        Omega Switching Network
    Bus-based Multicomputers
    Switched Multicomputers
    Communication in Distributed Systems
        RPC
        ATM
        Client-Server Model

Chapter 1: Principles of Operating Systems

Introduction, Operating System Concepts: processes, files, shell, system calls, security; Operating System structure: monolithic systems, layered, virtual machines, client-server; and evolution of Operating Systems: user driven, operator driven, simple batch system, off-line batch system, directly coupled off-line system, multi-programmed spooling system, on-line timesharing system, multiprocessor systems, multi-computer/distributed systems, real-time Operating Systems.

Operating System Introduction:

Computer software can roughly be divided into two types:
a) Application software, which performs the actual work the user wants.
b) System software, which manages the operation of the computer itself.

The most fundamental system program is the operating system, whose job is to control all the computer's resources and provide a base upon which application programs can be written. The operating system acts as an intermediary between a user of a computer and the computer hardware.

Fig 1.1: Abstract view of the components of a computer system.

A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users, as shown in Fig 1.1.

An operating system is similar to a government. Like a government, it performs no useful function by itself; it simply provides an environment within which other programs can do useful work.

Two views of the Operating System:

Operating System as an Extended Machine or Virtual Machine (or as a User/Computer Interface):

The operating system masks or hides the details of the hardware from programmers and general users and provides a convenient interface for using the system. The program that hides the truth about the hardware from the user and presents a nice, simple view of named files that can be read and written is, of course, the operating system. In this view, the function of the OS is to present the user with the equivalent of an extended machine or virtual machine that is easier to program than the underlying hardware. Just as the operating system shields the user from the disk hardware and presents a simple file-oriented interface, it also conceals a lot of unpleasant business concerning interrupts, timers, memory management and other low-level features. The placement of the OS is shown in Fig 1.2. A major function of the OS is to hide all the complexity presented by the underlying hardware and give the programmer a more convenient set of instructions to work with.

Fig 1.2: A computer system consists of hardware, system programs and application programs.

Operating System as a Resource Manager

A computer system has many resources. Modern computers consist of processors, memories, timers, disks, mice, network interfaces, printers, and a wide variety of other devices. In this alternative view, the job of the operating system is to provide for an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs competing for them. Imagine what would happen if three programs running on some computer all tried to print their output simultaneously on the same printer. The first few lines of printout might be from program 1, the next

few from program 2, then some from program 3, and so forth. The result would be chaos. The operating system can bring order to the potential chaos by buffering all the output destined for the printer on the disk. When one program is finished, the operating system can then copy its output from the disk file where it has been stored to the printer, while at the same time the other program can continue generating more output, oblivious to the fact that the output is not really going to the printer (yet). Figure 1.3 suggests the main resources that are managed by the OS. A portion of the OS is in main memory; this includes the kernel or nucleus. The remainder of main memory contains user programs and data. The allocation of this resource (main memory) is controlled jointly by the OS and memory-management hardware in the processor.

Fig 1.3: The operating system as resource manager.


computer System organization: ) modern general purpose computer system consists of one or more cpus and a num"er of device controllers connected through a common "us that provides access to shared memory

Fig 1.4: A modern computer system.

Files:

A major function of the operating system is to hide the peculiarities of the disks and other I/O devices and present the programmer with a nice, clean abstract model of device-independent files. System calls are obviously needed to create files, remove files, read files, and write files. Before a file can be read, it must be opened, and after it has been read it should be closed, so calls are provided to do these things. To provide a place to keep files, most operating systems have the concept of a directory as a way of grouping files together. A student, for example, might have one directory for each course he is taking (for the programs needed for that course), another directory for his electronic mail, and still another directory for his World Wide Web home page. System calls are then needed to create and remove directories. Calls are also provided to put an existing file into a directory, and to remove a file from a directory. Directory entries may be either files or other directories. This model also gives rise to a hierarchy of the file system, as shown in the fig.


Every file within the directory hierarchy can be specified by giving its path name from the top of the directory hierarchy, the root directory. Such absolute path names consist of the list of directories that must be traversed from the root directory to get to the file, with slashes separating the components. In Fig. 1-6, the path for file CS101 is /Faculty/Prof.Brown/Courses/CS101. The leading slash indicates that the path is absolute, that is, starting at the root directory. As an aside, in Windows the backslash (\) character is used as the separator instead of the slash (/) character, so the file path given above would be written as \Faculty\Prof.Brown\Courses\CS101.

System Call:

In computing, a system call is how a program requests a service from an operating system's kernel. This may include hardware-related services (e.g. accessing the hard disk), creating and executing new processes, and communicating with integral kernel services (like scheduling). System calls provide the interface between a process and the operating system. On Unix, Unix-like and other POSIX-compatible operating systems, popular system calls are open, read, write, close, wait, execve, fork, exit, and kill. Many of today's operating systems have hundreds of system calls. For example, Linux has over 300 different calls.

System calls can be roughly grouped into five major categories:

1. Process control:
   load, execute
   create process; terminate process
   get/set process attributes
   wait for time, wait event, signal event
   allocate, free memory
2. File management:
   create file; delete file
   open; close
   read; write; reposition
   get/set file attributes
3. Device management:
   request device; release device
   read; write; reposition
   get/set device attributes
   logically attach or detach devices
4. Information maintenance:
   get/set time or date
   get/set system data
   get/set process, file, or device attributes
5. Communication:
   create, delete communication connection
   send, receive messages
   transfer status information
   attach or detach remote devices
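To make a few of these categories concrete, the minimal sketch below issues real POSIX system calls (open, read, write and close from file management; getpid from information maintenance). It assumes a POSIX system; the path /etc/hostname is only an example input and may differ per system.

/* Minimal sketch: a few POSIX system calls from the categories above. */
#include <fcntl.h>      /* open */
#include <stdio.h>
#include <unistd.h>     /* read, write, close, getpid */

int main(void)
{
    char buf[128];

    /* File management: open an existing file for reading */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* read() and write() move data through the kernel */
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* echo to standard output */
    close(fd);

    /* Information maintenance: ask the kernel for this process's ID */
    printf("my pid: %d\n", (int)getpid());
    return 0;
}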

Shell:

A shell is a program that provides the traditional, text-only user interface for Linux and other Unix operating systems. Its primary function is to read commands typed into a console or terminal window and then execute them. The term shell derives from the fact that it is an outer layer of the OS. A shell is an interface between the user and the internal parts of the operating system. A user is in a shell (i.e. interacting with the shell) as soon as the user has logged into the system. The shell is the most fundamental way that a user can interact with the system, and it hides the details of the underlying system from the user.

Examples: Bourne shell, Bash shell, Korn shell, C shell.

Kernel:

In computing, the kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually, as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application

software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.

Operating system tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels execute all the operating system code in the same address space to increase the performance of the system, microkernels run most of the operating system services in user space as servers, aiming to improve maintainability and modularity of the operating system. A range of possibilities exists between these two extremes.
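The shell's read-and-execute cycle described above can be sketched in a few lines of C. This is a toy, not a real shell: no pipes, quoting, redirection or job control, and the prompt string mysh> is invented for the example.

/* A toy shell loop: read a command, fork a child, exec it, wait. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];
    char *argv[32];

    while (printf("mysh> "), fgets(line, sizeof line, stdin) != NULL) {
        /* split the line into whitespace-separated tokens */
        int argc = 0;
        for (char *tok = strtok(line, " \t\n");
             tok != NULL && argc < 31;
             tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;                  /* empty line: prompt again */

        pid_t pid = fork();
        if (pid == 0) {                /* child: run the command */
            execvp(argv[0], argv);
            perror("execvp");          /* reached only if exec fails */
            exit(1);
        } else if (pid > 0) {
            wait(NULL);                /* parent: wait for the child */
        } else {
            perror("fork");
        }
    }
    return 0;
}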

Operating System Structure:

The structure of an operating system is dictated by the model employed in building it. An operating system model is a broad framework that unifies the many features and services the operating system provides and the tasks it performs. Operating systems are broadly classified into the following categories, based on their structuring mechanism:

a. Monolithic System
b. Layered System
c. Virtual Machine
d. Exokernel
e. Client-Server Model

Monolithic System

The components of a monolithic operating system are organized haphazardly, and any module can call any other module without any reservation. As in other operating systems, applications in a monolithic OS are separated from the operating system itself. That is, the operating system code runs in a privileged processor mode (referred to as kernel mode), with access to system data and to the hardware; applications run in a non-privileged processor mode (called user mode), with a limited set of interfaces available and with limited access to system data. The monolithic operating system structure, with separate user and kernel processor modes, is shown in the Figure.


This approach might well be subtitled "The Big Mess." The structure is that there is no structure. The operating system is written as a collection of procedures, each of which can call any of the others whenever it needs to. When this technique is used, each procedure in the system has a well-defined interface in terms of parameters and results, and each one is free to call any other one, if the latter provides some useful computation that the former needs.

Example Systems: CP/M and MS-DOS

Layered Operating System

The layered approach consists of breaking the operating system into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware layer; the highest layer is the user interface. The main advantage of the layered approach is modularity. The layers are selected such that each uses functions (operations) and services of only lower-level layers. This approach simplifies debugging and

system verification. That is, in this approach the Nth layer can access services provided by the (N-1)th layer and provide services to the (N+1)th layer. This structure also allows the operating system to be debugged starting at the lowest layer, adding one layer at a time until the whole system works correctly. Layering also makes it easier to enhance the operating system; one entire layer can be replaced without affecting other parts of the system.

The layered approach to design was first used in the THE operating system at the Technische Hogeschool Eindhoven. The THE system was defined in six layers, as shown in the fig below.


Example Systems: VAX/VMS, Multics, UNIX

Virtual Machines:
The virtual machine approach provides an interface that is identical to the underlying bare hardware. Each process is provided with a (virtual) copy of the underlying computer, as shown in the fig. The resources of the physical computer are shared to create the virtual machines. CPU scheduling can be used to share the CPU and to create the appearance that users have their own processors.

Fig: a) Non-virtual machine. b) Virtual machine.

Although the virtual machine concept is useful, it is difficult to implement. Much effort is required to provide an exact duplicate of the underlying machine. Example: Java.

Client-Server or Microkernel

The advent of a new concept in operating system design, the microkernel, is aimed at migrating traditional services of an operating system out of the monolithic kernel into user-level processes. The idea is to divide the operating system into several processes, each of which implements a single set of services - for example, I/O servers, a memory server, a process server, a threads interface system. Each server runs in user mode and provides services to the requesting clients. The client, which can be either another operating system component or an application program, requests a service by sending a message to the server. An OS kernel (or microkernel) running in kernel mode delivers the message to the appropriate server; the server performs the operation; and the microkernel delivers the result to the client in another message, as illustrated in the Figure.


Fig: The client-server model.

Fig: The client-server model in a distributed system.

Functions of the Operating system:
1. Memory management function
2. Processor management function
3. I/O device management function
4. File management function

Evolution of Operating Systems:

Evolution of Operating Systems: user driven, operator driven, simple batch system, off-line batch system, directly coupled off-line system, multi-programmed spooling system, online timesharing system, multiprocessor systems, multi-computer/distributed systems, real-time Operating Systems.

1. Serial processing
2. Batch processing
3. Multiprogramming
4. Multitasking or time-sharing system
5. Network operating system
6. Distributed operating system
7. Multiprocessor operating system
8. Real-time operating system
9. Modern operating system

Serial Processing:

Early computers, from the late 1940s to the mid 1950s, required the programmer to interact directly with the computer hardware. These machines were called bare machines as they had no OS. Every computer system was programmed in its own machine language, using punch cards, paper tapes and language translators.

These systems presented two major problems:
1. Scheduling
2. Set-up time

Scheduling: A sign-up sheet was used to reserve machine time. A user might sign up for an hour but finish his job in 45 minutes; this resulted in wasted computer idle time. The user might also fail to finish his job within the allotted time.

Set-up time: Running a single program involved:
- loading the compiler and source program into memory
- saving the compiled program (object code)
- loading and linking together the object program and common functions

Each of these steps involved mounting or dismounting tapes or setting up punch cards. If an error occurred, the user had to go back to the beginning of the set-up sequence. Thus, a considerable amount of time was spent just setting up a program to run. This mode of operation is termed serial processing, reflecting the fact that users access the computer in series.

Simple Batch Processing:

Early computers were very expensive, and therefore it was important to maximize processor utilization. The time wasted on scheduling and setup in serial processing was unacceptable. To improve utilization, the concept of a batch operating system was developed. A batch is defined as a group of jobs with similar needs. The operating system allows users to form batches. The computer executes each batch sequentially, processing all jobs of a batch as though they were a single process; this is called batch processing.

,he central idea "ehind the simple "atch#processing scheme is the use of a piece of soft&are +no&n

as the monitor. With this type of OS, the user no longer has direct access to the processor. Instead, the user submits the job on cards or tape to a computer operator, who batches the jobs together sequentially and places the entire batch on an input device, for use by the monitor. Each program is constructed to branch back to the monitor when it completes processing, at which point the monitor automatically begins loading the next program.

Fig 1.5: Memory layout for a resident monitor

*ith a "atch operating system, processor time alternates "et&een e2ecution of user programs and e2ecution of the monitor( ,here have "een t&o sacrifices: Some main memory is no& given over to the monitor and some processor time is consumed "y the monitor( 9oth of these are forms of overhead( Multiprogrammed )atch System: ) single program cannot +eep either CP! or I$O devices "usy at all times( Multiprogramming increases CP! utili@ation "y organi@ing -o"s in such a manner that CP! has al&ays one -o" to e2ecute( If computer is re1uired to run several programs at the same time, the processor could "e +ept "usy for the most of the time "y s&itching its attention from one program to the ne2t( )dditionally I$O transfer could overlap the processor activity i(e, &hile one program is a&aiting for an I$O transfer, another program can use the processor( So CP! never sits idle or if comes in idle state then after a very small time it is again "usy( ,his is illustrated in fig "elo&(


Fig 1.6: Multiprogramming example

Multitasking or Time Sharing System:

Multiprogramming did not provide for user interaction with the computer system. Time sharing, or multitasking, is a logical extension of multiprogramming that provides user interaction. More than one user interacts with the system at the same time. The switching of the CPU between users is so fast that it gives each user the impression that he alone is working on the system, while it is actually shared among different users. CPU time is divided into time slots depending upon the number of users using the system. Just as multiprogramming allows the processor to handle multiple batch jobs at a time, multiprogramming can also be used to handle multiple interactive jobs. In this latter case, the technique is referred to as time sharing, because processor time is shared among multiple users. A multitasking system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Each user has at least one separate program in

memory. Multitasking systems are more complex than multiprogramming systems; they must provide a mechanism for job synchronization and communication, and must ensure that the system does not go into deadlock.

)lthough "atch processing is still in use "ut most of the system today availa"le uses the concept of multitas+ing and Multiprogramming(

"istri#uted System:
Multiprocessor system: # Ieneral term for the use of t&o or more CP!s for a computer system( # Can vary &ith conte2t, mostly as a function of ho& CP!s are defined( # ,he term multiprocessing is sometimes used to refer to the e2ecution of multiple concurrent soft&are processes in a system as opposed to a single process at any one instant( Multiprogramming: Multiprogramming is more appropriate to descri"e concept &hich is implemented mostly in soft&ares, &hereas multiprocessing is more appropriate to descri"e the use of multiple hard&are CP!s( ) system can "e "oth multiprocessing and multiprogramming, only one of the t&o or neither of the t&o( Processor Coupling: Its the logical connection of the CP!s( Multiprocessor system have more than one processing unit sharing memory$peripherals devices( ,hey have greater computing po&er and higher relia"ility( Multiprocessor system can "e classified into t&o types: /( ,ightly coupled 2. 1osely coupled%distri9uted). !ach processor has its own memory and copy of the OS. ig!tly Coupled" Multiprocessor System#: ,ightly coupled multiprocessor system contain multiple CP!s that are connected at the "us level( ach processor is assigned a specific duty "ut processor &or+ in close association, possi"ly sharing one memory module( chip multiprocessors also +no&n as multi#core computing involves more than one processor placed on a single chip and can "e thought as the most e2treme form of tightly coupled multiprogramming( :ual core, Core#4 :uo, Intel Core IE etc are the "rand name used for various mid#range to high end consumers and "usiness multiprocessor made "y Intel(

Loosely Coupled (Distributed) System: Loosely coupled systems, often referred to as clusters, are based on multiple stand-alone single- or dual-processor commodity computers interconnected via a high-speed communication system. Distributed systems are connected via a distributed operating system.

Multiprocessor operating system: A multiprocessor operating system aims to support high performance through the use of multiple CPUs. It consists of a set of processors that share a set of physical memory blocks over an interconnected

network. An important goal is to make the number of CPUs transparent to the application. Achieving such transparency is relatively easy because the communication between different (parts of) applications uses the same primitives as those in a uniprocessor OS. The idea is that all communication is done by manipulating data at the shared memory locations, and that we only have to protect that data segment against simultaneous access. Protection is done through synchronization primitives like semaphores and monitors, as sketched below the figure.

Fig: Multiprocessor System.
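The protection just mentioned can be illustrated with a user-level sketch: two threads updating one shared counter, serialized by a POSIX semaphore. The counter and the iteration count are invented for the example; this shows the idea of guarding a shared data segment, not how a kernel implements it.

/* Sketch: protecting shared data with a semaphore (POSIX threads). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t lock;          /* semaphore guarding the shared counter */
static long  counter = 0;   /* data in "shared memory"               */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);    /* enter critical section */
        counter++;
        sem_post(&lock);    /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&lock, 0, 1);  /* initial value 1 gives mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000, never a lost update */
    return 0;
}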

"istri#uted Operating System:


) recent trend in computer system is to distri"ute computation among several processors( In contrasts to the tightly coupled system the processors do not share memory or a cloc+( Instead, each processor has its o&n local memory( ,he processors communicate &ith one another through various communication lines such as computer net&or+( :istri"uted operating system are the operating system for a distri"uted system6 a net&or+ of autonomous computers connected "y a communication net&or+ through a message passing mechanisms'( ) distri"uted operating system controls and manages the hard&are and soft&are resources of a distri"uted system( *hen a program is e2ecuted on a distri"uted system, user is not a&are of &here the program is e2ecuted or the location of the resources accessed(

Fig: Architecture of a Distributed system.



Examples of Distributed OS: Amoeba, Chorus, Alpha Kernel.

Real Time Operating System:


The primary objective of a real-time operating system is to provide a quick response time and thus to meet a scheduling deadline. User convenience and resource utilization are secondary concerns for these systems. Real-time systems have many events that must be accepted and processed in a short time or within a certain deadline. Such applications include rocket launching, flight control, robotics, real-time simulation, telephone switching equipment, etc.

Real-time systems are classified into two categories:
a. Soft Real-time System: If certain deadlines are missed, the system continues working with no failure, but its performance degrades.
b. Hard Real-time System: If any deadline is missed, the system will fail to work or will not work properly. This type of system guarantees that critical tasks are completed on time.


Chapter 2: Processes and Threads

Process Concepts: Introduction, Definition of Process, Process states and transitions, PCB (Process Control Block), Concurrent Process: Introduction, Parallel Processing, IPC (Inter-process Communication), Critical Regions and conditions, Mutual Exclusion, Mutual Exclusion Primitives and Implementation, Dekker's Algorithm, Peterson's Algorithm, TSL (test and set lock), Locks, Producer and consumer problem, monitors, Message Passing, Classical IPC problems, Deadlock and Indefinite Postponement: Introduction, Preemptable and Nonpreemptable Resources, Conditions for deadlock, deadlock modeling, deadlock prevention, deadlock avoidance, deadlock detection and recovery, Starvation, Threads: Introduction, thread model, thread usage, advantages of threads.

Introduction to process:

A process is an instance of a program in execution. A program by itself is not a process; a program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file), whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when an executable file is loaded into memory.

Program: a set of instructions a computer can interpret and execute.

Process: the dynamic part of a program in execution; a live entity that can be created, executed and terminated. It goes through different states (Wait, Running, Ready, etc.) and requires resources to be allocated by the OS. One or more processes may be executing the same code.

Program: static; has no states.

This example illustrates the difference between a process and a program:

main()
{
    int i, prod = 1;
    for (i = 1; i <= 100; i++)
        prod = prod * i;
}

It is a program containing one multiplication statement (prod = prod * i), but the process will execute

100 multiplications, one at a time, through the 'for' loop. Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences. For instance, several users may be running different copies of a mail program, or the same user may invoke many copies of a web browser program. Each of these is a separate process, and although the text sections are equivalent, the data, heap and stack sections may vary.

The Process Model

Fig 1.1: (a) Multiprogramming of four programs. (b) Conceptual model of four independent, sequential processes. (c) Only one program is active at any instant.

A process is just an executing program, including the current values of the program counter, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, of course, the real CPU switches back and forth from process to process, but to understand the system, it is much easier to think about a collection of processes running in (pseudo) parallel than to try to keep track of how the CPU switches from program to program. This rapid switching back and forth is called multiprogramming.

Process Creation:

There are four principal events that cause processes to be created:
1. System initialization.
2. Execution of a process creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.

Parent processes create children processes, which, in turn, create other processes, forming a tree of processes. Generally, a process is identified and managed via a process identifier (pid).

When an operating system is booted, often several processes are created. Some of these are foreground processes, that is, processes that interact with (human) users and perform

work for them. Others are background processes, which are not associated with particular users, but instead have some specific function. For example, a background process may be designed to accept incoming requests for web pages hosted on that machine, waking up when a request arrives to service the request. Processes that stay in the background to handle some activity such as web pages, printing, and so on are called daemons. In addition to the processes created at boot time, new processes can be created afterward as well. Often a running process will issue system calls to create one or more new processes to help it do its job. In interactive systems, users can start a program by typing a command or double-clicking an icon.

In UNIX there is only one system call to create a new process: fork. This call creates an exact clone of the calling process. After the fork, the two processes, the parent and the child, have the same memory image, the same environment strings, and the same open files. That is all there is. Usually, the child process then executes execve or a similar system call to change its memory image and run a new program. For example, when a user types a command, say, sort, to the shell, the shell forks off a child process and the child executes sort.

The C program shown below demonstrates the fork and exec system calls used in UNIX. We now have two different processes running a copy of the same program. The value of the PID for the child process is zero; that for the parent is an integer value greater than zero. The child process overlays its address space with the UNIX command /bin/ls using the execlp() system call (execlp is a version of exec). The parent waits for the child process to complete with the wait() system call. When the child process completes (by either implicitly or explicitly invoking exit()), the parent process resumes from the call to wait, where it completes using the exit() system call. This is also illustrated in the fig below.

#include <stdio.h>
#include <stdlib.h>     /* exit */
#include <unistd.h>     /* fork, execlp */
#include <sys/types.h>  /* pid_t */
#include <sys/wait.h>   /* wait */

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {            /* error occurred */
        fprintf(stderr, "Fork Failed");
        printf("%d", pid);
        exit(-1);
    }
    else if (pid == 0) {      /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                    /* parent process */
        /* parent will wait for the child to complete */

        wait(NULL);
        printf("Child Complete\n");
        printf("%d", pid);
        exit(0);
    }
}

The last situation in which processes are created applies only to the batch systems found on large mainframes. Here users can submit batch jobs to the system (possibly remotely). When the operating system decides that it has the resources to run another job, it creates a new process and runs the next job from the input queue in it.

UNIX system calls:
fork: system call that creates a new process.
exec: system call used after a fork to replace the process's memory space with a new program.

Process Control Block:

In an operating system each process is represented by a process control block (PCB), or task control block. It is a data structure that physically represents a process in the memory of a computer system. It contains many pieces of information associated with a specific process, including the following:

Identifier: A unique identifier associated with this process, to distinguish it from all other processes.
State: If the process is currently executing, it is in the running state.
Priority: Priority level relative to other processes.
Program counter: The address of the next instruction in the program to be executed.
Memory pointers: Includes pointers to the program code and data associated with this process, plus any memory blocks shared with other processes.
Context data: These are data that are present in registers in the processor while the process is executing.
I/O status information: Includes outstanding I/O requests, I/O devices (e.g., tape drives) assigned to this process, a list of files in use by the process, and so on.

Fig 1.2: Process Control Block

Accounting information: May include the amount of processor time and clock time used, time limits, account numbers, and so on.

Process Termination:

After a process has been created, it starts running and does whatever its job is. After some time it will terminate due to one of the following conditions:
1. Normal exit (voluntary).
2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).

Process States:
Each process may be in one of the following states:

Fig 1.3: Process state transition diagram

New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
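The transitions between these states follow the arrows of the diagram above. As a small sketch, the C fragment below encodes the states and checks whether a given transition is legal; the names are invented for illustration and do not come from any particular kernel.

/* Sketch: process states and a legal-transition check, mirroring the
   state-transition diagram. Names are illustrative only. */
#include <stdbool.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

bool legal_transition(enum proc_state from, enum proc_state to)
{
    switch (from) {
    case NEW:     return to == READY;       /* admitted by the OS     */
    case READY:   return to == RUNNING;     /* dispatched             */
    case RUNNING: return to == READY        /* interrupt / timeout    */
                      || to == WAITING      /* waits for I/O or event */
                      || to == TERMINATED;  /* exit                   */
    case WAITING: return to == READY;       /* event completed        */
    default:      return false;             /* no transitions out     */
    }
}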


Implementation of Process:

The operating system maintains a table (an array of structures) known as the process table, with one entry per process, to implement processes. The entry contains details about the process such as the process state, program counter, stack pointer, memory allocation, the status of its open files, its accounting information, scheduling information and everything else about the process that must be saved when the process is switched from the running to the ready or blocked state, so that it can be restarted later as if it had never been stopped. Typical fields of a process table entry:

Process management         Memory management           File management
Registers                  Pointer to text segment     Root directory
Program counter            Pointer to data segment     Working directory
Program status word        Pointer to stack segment    File descriptors
Stack pointer                                          User ID
Process state                                          Group ID
Priority
Scheduling parameters
Process ID
Parent process
Process group
Signals
Time when process started
CPU time used
Children's CPU time
Time of next alarm
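One way to picture such an entry is as a C structure grouping these fields. The sketch below uses invented names and deliberately simplified types; a real kernel's entry (e.g. Linux's task_struct) is far larger.

/* Minimal sketch of a process-table entry (PCB). Field names and types
   are invented for illustration. */
struct pcb {
    /* process management */
    int            pid;             /* process identifier              */
    int            parent_pid;
    int            state;           /* e.g. READY, RUNNING, WAITING    */
    int            priority;
    unsigned long  program_counter;
    unsigned long  stack_pointer;
    unsigned long  registers[16];   /* saved general-purpose registers */
    unsigned long  cpu_time_used;

    /* memory management */
    void          *text_segment;
    void          *data_segment;
    void          *stack_segment;

    /* file management */
    int            root_dir;        /* handle, simplified              */
    int            working_dir;
    int            open_files[16];  /* file descriptor table           */
    int            uid, gid;
};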

Each I/O device class is associated with a location (often near the bottom of memory) called the interrupt vector. It contains the address of the interrupt service procedure. Suppose that user process 3 is running when a disk interrupt occurs. User process 3's program counter, program status word, and possibly one or more registers are pushed onto the (current) stack by the interrupt hardware. The computer then jumps to the address specified in the disk interrupt vector. That is all the hardware does. From here on, it is up to the software, in particular the interrupt service procedure. Interrupt handling and scheduling are summarized below:
1. Hardware stacks program counter, etc.
2. Hardware loads new program counter from interrupt vector.
3. Assembly language procedure saves registers.
4. Assembly language procedure sets up new stack.
5. C interrupt service procedure runs (typically reads and buffers input).
6. Scheduler decides which process is to run next.
7. C procedure returns to the assembly code.
8. Assembly language procedure starts up new current process.

The points above list the skeleton of what the lowest level of the operating system does when an interrupt occurs.



Context Switching: Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, etc.

Fig: CPU switch from one process to another

Threads:

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time. The figure below illustrates the difference between a single-threaded process and a multithreaded process. Threads simply enable us to split up a program into logically separate pieces and have the pieces run independently of one another until they need to communicate. In a sense, threads are a further level of object orientation for multitasking systems.

Multithreading: Many software packages that run on modern desktop PCs are multithreaded. An application is implemented as a separate process with several threads of control. A web browser might have one thread to display images or text while another thread retrieves data from the network. A word processor may have a thread for displaying graphics, another thread for reading the characters entered by the user through the keyboard, and a third thread for performing spelling and grammar checking in the background.



Why Multithreading: In certain situations, a single application may be required to perform several similar tasks. For example, a web server accepts client requests for web pages, images, sound, graphics, etc. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous.

One solution is to create a new process for each request: when the server receives a request, it creates a separate process to service it. But this method is heavyweight. In fact, this process-creation method was common before threads became popular. Process creation is time consuming and resource intensive. If the new process performs the same task as the existing process, why incur all that overhead? It is generally more efficient for one process that contains multiple threads to serve the same purpose. With this approach the web server process is multithreaded: the server creates a separate thread that listens for client requests, and when a request is made, rather than creating another process, it creates a separate thread to service the request (a sketch of this idea follows below).

Benefits of Multi-threading:
Responsiveness: A multithreaded interactive application continues to run even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user.
Resource Sharing: By default, threads share the memory and the resources of the process to which they belong. This allows an application to have several different threads of activity within the same address space.
Economy: Allocating memory and resources for process creation is costly. Since threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. It is more time consuming to create and manage processes than threads.
Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may run in parallel on different processors. Multithreading on a multi-CPU machine increases concurrency.
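As a rough sketch of the thread-per-request idea described above (the handle_request function and the fixed five-request loop are illustrative assumptions, not a real server):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical handler: each client request gets its own thread. */
    static void *handle_request(void *arg)
    {
        int id = *(int *)arg;
        free(arg);
        printf("servicing request %d\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[5];

        /* Stand-in for a dispatcher loop that accepts client requests:
           rather than forking a whole process per request, it creates
           a lightweight thread for each one. */
        for (int i = 0; i < 5; i++) {
            int *req = malloc(sizeof *req);
            *req = i;
            pthread_create(&tids[i], NULL, handle_request, req);
        }
        for (int i = 0; i < 5; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }

(Compile with the -pthread flag on most Unix systems.)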



Process VS Thread:

Process:
- Program in execution.
- Heavy weight.
- Unit of allocation: resources, privileges, etc.
- Inter-process communication is expensive: it needs a context switch.
- Secure: one process cannot corrupt another process.
- Processes are typically independent; they interact only through system-provided inter-process communication mechanisms.
- Processes carry considerable state information.
- Processes have separate address spaces.

Thread:
- Basic unit of CPU utilization.
- Light weight.
- Unit of execution: PC (program counter), SP (stack pointer), registers.
- Inter-thread communication is cheap: threads can use process memory and may not need a context switch; context switching between threads in the same process is typically faster than context switching between processes.
- Not secure: a thread can write the memory used by another thread.
- Threads exist as subsets of a process; multiple threads within a process share state as well as memory and other resources.
- Threads share their address space.

Multi-Threading Model:
User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. Virtually all operating systems include kernel threads. Ultimately there must exist a relationship between user threads and kernel threads; we have three models for it.

1. Many-to-one model: maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors. Green Threads, a thread library available for Solaris, uses this model.

Fig: Many-to-one threading model

2. One-to-one model: maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread. Linux, along with the Windows family of operating systems, uses this model.

Fig: One-to-one threading model



3. Many-to-many model: multiplexes many user-level threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. The many-to-many model allows the user to create as many threads as desired, and the corresponding kernel threads can run in parallel on a multiprocessor.

Fig: Many-to-many threading model

Interprocess Communication: Processes frequently need to communicate with each other. For example, in a shell pipeline the output of the first process must be passed to the second process, and so on down the line. Thus there is a need for communication between processes, preferably in a well-structured way not using interrupts. IPC enables one application to control another application, and enables several applications to share the same data without interfering with one another. Inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC).

Co-operating Processes: A process is independent if it can't affect or be affected by another process. A process is co-operating if it can affect or be affected by other processes. Any process that shares data with other processes is called a co-operating process. There are many reasons for providing an environment for process co-operation:
1. Information sharing: Several users may be interested in accessing the same piece of information (for instance a shared file). We must allow concurrent access to such information.
2. Computation speedup: Break up tasks into sub-tasks.



3. Modularity: Construct a system in a modular fashion.
4. Convenience: Co-operating processes require IPC. There are two fundamental ways of IPC:
a. Shared Memory
b. Message Passing

Fig: Communication models. (a) Message passing. (b) Shared memory.

Shared Memory:


Here a region of memory that is shared by co-operating processes is established. Processes can exchange information by reading and writing data to the shared region. Shared memory allows maximum speed and convenience of communication, as it can be done at the speed of memory within the computer. System calls are required only to establish shared memory regions. Once shared memory is established, no assistance from the kernel is required; all accesses are treated as routine memory accesses.
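A minimal POSIX sketch of this setup (the region name "/demo_shm" and the 4096-byte size are arbitrary choices for illustration; error checks are omitted):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* System calls are needed only to set the region up... */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);                  /* size the region */
        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);

        /* ...after that, communication is ordinary memory access. */
        strcpy(region, "hello through shared memory");
        printf("%s\n", region);

        munmap(region, 4096);
        shm_unlink("/demo_shm");
        return 0;
    }

(A second process would shm_open and mmap the same name to read what the first one wrote; on some systems, link with -lrt.)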



Message Passing:

Communication takes place by means of messages exchanged between the co-operating processes. Message passing is useful for exchanging small amounts of data, since no conflicts need be avoided, and it is easier to implement than shared memory. It is slower than shared memory, however, as message passing systems are typically implemented using system calls, which require the more time-consuming task of kernel intervention.
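On Unix systems, a pipe is one common realization of message passing between related processes; a minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                    /* fd[0]: read end, fd[1]: write end */

        if (fork() == 0) {           /* child: the receiver */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }
        /* Parent: the sender. Note that every exchange goes through a
           system call, which is why this is slower than shared memory. */
        write(fd[1], "hello", strlen("hello"));
        wait(NULL);
        return 0;
    }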

Race Condition:

The situation where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when is called a race condition.

To see how interprocess communication works in practice, let us consider a simple but common example: a print spooler. When a process wants to print a file, it enters the file name in a special spooler directory. Another process, the printer daemon, periodically checks to see if there are any files to be printed, and if there are, it prints them and removes their names from the directory.

Imagine that our spooler directory has a large number of slots, numbered 0, 1, 2, ..., each one capable of holding a file name. Also imagine that there are two shared variables: out, which points to the next file to be printed, and in, which points to the next free slot in the directory. At a certain instant, slots 0 to 3 are empty (the files have already been printed) and slots 4 to 6 are full (with the names of files to be printed). More or less simultaneously, processes A and B decide they want to queue a file for printing, as shown in the figure. Process A reads in and stores the value, 7, in a local variable called next_free_slot. Just then a clock interrupt occurs and the CPU decides that process A has run long enough, so it switches to process B.

Process B also reads in, and also gets a 7, so it stores the name of its file in slot 7 and updates in to be an 8. Then it goes off and does other things. Eventually, process A runs again, starting from the place it left off last time. It looks at next_free_slot, finds a 7 there, and writes its file name in slot 7, erasing the name that process B just put there. Then it computes next_free_slot + 1, which is 8, and sets in to 8. The spooler directory is now internally consistent, so the printer daemon will not notice anything wrong, but process B will never receive any output.

Spooling: Simultaneous Peripheral Operations On-Line.

Another example of a race condition: when two or more concurrently running threads/processes access a shared data item or resource and the final result depends on the order of execution, we have a race condition. Suppose we have two threads A and B as shown below: thread A increases the shared variable count by 1 and thread B decreases count by 1.

If the current value of count is 10, both execution orders yield 10, because count is increased by 1 followed by decreased by 1 in the former case, and decreased by 1 followed by increased by 1 in the latter. However, if count is not protected by mutual exclusion, we might get different results. The statements count++ and count-- may be translated to machine instructions as shown below:

If "oth statements are not protected, the e2ecution of the instructions may "e interleaved due to conte2t s&itching( More precisely, &hile thread . is in the middle of e2ecuting CountFF, the system may s&itch . out and let thread / run( Similarly, &hile thread / is in the middle of e2ecuting Count44, the system may s&itch / out and let thread . run( Should this happen, &e have a pro"lem( ,he follo&ing ta"le sho&s the e2ecution details( 7or each thread, this ta"le sho&s the instruction e2ecuted and the content of its register( Bote that registers are part of a thread.s environment and different threads have different environments( Conse1uently, the modification of a register "y a thread only affects the thread itself and &ill not affect the registers of other threads( ,he last column sho&s the value of Count in memory( Suppose thread . is running( )fter thread . e2ecutes its 1O&8 instruction, a conte2t s&itch s&itches thread . out and s&itches thread / in( ,hus, thread / e2ecutes its three instructions, changing the value of Count to H( ,hen, the e2ecution is s&itched "ac+ to thread ., &hich continues &ith the remaining t&o instructions( Conse1uently, the value of Count is changed to //L



The following shows the execution flow of executing thread B followed by thread A; the result is 9!

This example shows that without mutual exclusion, access to a shared data item may generate different results if the execution order is altered. This is, of course, a race condition, and it arises because there is no mutual exclusion.
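This race is easy to reproduce with POSIX threads; in the following sketch the final value is rarely 0, even though the increments and decrements balance on paper:

    #include <pthread.h>
    #include <stdio.h>

    static long count = 0;              /* shared and unprotected */

    static void *inc(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            count++;                    /* load, add, store: not atomic */
        return NULL;
    }

    static void *dec(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            count--;
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, inc, NULL);
        pthread_create(&b, NULL, dec, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Interleaved loads and stores lose updates. */
        printf("count = %ld\n", count);
        return 0;
    }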

Avoiding Race Conditions:


Critical Section: To avoid race conditions we need mutual exclusion. Mutual exclusion is some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing. The difficulty above in the printer spooler occurs because process B started using one of the shared variables before process A was finished with it. That part of the program where the shared memory is accessed is called the critical region or critical section. If we could arrange matters such that no two processes were ever in their critical regions at the same time, we could avoid race conditions. Although this requirement avoids race conditions, it is not sufficient for having parallel processes cooperate correctly and efficiently using shared data. A solution to the critical section problem must satisfy the following rules for avoiding race conditions:
1. No two processes may be simultaneously inside their critical regions (mutual exclusion).
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other processes.
4. No process should have to wait forever to enter its critical region.

Fig: Mutual exclusion using critical regions

Techniques for Avoiding Race Conditions:


1. Disabling Interrupts
2. Lock Variables
3. Strict Alternation
4. Peterson's Solution
5. TSL Instruction
6. Sleep and Wakeup
7. Semaphores
8. Monitors
9. Message Passing

1. Disabling Interrupts: The simplest solution is to have each process disable all interrupts just after entering its critical region and re-enable them just before leaving it. With interrupts disabled, no clock interrupts can occur.

The CPU is only switched from process to process as a result of clock or other interrupts, after all, and with interrupts turned off the CPU will not be switched to another process. Thus, once a process has disabled interrupts, it can examine and update the shared memory without fear that any other process will intervene.

Disadvantages:
1. It is unattractive because it is unwise to give user processes the power to turn off interrupts. Suppose that one of them did, and then never turned them on again?
2. Furthermore, if the system is a multiprocessor with two or more CPUs, disabling interrupts affects only the CPU that executed the disable instruction. The other ones will continue running and can access the shared memory.

Advantages: It is frequently convenient for the kernel itself to disable interrupts for a few instructions while it is updating variables or lists. If an interrupt occurred while, for example, the list of ready processes was in an inconsistent state, race conditions could occur.

2. Lock Variables

Consider a single, shared (lock) variable, initially 0. When a process wants to enter its critical region, it first tests the lock. If the lock is 0, the process sets it to 1 and enters the critical region. If the lock is already 1, the process just waits until it becomes 0. Thus, a 0 means that no process is in its critical region, and a 1 means that some process is in its critical region.

Drawbacks: Unfortunately, this idea contains exactly the same fatal flaw that we saw in the spooler directory. Suppose that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to 1, and two processes will be in their critical regions at the same time.
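The flaw is easiest to see in code; this sketch is deliberately broken, to show where the race lives:

    int lock = 0;                 /* shared between the processes */

    void enter_region(void)
    {
        while (lock != 0)
            ;                     /* wait until the lock is free */
        /* A context switch right here lets another process also see
           lock == 0, so both set it to 1 and both enter the critical
           region: the test and the set are not one atomic action. */
        lock = 1;
    }

    void leave_region(void)
    {
        lock = 0;
    }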

3. Strict Alternation:

    while (TRUE) {                      /* (a) Process 0 */
        while (turn != 0) /* loop */ ;
        critical_region();
        turn = 1;
        noncritical_region();
    }

    while (TRUE) {                      /* (b) Process 1 */
        while (turn != 1) /* loop */ ;
        critical_region();
        turn = 0;
        noncritical_region();
    }

A proposed solution to the critical region problem. (a) Process 0. (b) Process 1. In both cases, be sure to note the semicolons terminating the while statements.

The integer variable turn is initially 0. It keeps track of whose turn it is to enter the critical region and examine or update the shared memory. Initially, process 0 inspects turn, finds it to be 0, and enters its critical region. Process 1 also finds it to be 0 and therefore sits in a tight loop continually testing turn to see when it becomes 1. Continuously testing a variable until some value appears is called busy waiting. It should usually be avoided, since it wastes CPU time. Only when there is a reasonable expectation that the wait will be short is busy waiting used. A lock that uses busy waiting is called a spin lock. When process 0 leaves the critical region, it sets turn to 1, to allow process 1 to enter its critical region. This way no two processes can enter the critical region simultaneously.

Drawbacks: Taking turns is not a good idea when one of the processes is much slower than the other, because this scheme requires that the two processes strictly alternate in entering their critical regions. Example: process 0 finishes its critical region and sets turn to 1 to allow process 1 to enter. Suppose process 1 finishes its critical region quickly, so both processes are in their noncritical regions with turn set to 0. Process 0 executes its whole loop quickly, exiting its critical region and setting turn to 1. At this point turn is 1 and both processes are executing in their noncritical regions. Suddenly, process 0 finishes its noncritical region and goes back to the top of its loop. Unfortunately, it is not permitted to enter its critical region now, since turn is 1 and process 1 is busy with its noncritical region. This situation violates condition 3 set above: no process running outside its critical region may block another process.

4. Peterson's Solution:

    #define FALSE 0
    #define TRUE  1
    #define N     2                     /* number of processes */

    int turn;                           /* whose turn is it? */
    int interested[N];                  /* all values initially 0 (FALSE) */

    void enter_region(int process)      /* process is 0 or 1 */
    {
        int other;                      /* number of the other process */

        other = 1 - process;            /* the opposite of process */
        interested[process] = TRUE;     /* show that you are interested */
        turn = process;                 /* set flag */
        while (turn == process && interested[other] == TRUE)
            ;                           /* null statement */
    }

    void leave_region(int process)      /* process: who is leaving */
    {
        interested[process] = FALSE;    /* indicate departure from critical region */
    }

Peterson's solution for achieving mutual exclusion.

Initially neither process is in its critical region. Now process 0 calls enter_region. It indicates its interest by setting its array element and sets turn to 0. Since process 1 is not interested, enter_region returns immediately. If process 1 now calls enter_region, it will hang there until interested[0] goes to FALSE, an event that only happens when process 0 calls leave_region to exit the critical region.

Now consider the case that both processes call enter_region almost simultaneously. Both will store their process number in turn. Whichever store is done last is the one that counts; the first one is lost. Suppose that process 1 stores last, so turn is 1. When both processes come to the while statement, process 0 executes it zero times and enters its critical region. Process 1 loops and does not enter its critical region.

5. The TSL Instruction: TSL RX,LOCK (Test and Set Lock) works as follows: it reads the contents of the memory word LOCK into register RX and then stores a nonzero value at the memory address LOCK. The operations of reading the word and storing into it are guaranteed to be indivisible: no other processor can access the memory word until the instruction is finished. The CPU executing the TSL instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done.

    enter_region:
        TSL REGISTER,LOCK   | copy LOCK to register and set LOCK to 1
        CMP REGISTER,#0     | was LOCK zero?
        JNE enter_region    | if it was nonzero, LOCK was set, so loop
        RET                 | return to caller; critical region entered

    leave_region:
        MOVE LOCK,#0        | store a 0 in LOCK
        RET                 | return to caller

One solution to the critical region problem is now straightforward. Before entering its critical region, a process calls enter_region, which does busy waiting until the lock is free; then it acquires the lock and returns. After the critical region the process calls leave_region, which stores a 0 in LOCK. As with all solutions based on critical regions, the processes must call enter_region and leave_region at the correct times for the method to work. If a process cheats, the mutual exclusion will fail.
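The same test-and-set idea is available portably through C11 atomics; a minimal spin-lock sketch:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void enter_region(void)
    {
        /* atomic_flag_test_and_set is an indivisible read-modify-write,
           like TSL: it sets the flag and returns its previous value. */
        while (atomic_flag_test_and_set(&lock))
            ;                         /* busy wait (spin lock) */
    }

    void leave_region(void)
    {
        atomic_flag_clear(&lock);     /* store a 0 in the lock */
    }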

Problems with Mutual Exclusion by Busy Waiting: The above techniques achieve mutual exclusion using busy waiting: while one process is busy updating shared memory in its critical region, no other process will enter its critical region and cause trouble. A process that wants to enter its critical region simply checks whether entry is allowed, and if it is not, sits in a tight loop waiting for permission.
1. This approach wastes CPU time.
2. There can be an unexpected problem called the priority inversion problem.



Priority Inversion Problem: Consider a computer with two processes: H, with high priority, and L, with low priority, which share a critical region. The scheduling rules are such that H is run whenever it is in the ready state. At a certain moment, with L in its critical region, H becomes ready to run (e.g., an I/O operation completes). H now begins busy waiting, but since L is never scheduled while H is running, L never gets the chance to leave its critical region, so H loops forever. This situation is sometimes referred to as the priority inversion problem. Let us now look at some IPC primitives that block instead of wasting CPU time when they are not allowed to enter their critical regions. Using blocking constructs greatly improves CPU utilization.

Sleep and Wakeup: Sleep and wakeup are system calls that block a process instead of wasting CPU time when it is not allowed to enter its critical region. sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up. The wakeup call has one parameter: the process to be awakened.

Example using the sleep and wakeup primitives:
Producer-consumer problem (bounded buffer): Two processes share a common, fixed-size buffer. One of them, the producer, puts information into the buffer, and the other one, the consumer, takes it out.

Trouble arises when:
1. The producer wants to put a new item in the buffer, but the buffer is already full. Solution: the producer goes to sleep, to be awakened when the consumer has removed data.
2. The consumer wants to remove data from the buffer, but the buffer is already empty. Solution: the consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.



    #define N 100                              /* number of slots in the buffer */
    int count = 0;                             /* number of items in the buffer */

    void producer(void)
    {
        int item;

        while (TRUE) {                         /* repeat forever */
            item = produce_item();             /* generate next item */
            if (count == N) sleep();           /* if buffer is full, go to sleep */
            insert_item(item);                 /* put item in buffer */
            count = count + 1;                 /* increment count of items in buffer */
            if (count == 1) wakeup(consumer);  /* was buffer empty? */
        }
    }

    void consumer(void)
    {
        int item;

        while (TRUE) {                         /* repeat forever */
            if (count == 0) sleep();           /* if buffer is empty, go to sleep */
            item = remove_item();              /* take item out of buffer */
            count = count - 1;                 /* decrement count of items in buffer */
            if (count == N - 1) wakeup(producer); /* was buffer full? */
            consume_item(item);                /* consume the item */
        }
    }

Fig: The producer-consumer problem with a fatal race condition.

N = size of the buffer; count is a variable that keeps track of the number of items in the buffer.
Producer's code: first test to see if count is N. If it is, the producer goes to sleep; if it is not, the producer adds an item and increments count.
Consumer's code: similar to the producer's. First test count to see if it is 0. If it is, go to sleep; if it is nonzero, remove an item and decrement the counter.
Each process also tests to see if the other should be awakened and, if so, wakes it up. This approach sounds simple enough, but it leads to the same kind of race condition we saw in the spooler directory:
1. The buffer is empty and the consumer has just read count to see if it is 0.
2. At that instant, the scheduler decides to stop running the consumer temporarily and start running the producer. (The consumer is interrupted and the producer resumed.)

3. The producer creates an item, puts it into the buffer, and increases count.
4. Because the buffer was empty prior to the last addition (count was just 0), the producer tries to wake up the consumer.
5. Unfortunately, the consumer is not yet logically asleep, so the wakeup signal is lost.
6. When the consumer next runs, it will test the value of count it previously read, find it to be 0, and go to sleep.
7. Sooner or later the producer will fill up the buffer and also go to sleep. Both will sleep forever.

The essence of the problem here is that a wakeup sent to a process that is not (yet) sleeping is lost. As a temporary fix we can use a wakeup waiting bit to prevent the wakeup signal from getting lost, but this does not generalize to more processes.

Semaphore: In computer science, a semaphore is a protected variable or abstract data type that constitutes a classic method of controlling access by several processes to a common resource in a parallel programming environment. It is a synchronization tool that does not require busy waiting. A semaphore is a special kind of integer variable which can be initialized and can be accessed only through two atomic operations, P and V. If S is the semaphore variable, then:

P operation: wait for the semaphore to become positive, then decrement it.
    P(S): while (S <= 0) do no-op;
          S = S - 1;

V operation: increment the semaphore by 1.
    V(S): S = S + 1;

Atomic operations: When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore's value. In addition, in the case of the P(S) operation, the testing of the integer value of S (S <= 0) and its possible modification (S = S - 1) must also be executed without interruption.

Semaphore operations take their names from Dutch words: P (also called Down or Wait) stands for proberen, "to test"; V (also called Up or Signal) stands for verhogen, "to increase".

wait(sem): decrement the semaphore value; if it becomes negative, suspend the process and place it in a queue. (Also referred to as P() or down in the literature.)
signal(sem): increment the semaphore value; allow the first process in the queue to continue. (Also referred to as V() or up in the literature.)



Counting semaphore: the integer value can range over an unrestricted domain.
Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement. Also known as mutex locks. It provides mutual exclusion:

    semaphore S;     /* initialized to 1 */
    wait(S);
        /* critical section */
    signal(S);

Semaphore implementation without busy waiting:

Implementation of wait:
    wait(S) {
        value--;
        if (value < 0) {
            /* add this process to the waiting queue */
            block();
        }
    }

Implementation of signal:
    signal(S) {
        value++;
        if (value <= 0) {
            /* remove a process P from the waiting queue */
            wakeup(P);
        }
    }

    #define N 100                   /* number of slots in the buffer */
    typedef int semaphore;          /* semaphores are a special kind of int */
    semaphore mutex = 1;            /* controls access to critical region */
    semaphore empty = N;            /* counts empty buffer slots */
    semaphore full = 0;             /* counts full buffer slots */

    void producer(void)
    {
        int item;

        while (TRUE) {              /* TRUE is the constant 1 */
            item = produce_item();  /* generate something to put in buffer */
            down(&empty);           /* decrement empty count */
            down(&mutex);           /* enter critical region */
            insert_item(item);      /* put new item in buffer */
            up(&mutex);             /* leave critical region */
            up(&full);              /* increment count of full slots */
        }
    }

    void consumer(void)
    {
        int item;

        while (TRUE) {              /* infinite loop */
            down(&full);            /* decrement full count */
            down(&mutex);           /* enter critical region */
            item = remove_item();   /* take item from buffer */
            up(&mutex);             /* leave critical region */
            up(&empty);             /* increment count of empty slots */
            consume_item(item);     /* do something with the item */
        }
    }

Fig: The producer-consumer problem using semaphores.

This solution uses three semaphores:
1. full: for counting the number of slots that are full, initially 0.
2. empty: for counting the number of slots that are empty, initially equal to the number of slots in the buffer.
3. mutex: to make sure that the producer and consumer do not access the buffer at the same time, initially 1.

Here semaphores are used in two different ways:
1. For mutual exclusion: the mutex semaphore guarantees that only one process at a time will be reading or writing the buffer and the associated variables.
2. For synchronization: the full and empty semaphores are needed to guarantee that certain event sequences do or do not occur. In this case, they ensure that the producer stops running when the buffer is full and the consumer stops running when it is empty.

The earlier definition of the semaphore suffers from the busy-wait problem. To overcome the need for busy waiting, we can modify the definition of the P and V operations. When a process executes the P operation and finds that the semaphore's value is not positive, it must wait; however, rather than busy waiting, the process can block itself. The block operation places the process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state.

Advantages of semaphores:
- Processes do not busy-wait while waiting for resources. While waiting, they are in a "suspended" state, allowing the CPU to perform other work.
- They work on (shared memory) multiprocessor systems.
- The user controls synchronization.

Disadvantages of semaphores:
- They can only be invoked by processes, not interrupt service routines, because interrupt routines cannot block.
- The user controls synchronization, and could get it wrong.
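Real systems provide these operations directly; the producer-consumer solution above maps almost line for line onto POSIX semaphores. A minimal runnable sketch (the buffer size and item count are arbitrary choices):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 4
    int buffer[N], in = 0, out = 0;         /* simple circular buffer */
    sem_t empty, full, mutex;

    static void *producer(void *arg)
    {
        for (int item = 0; item < 8; item++) {
            sem_wait(&empty);               /* down(empty) */
            sem_wait(&mutex);               /* enter critical region */
            buffer[in] = item; in = (in + 1) % N;
            sem_post(&mutex);               /* leave critical region */
            sem_post(&full);                /* up(full) */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 8; i++) {
            sem_wait(&full);
            sem_wait(&mutex);
            int item = buffer[out]; out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty, 0, N);             /* N empty slots initially */
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

(Compile with -pthread; sem_init is the POSIX unnamed-semaphore interface as found on Linux.)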



Monitors: In concurrent programming, a monitor is an object or module intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion: at each point in time, at most one thread may be executing any of its methods. This mutual exclusion greatly simplifies reasoning about the implementation of monitors compared to reasoning about parallel code that updates a data structure. Monitors also provide a mechanism for threads to temporarily give up exclusive access, in order to wait for some condition to be met, before regaining exclusive access and resuming their task. Monitors also have a mechanism for signaling other threads that such conditions have been met.

Fig: Queue of waiting processes trying to enter the monitor

With semaphores IPC seems easy, but suppose that the two downs in the producer's code were reversed in order, so mutex was decremented before empty instead of after it. If the buffer were completely full, the producer would block with mutex set to 0. Consequently, the next time the consumer tried to access the buffer, it would do a down on mutex, now 0, and block too. Both processes would stay blocked forever and no more work would ever be done. This unfortunate situation is called a deadlock.

A monitor is a higher-level synchronization primitive: a collection of procedures, variables, and data structures that are all grouped together in a special kind of module or package. Processes may call the procedures in a monitor whenever they want to, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor. This rule, which is common in modern object-oriented languages such as Java, was relatively unusual for its time. The figure below illustrates a monitor written in an imaginary language, Pidgin Pascal.



    monitor example
        integer i;
        condition c;

        procedure producer(x);
        ...
        end;

        procedure consumer(x);
        ...
        end;
    end monitor;

Fig: A monitor
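C has no monitor construct, but the same discipline is commonly approximated with a mutex plus a condition variable; a rough pthread sketch:

    #include <pthread.h>

    /* A monitor-like counter: every access goes through functions
       that hold the mutex, so the "methods" are mutually exclusive. */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
    static int count = 0;

    void deposit(void)
    {
        pthread_mutex_lock(&m);              /* enter the "monitor" */
        count++;
        pthread_cond_signal(&nonzero);       /* condition may now hold */
        pthread_mutex_unlock(&m);            /* leave the "monitor" */
    }

    void withdraw(void)
    {
        pthread_mutex_lock(&m);
        while (count == 0)                   /* give up exclusive access */
            pthread_cond_wait(&nonzero, &m); /* until another thread signals */
        count--;
        pthread_mutex_unlock(&m);
    }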

Message Passing: Message passing, in computer science, is a form of communication used in parallel computing, object-oriented programming, and interprocess communication. In this model, processes or objects can send and receive messages (comprising zero or more bytes, complex data structures, or even segments of code) to other processes. By waiting for messages, processes can also synchronize. Message passing is a method of communication where messages are sent from a sender to one or more recipients. Forms of messages include (remote) method invocation, signals, and data packets. When designing a message passing system, several choices are made:

- Whether messages are transferred reliably.
- Whether messages are guaranteed to be delivered in order.
- Whether messages are passed one-to-one, one-to-many (multicasting or broadcasting), or many-to-one (client-server).
- Whether communication is synchronous or asynchronous.

This method of interprocess communication uses two primitives, send and receive, which, like semaphores and unlike monitors, are system calls rather than language constructs. As such, they can easily be put into library procedures, such as:

    send(destination, &message);
    receive(source, &message);

Synchronous message passing systems require the sender and receiver to wait for each other to transfer the message. Asynchronous message passing systems deliver a message from sender to receiver without waiting for the receiver to be ready.

Classical IPC Problems


1. Dining Philosophers Problem
2. The Readers and Writers Problem
3. The Sleeping Barber Problem

1. Dining philosophers problem: There are N philosophers sitting around a circular table, eating spaghetti and discussing philosophy. The problem is that each philosopher needs 2 forks to eat, and there are only N forks, one between each pair of philosophers. Design an algorithm that the philosophers can follow that ensures that none starves as long as each philosopher eventually stops eating, and such that the maximum number of philosophers can eat at once.
- Philosophers eat and think.
- Eating needs 2 forks.
- A philosopher picks up one fork at a time.
- How do we prevent deadlock?

,he pro"lem &as designed to illustrate the pro"lem of avoiding deadloc+, a system state in &hich no progress is possi"le( One idea is to instruct each philosopher to "ehave as follo&s: thin+ until the left for+ is availa"le? &hen it is, pic+ it up thin+ until the right for+ is availa"le? &hen it is, pic+ it up eat put the left for+ do&n put the right for+ do&n repeat from the start ,his solution is incorrect: it allo&s the system to reach deadloc+( Suppose that all five philosophers ta+e their left for+s simultaneously( Bone &ill "e a"le to ta+e their right for+s, and there &ill "e a
Page:3, Compiled 9y: daya

deadloc+( *e could modify the program so that after ta+ing the left for+, the program chec+s to see if the right for+ is availa"le( If it is not, the philosopher puts do&n the left one, &aits for some time, and then repeats the &hole process( ,his proposal too, fails, although for a different reason( *ith a little "it of "ad luc+, all the philosophers could start the algorithm simultaneously, pic+ing up their left for+s, seeing that their right for+s &ere not availa"le, putting do&n their left for+s, &aiting, pic+ing up their left for+s again simultaneously, and so on, forever( ) situation li+e this, in &hich all the programs continue to run indefinitely "ut fail to ma+e any progress is called starvation ,he solution presented "elo& is deadloc+#free and allo&s the ma2imum parallelism for an ar"itrary num"er of philosophers( It uses an array, state, to +eep trac+ of &hether a philosopher is eating, thin+ing, or hungry 6trying to ac1uire for+s'( ) philosopher may move into eating state only if neither neigh"or is eating( Philosopher i.s neigh"ors are defined "y the macros L 7, and %II0,( In other &ords, if i is 4, L 7, is / and %II0, is 5( Solution: Mdefine B E $N num"er of philosophers N$ Mdefine L 7, 6iCB#/'XB $N num"er of i.s left neigh"or N$ Mdefine %II0, 6iC/'XB $N num"er of i.s right neigh"or N$ Mdefine ,0IB8IBI < $N philosopher is thin+ing N$ Mdefine 0!BI%Y / $N philosopher is trying to get for+s N$ Mdefine ),IBI 4 $N philosopher is eating N$ typedef int semaphore? $N semaphores are a special +ind of int N$ int statePBQ? $N array to +eep trac+ of everyone.s state N$ semaphore mute2 T /? $N mutual e2clusion for critical regions N$ semaphore sPBQ? $N one semaphore per philosopher N$ void philosopher6int i' $N i: philosopher num"er, from < to B/ N$ S &hile 6,%! 'S $N repeat forever N$ thin+6'? $N philosopher is thin+ing N$ ta+eRfor+s6i'? $N ac1uire t&o for+s or "loc+ N$ eat6'? $N yum#yum, spaghetti N$ putRfor+s6i'? $N put "oth for+s "ac+ on ta"le N$ U U void ta+eRfor+s6int i' $N i: philosopher num"er, from < to B/ N$ S do&n6Kmute2'? $N enter critical region N$ statePiQ T 0!BI%Y? $N record fact that philosopher i is hungry N$ test6i'? $N try to ac1uire 4 for+s N$ up6Kmute2'? $N e2it critical region N$ do&n6KsPiQ'? $N "loc+ if for+s &ere not ac1uired N$ U void putRfor+s6i' $N i: philosopher num"er, from < to B/ N$ S do&n6Kmute2'? $N enter critical region N$ statePiQ T ,0IB8IBI? $N philosopher has finished eating N$
Page:3. Compiled 9y: daya

test6L 7,'? test6%II0,'? up6Kmute2'?

$N see if left neigh"or can no& eat N$ $N see if right neigh"or can no& eat N$ $N e2it critical region N$

U void test6i' $N i: philosopher num"er, from < to B/N $ S if 6statePiQ TT 0!BI%Y KK statePL 7,Q LT ),IBI KK stateP%II0,Q LT ),IBI' S statePiQ T ),IBI? up6KsPiQ'? U U

Readers-Writers Problem:


The dining philosophers problem is useful for modeling processes that are competing for exclusive access to a limited number of resources, such as I/O devices. Another famous problem is the readers and writers problem, which models access to a database (Courtois et al., 1971). Imagine, for example, an airline reservation system, with many competing processes wishing to read and write it. It is acceptable to have multiple processes reading the database at the same time, but if one process is updating (writing) the database, no other process may have access to the database, not even a reader. The question is: how do you program the readers and the writers? One solution is shown below.

Solution to the readers-writers problem:

    typedef int semaphore;          /* use your imagination */
    semaphore mutex = 1;            /* controls access to 'rc' */
    semaphore db = 1;               /* controls access to the database */
    int rc = 0;                     /* # of processes reading or wanting to */

    void reader(void)
    {
        while (TRUE) {              /* repeat forever */
            down(&mutex);           /* get exclusive access to 'rc' */
            rc = rc + 1;            /* one reader more now */
            if (rc == 1) down(&db); /* if this is the first reader ... */
            up(&mutex);             /* release exclusive access to 'rc' */
            read_data_base();       /* access the data */
            down(&mutex);           /* get exclusive access to 'rc' */
            rc = rc - 1;            /* one reader fewer now */
            if (rc == 0) up(&db);   /* if this is the last reader ... */
            up(&mutex);             /* release exclusive access to 'rc' */
            use_data_read();        /* noncritical region */
        }
    }



    void writer(void)
    {
        while (TRUE) {              /* repeat forever */
            think_up_data();        /* noncritical region */
            down(&db);              /* get exclusive access */
            write_data_base();      /* update the data */
            up(&db);                /* release exclusive access */
        }
    }

In this solution, the first reader to get access to the database does a down on the semaphore db. Subsequent readers merely have to increment a counter, rc. As readers leave, they decrement the counter, and the last one out does an up on the semaphore, allowing a blocked writer, if there is one, to get in.

Sleeping Barber Problem


Customers arrive at a barber shop; if there are no customers, the barber sleeps in his chair. If the barber is asleep, then an arriving customer must wake him up.

,he analogy is "ased upon a hypothetical "ar"er shop &ith one "ar"er( ,he "ar"er has one "ar"er chair and a &aiting room &ith a num"er of chairs in it( *hen the "ar"er finishes cutting a customer.s hair, he dismisses the customer and then goes to the &aiting room to see if there are other customers &aiting( If there are, he "rings one of them "ac+ to the chair and cuts his hair( If there are no other customers &aiting, he returns to his chair and sleeps in it( ach customer, &hen he arrives, loo+s to see &hat the "ar"er is doing( If the "ar"er is sleeping, then the customer &a+es him up and sits in the chair( If the "ar"er is cutting hair, then the customer goes to the &aiting room( If there is a free chair in the &aiting room, the customer sits in it and &aits his turn( If there is no free chair, then the customer leaves( 9ased on a naive analysis, the a"ove description should ensure that the shop functions correctly, &ith the "ar"er cutting the hair of anyone &ho arrives until there are no more customers, and then sleeping until the ne2t customer arrives( In practice, there are a
Page:51 Compiled 9y: daya

num"er of pro"lems that can occur that are illustrative of general scheduling pro"lems( ,he pro"lems are all related to the fact that the actions "y "oth the "ar"er and the customer 6chec+ing the &aiting room, entering the shop, ta+ing a &aiting room chair, etc(' all ta+e an un+no&n amount of time( 7or e2ample, a customer may arrive and o"serve that the "ar"er is cutting hair, so he goes to the &aiting room( *hile he is on his &ay, the "ar"er finishes the haircut he is doing and goes to chec+ the &aiting room( Since there is no one there 6the customer not having arrived yet', he goes "ac+ to his chair and sleeps( ,he "ar"er is no& &aiting for a customer and the customer is &aiting for the "ar"er( In another e2ample, t&o customers may arrive at the same time &hen there happens to "e a single seat in the &aiting room( ,hey o"serve that the "ar"er is cutting hair, go to the &aiting room, and "oth attempt to occupy the single chair( Solution: Many possi"le solutions are availa"le( ,he +ey element of each is a mute2, &hich ensures that only one of the participants can change state at once( ,he "ar"er must ac1uire this mute2 e2clusion "efore chec+ing for customers and release it &hen he "egins either to sleep or cut hair( ) customer must ac1uire it "efore entering the shop and release it once he is sitting in either a &aiting room chair or the "ar"er chair( ,his eliminates "oth of the pro"lems mentioned in the previous section( ) num"er of semaphores are also re1uired to indicate the state of the system( 7or e2ample, one might store the num"er of people in the &aiting room( ) multiple sleeping "ar"ers pro"lem has the additional comple2ity of coordinating several "ar"ers among the &aiting customers

"eadloc):

Resources
Resources are the passive entities needed by processes to do their work. A resource can be a hardware device (e.g., disk space) or a piece of information (a locked record in a database). Examples of resources include CPU time, disk space, memory, etc. There are two types of resources:

Preemptable: A preemptable resource is one that can be taken away from the process owning it with no ill effect. Memory is an example of a preemptable resource.

Non-preemptable: A non-preemptable resource, in contrast, is one that cannot be taken away from its current owner without causing the computation to fail. Examples are CD recorders and printers. If a process has begun to burn a CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result in a garbled CD; CD recorders are not preemptable at any arbitrary moment. Read-only files are typically sharable; printers are not sharable while printing.

One of the major tasks of an operating system is to manage resources. Deadlocks may occur when processes have been granted exclusive access to resources. A resource may be a hardware device (e.g., a tape drive) or a piece of information (a locked record in a database). In general, deadlocks involve non-preemptable resources. The sequence of events required to use a resource is:



1. Request the resource.
2. Use the resource.
3. Release the resource.

What is Deadlock? In computer science, a set of processes is said to be in deadlock if each process in the set is waiting for an event that only another process in the set can cause. Since all the processes are waiting, none of them will ever cause any of the events that could wake up any of the other members of the set, and all the processes continue to wait forever.

Fig: A deadlock cycle. Process A holds Res 1 and waits for Res 2, while Process B holds Res 2 and waits for Res 1.

Example 1: Two processes A and B each want to record a scanned document on a CD. A requests permission to use the scanner and is granted it. B is programmed differently and requests the CD recorder first, and is also granted it. Now A asks for the CD recorder, but the request is denied until B releases it. Unfortunately, instead of releasing the CD recorder, B asks for the scanner. At this point both processes are blocked and will remain so forever. This situation is called deadlock.

Example 2: Bridge Crossing Example:



- Each segment of road can be viewed as a resource.
- A car must own the segment under it and must acquire the segment it is moving into.
- For the bridge: a car must acquire both halves; traffic can flow in only one direction at a time.
- The problem occurs when two cars heading in opposite directions are on the bridge: each has acquired one segment and needs the next.
- If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back); several cars may have to be backed up.
- Starvation is also possible: if east-going traffic is really fast, no one ever goes west.

Starvation vs. Deadlock:
Starvation: a thread waits indefinitely. Example: a low-priority thread waiting for resources constantly in use by high-priority threads.
Deadlock: circular waiting for resources. Example: thread A owns Res 1 and is waiting for Res 2, while thread B owns Res 2 and is waiting for Res 1.
Deadlock implies starvation, but not vice versa. Starvation can end (though it doesn't have to); deadlock can't end without external intervention.

Conditions for Deadlock:


Coffman et al. (1971) showed that four conditions must hold for there to be a deadlock:
1. Mutual exclusion: only one process at a time can use a resource.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
3. No preemption: resources are released only voluntarily by the process holding the resource, after the process is finished with it.
4. Circular wait: there exists a set {P1, ..., Pn} of waiting processes such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, and so on, with Pn waiting for a resource held by P1.
All four of these conditions must be present for a deadlock to occur. If one or more of these conditions is absent, no deadlock is possible.

Deadlock Modeling: Deadlocks can be described more precisely in terms of resource allocation graphs. A resource allocation graph is a set of vertices V and a set of edges E. V is partitioned into two types:

P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
Request edge: a directed edge Pi -> Rj.
Assignment edge: a directed edge Rj -> Pi.

Fig: Resource allocation graph symbols. (a) P1 is holding R1. (b) P1 requests R1.
Fig: (a) Example of a resource allocation graph. (b) Resource allocation graph with a deadlock. (c) Resource allocation graph with a cycle but no deadlock.



Basic facts: If the graph contains no cycles, there is no deadlock. If the graph contains a cycle, then: if there is only one instance per resource type, there is a deadlock; if there are several instances per resource type, there is the possibility of deadlock.

Methods for Handling Deadlock:
- Allow the system to enter a deadlock and then recover. This requires a deadlock detection algorithm and some technique for forcibly preempting resources and/or terminating tasks.
- Ensure that the system will never enter a deadlock. This requires monitoring all lock acquisitions and selectively denying those that might lead to deadlock.
- Ignore the problem and pretend that deadlocks never occur in the system. This is used by most operating systems, including UNIX.

Deadlock Prevention: To prevent the system from deadlocks, one of the four conditions discussed above that may create a deadlock should be discarded. The methods for those conditions are as follows:

1. Mutual exclusion: In general, we do not have systems with all resources being sharable. Some resources, like printers and processing units, are non-sharable. So it is not possible to prevent deadlocks by denying mutual exclusion.

2. Hold and wait: One protocol to ensure that the hold-and-wait condition never occurs says each process must request and get all of its resources before it begins execution. Another protocol is "Each process can request resources only when it does not occupy any resources." The second protocol is better. However, both protocols cause low resource utilization and starvation: many resources are allocated but most of them are unused for a long period of time, and a process that requests several commonly used resources causes many others to wait indefinitely.

3. No preemption: One protocol is "If a process that is holding some resources requests another resource and that resource cannot be allocated to it, then it must release all resources that are currently allocated to it." Another protocol is "When a process requests some resources, if they are available, allocate them. If a requested resource is not available, then we check whether it is being used or is allocated to some other process waiting for other resources. If that resource is not being used, then the OS preempts it from the waiting process and allocates it to the requesting process. If that resource is in use, the requesting process must wait." This protocol can be applied to resources whose states can easily be saved and restored (registers, memory space). It cannot be applied to resources like printers.



4. Circular wait: One protocol to ensure that the circular wait condition never holds is "Impose a linear ordering of all resource types." Then, each process can only request resources in an increasing order of priority. For example, set priorities r1 = 1, r2 = 2, r3 = 3, and r4 = 4. With these priorities, if process P wants to use r1 and r3, it should first request r1, then r3. Another protocol is "Whenever a process requests a resource rj, it must have released all resources rk with priority(rk) >= priority(rj)."

Deadlock Avoidance: Given some additional information on how each process will request resources, it is possible to construct an algorithm that will avoid deadlock states. The algorithm dynamically examines the resource allocation operations to ensure that there will never be a circular wait on resources. There are two deadlock avoidance algorithms:
- the resource-allocation graph algorithm, and
- the Banker's algorithm.
The resource-allocation graph algorithm is applicable only when we have a single instance of each resource type. A claim edge (dotted edge) is like a future request edge: when a process requests a resource, its claim edge is converted to a request edge; when a process releases a resource, the assignment edge is converted back to a claim edge.

Banker's Algorithm: The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra. The resource allocation state is defined by the number of available and allocated resources and the maximum demand of the processes. When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a safe sequence of all processes: a sequence <P1, P2, ..., Pn> is safe for the current allocation state if, for each Pi, the resources which Pi can still request can be satisfied by the currently available resources plus the resources held by all of the Pj, with j < i. If the system is in a safe state, there can be no deadlock. If the system is in an unsafe state, there is the possibility of deadlock. A state is safe if the system can allocate resources to each process in some order while avoiding a deadlock; a deadlock state is an unsafe state. In the analogy: customers = processes, units = resources (say, tape drives), banker = OS.

The Banker's algorithm does a simple task: if granting the request leads to an unsafe state, the request is denied; if granting the request leads to a safe state, the request is carried out.

Basic facts: If a system is in a safe state, there are no deadlocks. If a system is in an unsafe state, there is the possibility of deadlock. Avoidance means ensuring that the system will never enter an unsafe state.

Banker's Algorithm for a single resource:

Customer   Used   Max
A          0      6
B          0      5
C          0      4
D          0      7

Available units: 10

In the figure above we see four customers, each of whom has been granted a certain number of credit units (e.g., 1 unit = 1K dollars). The banker reserved only 10 units rather than 22 units to service them, since not all customers need their maximum credit immediately. At a certain moment the situation becomes:

Customer   Used   Max
A          1      6
B          1      5
C          2      4
D          4      7

Available units: 2

Safe state: With 2 units left, the banker can delay any requests except C's, thus letting C finish and release all four of his resources. With four units in hand, the banker can let either D or B have the necessary units, and so on.

Unsafe state: B requests one more unit and is granted it:

Customer   Used   Max
A          1      6
B          2      5
C          2      4
D          4      7

)vaila"le units: / this is an unsafe condition( If all of the customer namely ), 9,C K : as+ed for their ma2imum loans, then 9an+er couldn.t satisfy any of them and &e &ould have deadloc+(It is important to note that an unsafe state does not imply the e2istence or even eventual e2istence of a deadloc+( *hat an unsafe does imply is that some unfortunate se1uence of events might lead a deadloc+(



Banker's Algorithm for Multiple Resources:
The algorithm for checking to see if a state is safe can now be stated:
1. Look for a row, R, whose unmet resource needs are all smaller than or equal to A. If no such row exists, the system will eventually deadlock since no process can run to completion.
2. Assume the process of the row chosen requests all the resources it needs (which is guaranteed to be possible) and finishes. Mark that process as terminated and add all its resources to the A vector.
3. Repeat steps 1 and 2 until either all processes are marked terminated, in which case the initial state was safe, or until a deadlock occurs, in which case it was not.
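This safety check translates almost directly into code. Below is a minimal sketch in C (not from the original notes; it reuses the five-process snapshot from the question later in this section, and scans rows in index order, so it may print a different but equally valid safe sequence than the one worked out by hand):

#include <stdio.h>
#include <stdbool.h>

#define N 5  /* number of processes */
#define M 3  /* number of resource types */

/* Returns true if the state described by alloc[][] and need[][] is safe,
 * recording a safe sequence in seq[]. avail[] is the A vector. */
bool is_safe(int alloc[N][M], int need[N][M], int avail[M], int seq[N]) {
    int work[M];
    bool done[N] = { false };
    for (int j = 0; j < M; j++) work[j] = avail[j];

    for (int count = 0; count < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (done[i]) continue;
            bool fits = true;                    /* step 1: need_i <= work? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                          /* step 2: run it, reclaim */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                done[i] = true;
                seq[count++] = i;
                progress = true;
            }
        }
        if (!progress) return false;             /* step 3: stuck => unsafe */
    }
    return true;
}

int main(void) {
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    int avail[M]    = {3,3,2};
    int seq[N];

    if (is_safe(alloc, need, avail, seq)) {
        printf("Safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", seq[i]);
        printf("\n");
    } else {
        printf("Unsafe state\n");
    }
    return 0;
}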

Worked example. Fig: (a) Current allocation matrix; (b) Request matrix.
E (Existing Resources) = (6 3 4 2); P (Possessed Resources) = (5 3 2 2); A (Available Resources) = (1 0 2 0)

Solution: Processes A, B & C can't run to completion, since for each of these processes the request row is greater than the Available Resources vector. Now process D can complete, since its request row is less than the Available Resources.

Step 1: When D runs to completion, the total available resources become: A = (1, 0, 2, 0) + (1, 1, 0, 1) = (2, 1, 2, 1). Now process E can run to completion.

Step 2: Process E runs to completion & returns back all of its resources: A = (0, 0, 0, 0) + (2, 1, 2, 1) = (2, 1, 2, 1).

Step 3: Now process A can also run to completion, leading A to (3, 0, 1, 1) + (2, 1, 2, 1) = (5, 1, 3, 2).

Step 4: Now process C can also run to completion, leading A to (5, 1, 3, 2) + (1, 1, 1, 0) = (6, 2, 4, 2).

Step 5: Now process B can run to completion, leading A to (6, 2, 4, 2) + (0, 1, 0, 0) = (6, 3, 4, 2).

This implies the state is safe and deadlock free.

Questions:
A system has three processes and four allocable resource types. The four resource types exist in the total amounts E = (4, 2, 3, 1). The current allocation matrix and request matrix are as follows. Using Banker's algorithm, explain if this state is deadlock safe or unsafe.

Current Allocation Matrix
Process   R0   R1   R2   R3
P0        0    0    1    0
P1        2    0    0    1
P2        0    1    2    0

Allocation Request Matrix
Process   R0   R1   R2   R3
P0        2    0    0    1
P1        1    0    1    0
P2        2    1    0    0

Q). Consider a system with five processes P0 through P4 and three resource types A, B, C. Resource type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at time t0 the following snapshot of the system has been taken:

Process   Allocation (A B C)   Max (A B C)
P0        0 1 0                7 5 3
P1        2 0 0                3 2 2
P2        3 0 2                9 0 2
P3        2 1 1                2 2 2
P4        0 0 2                4 3 3

Available (A B C): 3 3 2

/' *hat &ill "e the content of the need Matri2O 4' Is the system in safe stateO If yes, then &hat is the safe se1uenceO /( Beed Pi,-QT Ma2 Pi,-Q )llocationPi,-Q content of Beed Matri2 is ) P< F P/ / P4 ; P5 < PD D 9 D 4 < / 5 C 5 4 < / /

2. Applying the safety algorithm: for Pi, if Needi <= Available, then Pi is added to the safe sequence and Available = Available + Allocationi.

For P0: need0 = 7,4,3; Available = 3,3,2 ==> the condition is false, so P0 must wait.
For P1: need1 = 1,2,2; Available = 3,3,2; need1 <= Available, so P1 will be kept in the safe sequence, & Available will be updated as: Available = 3,3,2 + 2,0,0 = 5,3,2.
For P2: need2 = 6,0,0; Available = 5,3,2 ==> the condition is again false, so P2 also has to wait.
For P3: need3 = 0,1,1; Available = 5,3,2 ==> the condition is true, so P3 will be in the safe sequence. Available = 5,3,2 + 2,1,1 = 7,4,3.
For P4: need4 = 4,3,1; Available = 7,4,3 ==> the condition Needi <= Available is true, so P4 will be in the safe sequence. Available = 7,4,3 + 0,0,2 = 7,4,5.



Now we have two processes, P0 and P2, in the waiting state; either P0 or P2 can be chosen. Let us take P2, whose need = 6,0,0 and Available = 7,4,5. Since the condition is true, P2 now joins the safe sequence, leaving Available = 7,4,5 + 3,0,2 = 10,4,7.

Next, P0, whose need = 7,4,3; Available = 10,4,7. Since the condition is true, P0 also can be kept in the safe sequence. So the system is in a safe state & the safe sequence is <P1, P3, P4, P2, P0>.

Detection and Recovery:
A second technique is detection and recovery. When this technique is used, the system does not do anything except monitor the requests and releases of resources. Every time a resource is requested or released, the resource graph is updated, and a check is made to see if any cycles exist. If a cycle exists, one of the processes in the cycle is killed. If this does not break the deadlock, another process is killed, and so on until the cycle is broken.

The Ostrich Algorithm:
The simplest approach is the ostrich algorithm: stick your head in the sand and pretend there is no problem at all. Different people react to this strategy in very different ways. Mathematicians find it completely unacceptable and say that deadlocks must be prevented at all costs. Most operating systems, including UNIX, MINIX 3, and Windows, just ignore the problem on the assumption that most users would prefer an occasional deadlock to a rule restricting all users to one process, one open file, and one of everything. If deadlocks could be eliminated for free, there would not be much discussion. The problem is that the price is high, mostly in terms of putting inconvenient restrictions on processes. Thus we are faced with an unpleasant trade-off between convenience and correctness, and a great deal of discussion about which is more important, and to whom. Under these conditions, general solutions are hard to find.



Chapter 4: Kernel
Introduction, Context switching (Kernel mode and User mode), First level interrupt handling, Kernel implementation of processes.

Introduction:
The kernel is the central module of an operating system. It is the part of the operating system that loads first, and it remains in main memory. Because it stays in memory, it is important for the kernel to be as small as possible while still providing all the essential services required by other parts of the operating system and applications. Typically, the kernel is responsible for memory management, process and task management, and disk management.

Context Switch
A context switch is the computing process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU. The context switch is an essential feature of a multitasking operating system. Context switches are usually computationally intensive, and much of the design of operating systems is aimed at optimizing the use of context switches. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system. Switching from one process to another requires a certain amount of time for doing the administration - saving and loading registers and memory maps, updating various tables and lists, etc.

A process can run in two modes:
1. User Mode:
=> A mode of the CPU when running a program.
=> In this mode, the user process has no access to the memory locations used by the kernel. When a program is running in User Mode, it cannot directly access the kernel data structures or the kernel programs.
2. Kernel Mode:
=> A mode of the CPU when running a program.
=> In this mode, it is the kernel that is running on behalf of the user process, and it directly accesses the kernel data structures or the kernel programs. Once the system call returns, the CPU switches back to user mode.

When you execute a C program, the CPU runs in user mode until a system call is invoked. In this mode, the user process has access to a limited section of the computer's memory and can execute a restricted set of machine instructions. However, when the process invokes a system call, the CPU switches from user mode to the more privileged kernel mode. In this mode, it is the kernel that runs on behalf of the user process, but it has access to any memory location and can execute any machine instruction. After the system call has returned, the CPU switches back to user mode.
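As a concrete illustration (a minimal sketch, not from the original notes), each call to write() below is a system call that forces a user-to-kernel mode switch, while the plain computation between calls runs entirely in user mode:

#include <unistd.h>
#include <string.h>

int main(void) {
    const char *msg = "entering the kernel via write()\n";

    /* Runs in user mode: plain computation, no privileged instructions. */
    size_t len = strlen(msg);

    /* write() is a system call: the CPU traps into kernel mode, the kernel
     * performs the I/O on our behalf, then returns to user mode. */
    write(STDOUT_FILENO, msg, len);

    return 0;  /* process exit is itself another trap into the kernel */
}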



Types of Kernels
1. Monolithic Kernels


Earlier, in this type of kernel architecture, all the basic system services like process and memory management, interrupt handling etc. were packaged into a single module in kernel space. This type of architecture led to some serious drawbacks, like 1) the size of the kernel, which was huge, and 2) poor maintainability, which means bug fixing or the addition of new features resulted in recompilation of the whole kernel, which could consume hours.

Fig: Monolithic Kernel

In a modern day approach to monolithic architecture, the kernel consists of different modules which can be dynamically loaded and unloaded. This modular approach allows easy extension of the OS's capabilities. With this approach, maintainability of the kernel became very easy, as only the concerned module needs to be loaded and unloaded every time there is a change or bug fix in a particular module. So, there is no need to bring down and recompile the whole kernel for the smallest bit of change. Also, stripping the kernel for various platforms (say for embedded devices etc.) became very easy, as we can easily unload the modules that we do not want. Examples:

Linux
Windows 9x (95, 98, Me)
Mac OS <= 8.6
BSD
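The dynamically loadable modules described above are ordinary kernel objects. As a rough sketch (assuming a Linux build environment with kernel headers available; not part of the original notes), a minimal module looks like this:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

/* Called when the module is loaded with insmod. */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded into the running kernel\n");
    return 0;
}

/* Called when the module is unloaded with rmmod. */
static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

Loading and unloading it (insmod hello.ko, rmmod hello) extends or shrinks the running monolithic kernel without recompiling or rebooting it.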

2. Microkernels
A microkernel tries to run most services - like networking, filesystem, etc. - as daemons / servers in user space. All that's left for the kernel to do are basic services, like memory allocation (however, the actual memory manager is implemented in userspace), scheduling, and messaging (Inter Process Communication).



This architecture majorly caters to the problem of ever growing size of kernel code which we could not control in the monolithic approach. This architecture allows some basic services like device driver management, protocol stack, file system etc to run in user space. This reduces the kernel code size and also increases the security and stability of OS as we have the bare minimum code running in kernel. So, if suppose a basic service like network service crashes due to buffer overflow, then only the networking service's memory would be corrupted, leaving the rest of the system still functional.

Fig: Microkernel

In this architecture, all the basic OS services which are made part of user space are made to run as servers which are used by other programs in the system through inter process communication (IPC). eg: we have servers for device drivers, network protocol stacks, file systems, graphics, etc. Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs. This allows some servers, particularly device drivers, to interact directly with hardware. These servers are started at system start-up. So, what is the bare minimum that microkernel architecture recommends in kernel space?
* Managing memory protection
* Process scheduling
* Inter Process communication (IPC)
Apart from the above, all other basic services can be made part of user space and can be run in the form of servers. Examples:
AIX
L4
AmigaOS
Minix



QNX follows the Microkernel approach

3. Exokernel:
Exokernels are an attempt to separate security from abstraction, making non-overrideable parts of the operating system do next to nothing but securely multiplex the hardware. The goal is to avoid forcing any particular abstraction upon applications, instead allowing them to use or implement whatever abstractions are best suited to their task without having to layer them on top of other abstractions which may impose limits or unnecessary overhead. This is done by moving abstractions into untrusted userspace libraries called "library operating systems" (libOSes), which are linked to applications and call the operating system on their behalf.

-ig: !"o5ernel

Interrupt Handler:
An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the interrupt handler completes its task.

First Level Interrupt Handler (FLIH)


- Save the registers of the current process in its PCB
- Determine the source of the interrupt
- Initiate service of the interrupt - calls the interrupt handler
- Hardware dependent - implemented in assembler

In several operating systems - Linux, Unix, Mac OS X, Microsoft Windows, and some other operating systems in the past - interrupt handlers are divided into two parts: the First-Level Interrupt Handler (FLIH) and the Second-Level Interrupt Handlers (SLIH). FLIHs are also known as hard interrupt handlers or fast interrupt handlers, and SLIHs are also known as slow/soft interrupt handlers, or Deferred Procedure Calls.

A FLIH implements, at minimum, platform-specific interrupt handling, similar to interrupt routines. In response to an interrupt, there is a context switch, and the code for the interrupt is loaded and executed. The job of a FLIH is to quickly service the interrupt, or to record platform-specific critical information which is only available at the time of the interrupt, and to schedule the execution of a SLIH for further long-lived interrupt handling.
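In Linux driver terms, the FLIH/SLIH split corresponds to the "top half"/"bottom half" of an interrupt handler. A rough sketch (assuming a reasonably recent Linux kernel; identifiers such as my_flih are illustrative, not from the notes):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

/* SLIH ("bottom half"): runs later in process context, interrupts enabled. */
static void my_slih(struct work_struct *work)
{
    /* long-lived handling: copy buffers, wake up waiting readers, etc. */
}
static DECLARE_WORK(my_work, my_slih);

/* FLIH ("top half"): must finish quickly, with the interrupt line masked. */
static irqreturn_t my_flih(int irq, void *dev_id)
{
    /* grab only the time-critical hardware state here ... */
    schedule_work(&my_work);       /* ... then defer the rest to the SLIH */
    return IRQ_HANDLED;
}

/* Registered once at driver initialisation, e.g.:
 *   request_irq(irq, my_flih, 0, "mydev", dev);
 */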



Chapter 5: Scheduling
Introduction, Scheduling levels, Scheduling objectives and criteria, Quantum size, Policy versus Mechanism in Scheduling, Preemptive versus Non-preemptive Scheduling, Scheduling techniques: Priority scheduling, deadline scheduling, First-In First-Out scheduling, Round Robin Scheduling, Shortest Job First (SJF) Scheduling, Shortest Remaining Time (SRT) scheduling, Highest Response Ratio Next (HRN) scheduling, Multilevel Feedback Queues.

Introduction:
In a multiprogramming system, frequently multiple processes compete for the CPU at the same time. When two or more processes are simultaneously in the ready state, a choice has to be made which process is to run next. This part of the OS is called the Scheduler, and the algorithm it uses is called the scheduling algorithm.

Process execution consists of cycles of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst that is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.

Long term and short term scheduling:
If we consider batch systems, there will often be more processes submitted than the number of processes that can be executed immediately. So incoming processes are spooled (to a disk). The long-term scheduler selects processes from this process pool and loads selected processes into memory for execution. The short-term scheduler selects the process to get the processor from among the processes which are already in memory. The short-term scheduler executes frequently (mostly at least once every 10 milliseconds), so it has to be very fast in order to achieve better processor utilization.

Time-sharing systems (mostly) have no long-term scheduler. The stability of these systems either depends upon a physical limitation (number of available terminals) or the self-adjusting nature of users (if you can't get a response, you quit). It can sometimes be good to reduce the degree of multiprogramming by removing processes from memory and storing them on disk. These processes can then be reintroduced into memory by the medium-term scheduler. This operation is also known as swapping. Swapping may be necessary to improve the process mix or to free memory.

Process Behavior:
Fig: Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound process. (b) An I/O-bound process.



CPU Bound (or compute bound): Some processes, such as the one shown in fig. (a) above, spend most of their time computing. These processes tend to have long CPU bursts and thus infrequent I/O waits. Example: matrix multiplication.

I/O Bound: Some processes, such as the one shown in fig. (b) above, spend most of their time waiting for I/O. They have short CPU bursts and thus frequent I/O waits. Example: Firefox.

Scheduling Criteria:
Many criteria have been suggested for comparison of CPU scheduling algorithms.
CPU utilization: we have to keep the CPU as busy as possible. It may range from 0 to 100%. In a real system it should range from 40-90% for lightly to heavily loaded systems.
Throughput: the measure of work in terms of the number of processes completed per unit time. For long processes this rate may be 1 process per hour; for short transactions, throughput may be 10 processes per second.
Turnaround Time: the sum of the time periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. The interval from the time of submission of a process to the time of completion is the turnaround time: waiting time plus the service time. Turnaround time = Time of completion of job - Time of submission of job (waiting time + service time or burst time).
Waiting time: the sum of the periods spent waiting in the ready queue.
Response time: in an interactive system, turnaround time is not the best criterion. Response time is the amount of time it takes to start responding, not the time taken to output that response.

Types of Scheduling:
Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts:
1. Preemptive Scheduling
2. Non-preemptive Scheduling
A non-preemptive scheduling algorithm picks a process to run and then just lets it run until it blocks (either on I/O or waiting for another process) or until it voluntarily releases the CPU. Even if it runs for hours, it will not be forcibly suspended. In effect, no scheduling decisions are made during clock interrupts.
In contrast, a preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time. If it is still running at the end of the time interval, it is suspended and the scheduler picks another process to run (if one is available). Doing preemptive scheduling requires having a clock interrupt occur at the end of the time interval to give control of the CPU back to the scheduler. If no clock is available, non-preemptive scheduling is the only option.

CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, I/O requests, or invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, completion of I/O).
4. When a process terminates.

Scheduling Algorithms:

1. First Come First Serve:
FCFS is the simplest non-preemptive algorithm. Processes are assigned the CPU in the order they request it; that is, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is managed with a FIFO (first in, first out) queue. When the first job enters the system from the outside in the morning, it is started immediately and allowed to run as long as it wants to. As other jobs come in, they are put onto the end of the queue. When the running process blocks, the first process on the queue is run next. When a blocked process becomes ready, like a newly arrived job, it is put on the end of the queue.

Advantages:
1. Easy to understand and program. With this algorithm, a single linked list keeps track of all ready processes.
2. Equally fair.
3. Suitable especially for batch operating systems.
Disadvantages:
1. FCFS is not suitable for time-sharing systems, where it is important that each user should get the CPU for an equal amount of time.

Consider the following set of processes having their burst times mentioned in milliseconds. The CPU burst time indicates for how much time the process needs the CPU.

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Calculate the average waiting time if the processes arrive in the order of: a). P1, P2, P3

b). P2, P3, P1

a. The processes arrive in the order P1, P2, P3. Let us assume they arrive at the same time, at 0 ms, in the system. We get the following Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0 ms, for P2 = 24 ms, for P3 = 27 ms.
Avg waiting time: (0+24+27)/3 = 17 ms
b). If the processes arrive in the order P2, P3, P1:

| P2 | P3 | P1 |
0    3    6    30

Average waiting time: (0+3+6)/3 = 3 ms. The average waiting time varies substantially if the process CPU burst times vary greatly.

2. Shortest Job First:
When several equally important jobs are sitting in the input queue waiting to be started, the scheduler picks the shortest job first.

Fig: (a) Running four jobs in the original order. (b) Running them in shortest-job-first order.

Here we have four jobs, A, B, C, D, with run times of 8, 4, 4 and 4 minutes respectively. By running them in that order, the turnaround time for A is 8 minutes, for B 12 minutes, for C 16 minutes and for D 20 minutes, for an average of 14 minutes. Now let us consider running these jobs shortest job first: B, C, D and then A. The turnaround times are now 4, 8, 12 and 20 minutes, giving an average of 11 minutes. Shortest job first is provably optimal.

Consider four jobs with run times of a, b, c, d. The first job finishes at time a, the second at a+b, and so on, so the mean turnaround time is (4a+3b+2c+d)/4. It is clear that a contributes more to the average than any of the others, so it should be the shortest job. The disadvantage of this algorithm is the problem of knowing the length of time for which the CPU is needed by a process. SJF is optimal when all the jobs are available simultaneously.

SJF is either preemptive or non-preemptive. Preemptive SJF scheduling is sometimes called Shortest Remaining Time First scheduling. With this scheduling algorithm, the scheduler always chooses the process whose remaining run time is shortest. When a new job arrives, its total time is compared to the current process's remaining time. If the new job needs less time to finish than the current process, the current process is suspended and the new job started. This scheme allows new short jobs to get good service.

Q). Calculate the average waiting time in 1). Preemptive SJF and 2). Non-preemptive SJF. Note: SJF default is non-preemptive.

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

a. Preemptive SJF ( Shortest Remaining Time First):

At t=0ms only one process, P1, is in the system, whose burst time is 8ms; it starts its execution. After 1ms, i.e. at t=1, a new process P2 (burst time = 4ms) arrives in the ready queue. Since its burst time is less than the remaining burst time of P1 (7ms), P1 is preempted and execution of P2 is started. Again at t=2, a new process P3 arrives in the ready queue, but its burst time (9ms) is larger than the remaining burst time of the currently running process (P2, 3ms). So P2 is not preempted and continues its execution. Again at t=3, a new process P4 (burst time 5ms) arrives. Again, for the same reason, P2 is not preempted until its execution is completed.



Waiting time of P1: 0ms + (10 - 1)ms = 9ms
Waiting time of P2: 1ms - 1ms = 0ms
Waiting time of P3: 17ms - 2ms = 15ms
Waiting time of P4: 5ms - 3ms = 2ms
Avg waiting time: (9+0+15+2)/4 = 6.5ms

Non-preemptive SJF:

Since it is non-preemptive, a process is not preempted until it finishes its execution.
Waiting time for P1: 0ms
Waiting time for P2: (8 - 1)ms = 7ms
Waiting time for P3: (17 - 2)ms = 15ms
Waiting time for P4: (12 - 3)ms = 9ms
Average waiting time: (0+7+15+9)/4 = 7.75ms
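The preemptive (SRTF) numbers above can be checked mechanically. A small sketch in C (not from the notes) that simulates SRTF one millisecond at a time for this process set and prints an average of 6.50 ms:

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {8, 4, 9, 5};
    int left[N], finish[N];
    int done = 0;

    for (int i = 0; i < N; i++) left[i] = burst[i];

    /* Advance time 1 ms per step; always run the arrived process
     * with the shortest remaining time (SRTF). */
    for (int t = 0; done < N; t++) {
        int run = -1;
        for (int i = 0; i < N; i++)
            if (arrival[i] <= t && left[i] > 0 &&
                (run < 0 || left[i] < left[run]))
                run = i;
        if (run < 0) continue;              /* CPU idle this millisecond */
        if (--left[run] == 0) { finish[run] = t + 1; done++; }
    }

    double total = 0;
    for (int i = 0; i < N; i++) {
        int waiting = finish[i] - arrival[i] - burst[i];
        printf("P%d waiting = %d ms\n", i + 1, waiting);
        total += waiting;
    }
    printf("average = %.2f ms\n", total / N);
    return 0;
}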
3. Round-Robin Scheduling Algorithm:
One of the oldest, simplest, fairest and most widely used algorithms is round robin (RR). In round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time, called a time-slice or a quantum. If a process does not complete before its CPU time expires, the CPU is preempted and given to the next process waiting in the queue. The preempted process is then placed at the back of the ready list. If the process has blocked or finished before the quantum has elapsed, the CPU switching is done then. Round robin scheduling is preemptive (at the end of the time-slice), therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users. The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates FCFS. In any event, the average waiting time under round robin scheduling is often quite long. Consider the following set of processes that arrive at time 0 ms:

Process   Burst Time (ms)
P1        20
P2        3
P3        4

If we use a time quantum of 4ms, then calculate the average waiting time using RR scheduling.

According to RR scheduling, processes are executed in FCFS order. So, firstly P1 (burst time = 20ms) is executed, but after 4ms it is preempted and the new process P2 (burst time = 3ms) starts its execution, whose execution is completed before the time quantum expires. Then the next process P3 (burst time = 4ms) starts its execution, and finally the remaining part of P1 gets executed with a time quantum of 4ms.

Waiting time of process P1: 0ms + (11 - 4)ms = 7ms
Waiting time of process P2: 4ms
Waiting time of process P3: 7ms
Average waiting time: (7+4+7)/3 = 6 ms

4. Priority Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal priority processes are scheduled in FCFS order.

Assigning priority:
1. To prevent high priority processes from running indefinitely, the scheduler may decrease the priority of the currently running process at each clock interrupt. If this causes its priority to drop below that of the next highest process, a process switch occurs.
2. Each process may be assigned a maximum time quantum that it is allowed to run. When this quantum is used up, the next highest priority process is given a chance to run.

Priorities can be assigned statically or dynamically. For UNIX systems there is a command, nice, for assigning static priority. It is often convenient to group processes into priority classes and use priority scheduling among the classes but round-robin scheduling within each class.

Fig: A scheduling algorithm with four priority classes
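As an aside (an illustrative sketch, not from the notes), a process can lower its own static priority programmatically through the nice() system call that backs the UNIX nice command mentioned above:

#include <unistd.h>
#include <stdio.h>
#include <errno.h>

int main(void) {
    errno = 0;
    /* Raise our nice value by 10, i.e. lower our scheduling priority.
     * nice() returns the new nice value, or -1 with errno set on error. */
    int nv = nice(10);
    if (nv == -1 && errno != 0)
        perror("nice");
    else
        printf("now running at nice value %d\n", nv);
    return 0;
}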



Problems in Priority Scheduling:
Starvation: a low priority process may never execute.
Solution: Aging - as time progresses, increase the priority of the process.

Multilevel Queue Scheduling:
In this scheduling, processes are classified into different groups. A common example may be foreground (or interactive) processes and background (or batch) processes.

The ready queue is partitioned into separate queues:
- foreground (interactive)
- background (batch)

Let us look at an example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted. Another possibility is to time slice between the queues. For instance, the foreground queue can be given 80% of the CPU time for RR scheduling among its processes, whereas the background receives 20% of the CPU time.

Guaranteed Scheduling:
Make real promises to the users about performance. If there are n users logged in while you are working, you will receive about 1/n of the CPU power. Similarly, on a single-user system with n processes running, all things being equal, each one should get 1/n of the CPU cycles. To make good on this promise, the system must keep track of how much CPU each process has had since its creation:

CPU time entitled = (Time since creation)/n

Then compute the ratio of actual CPU time consumed to the CPU time entitled. A ratio of 0.5 means that a process has only had half of what it should have had, and a ratio of 2.0 means that a process has had twice as much as it was entitled to. The algorithm is then to run the process with the lowest ratio until its ratio has moved above that of its closest competitor.

Lottery Scheduling:
Lottery scheduling is a probabilistic scheduling algorithm for processes in an operating system. Processes are each assigned some number of lottery tickets for various system resources, such as CPU time, and the scheduler draws a random ticket to select the next process. The distribution of tickets need not be uniform; granting a process more tickets gives it a relatively higher chance of selection. This technique can be used to approximate other scheduling algorithms, such as shortest job next and fair-share scheduling.

Lottery scheduling solves the problem of starvation: giving each process at least one lottery ticket guarantees that it has a non-zero probability of being selected at each scheduling operation. More important processes can be given extra tickets to increase their odds of winning. If there are 100 tickets outstanding, and one process holds 20 of them, it will have a 20% chance of winning each lottery; in the long run it will get 20% of the CPU. In general, a process holding a fraction f of the tickets will get about a fraction f of the resource in question. (A small sketch of the ticket draw appears after the next paragraph.)

Two-Level Scheduling:
Performs process scheduling that involves swapped-out processes. Two-level scheduling is needed when memory is too small to hold all the ready processes. Consider this problem: a system contains 50 running processes, all with equal priority, but the system's memory can only hold 10 processes simultaneously. Therefore, there will always be 40 processes swapped out, written to virtual memory on the hard disk. It uses two different schedulers: a lower-level scheduler, which can only select among those processes in memory to run (that scheduler could be a round-robin scheduler), and a higher-level scheduler, whose only concern is to swap processes in and out of memory. It does its scheduling much less often than the lower-level scheduler, since swapping takes so much time. The higher-level scheduler selects among those processes in memory that have run for a long time and swaps them out.
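A minimal sketch in C of the lottery draw described under lottery scheduling above (illustrative, not from the notes): draw one random ticket number, then walk the per-process ticket counts until the winner is found.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NPROC 3

int main(void) {
    int tickets[NPROC] = {20, 30, 50};    /* per-process ticket counts */
    int total = 0;
    for (int i = 0; i < NPROC; i++) total += tickets[i];

    srand((unsigned)time(NULL));
    int winner = rand() % total;           /* draw one ticket at random */

    /* Walk the processes until the drawn ticket falls in some range. */
    int i = 0;
    for (int sum = tickets[0]; sum <= winner; sum += tickets[++i])
        ;
    printf("process %d wins this quantum\n", i);
    return 0;
}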



Scheduling in Real Time Systems:
Time is crucial and plays an essential role. E.g., the computer in a compact disc player gets the bits as they come off the drive and must convert them into music within a very tight time interval. If the calculation takes too long, the music sounds peculiar. Other examples include: auto pilot in an aircraft, robot control in an automated factory, patient monitoring in a hospital (ICU).

Two types:
1. Hard Real Time system: there are absolute deadlines that must be met.
2. Soft Real Time system: missing an occasional deadline is undesirable but nevertheless tolerable.

In both cases, real time behavior is achieved by dividing the program into a number of processes, each of whose behavior is predictable and known in advance. These processes are short lived and can run to completion. It is the job of the scheduler to schedule the processes in such a way that all deadlines are met. If there are m periodic events, and event i occurs with period Pi and requires Ci seconds of CPU time to handle each event, then the load can only be handled if

    C1/P1 + C2/P2 + ... + Cm/Pm <= 1

For example, three periodic events with periods of 100, 200, and 500 ms that require 50, 30, and 100 ms of CPU time per event give a load of 0.5 + 0.15 + 0.2 = 0.85 < 1, so they can be handled.

A real-time system that meets this criterion is said to be schedulable.

Policy VS Mechanism:
The separation of mechanism and policy is a design principle in computer science. It states that mechanisms (those parts of a system implementation that control the authorization of operations and the allocation of resources) should not dictate (or overly restrict) the policies according to which decisions are made about which operations to authorize and which resources to allocate.

All the processes in the system belong to different users and are thus competing for the CPU. Sometimes, though, it happens that one process has many children running under its control. For example, a database management system process may have many children. Each child might be working on a different request, or each one might have some specific function to perform (query parsing, disk access, etc.). Unfortunately, none of the schedulers discussed above accept any input from user processes about scheduling decisions. As a result, the scheduler rarely makes the best choice. The solution to this problem is to separate the scheduling mechanism from the scheduling policy. What this means is that the scheduling algorithm is parameterized in some way, but the


parameters can be filled in by user processes. Let us consider the database example once again. Suppose that the kernel uses a priority scheduling algorithm but provides a system call by which a process can set (and change) the priorities of its children. In this way the parent can control in detail how its children are scheduled, even though it does not do the scheduling itself. Here the mechanism is in the kernel but policy is set by a user process.
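On UNIX-like systems, one concrete form of such a system call is setpriority(). A sketch (not from the notes) of a parent implementing policy by adjusting a child's nice value, while the in-kernel scheduling mechanism stays untouched:

#include <sys/resource.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {            /* child: placeholder background work */
        pause();
        _exit(0);
    }
    /* Parent sets policy: demote this particular child to nice 15.
     * The kernel (mechanism) still decides who actually runs. */
    if (setpriority(PRIO_PROCESS, pid, 15) == -1)
        perror("setpriority");

    kill(pid, SIGTERM);         /* tidy up the example child */
    waitpid(pid, NULL, 0);
    return 0;
}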



Chapter 6: Memory Management
Introduction, storage organization and hierarchy, contiguous versus noncontiguous storage allocation, logical and physical memory, fragmentation, fixed partition multiprogramming, variable partition multiprogramming, relocation and protection, coalescing and compaction, Virtual Memory: introduction, paging, page tables, block mapping, direct mapping, TLB (Translation Lookaside Buffers), page fault, page replacement algorithms: Optimal Page Replacement algorithm, Not Recently Used Page Replacement algorithm, First-In First-Out algorithm, Second Chance Page Replacement algorithm, Working Set Page Replacement algorithm, WSClock Page Replacement algorithm, Segmentation, implementation of pure segmentation, Segmentation with Paging.

Types of Memory:
Primary Memory (eg. RAM):
- Holds data and programs used by a process that is executing
- Only type of memory that a CPU deals with
Secondary Memory (eg. hard disk):
- Non-volatile memory used to store data when a process is not executing

Fig: Types of memory management

Memory management schemes:
- Contiguous memory allocation: Bare machine; Resident memory; Multiprogramming with Fixed partitions or Variable partitions
- Non-contiguous memory allocation: Segmentation (user view); Paging (system view)



Memory Management:
Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing them for reuse when no longer needed. The management of main memory is critical to the computer system.

In a uniprogramming system, main memory is divided into two parts: one part for the OS and one part for the program currently being executed. In a multiprogramming system, the user part of memory must be further subdivided to accommodate multiple processes. The task of subdivision is carried out dynamically by the operating system and is known as memory management.

Two major schemes for memory management:
1. Contiguous memory allocation
2. Non-contiguous memory allocation

Contiguous allocation means that each logical object is placed in a set of memory locations with strictly consecutive addresses. Non-contiguous allocation implies that a single logical object may be placed in non-consecutive sets of memory locations. Paging (system view) and segmentation (user view) are the two mechanisms that are used to manage non-contiguous memory allocation.

Memory Partitioning:
1. Fixed Partitioning:
Main memory is divided into a number of static partitions at system generation time. A process may be loaded into a partition of equal or greater size. The memory manager will allocate a region to a process that best fits it. Unused memory within an allocated partition is called internal fragmentation.
Advantages: simple to implement; little OS overhead.
Disadvantages: inefficient use of memory due to internal fragmentation. Main memory utilization is extremely

inefficient. Any program, no matter how small, occupies an entire partition. This phenomenon, in which there is wasted space internal to a partition due to the fact that the block of data loaded is smaller than the partition, is referred to as internal fragmentation.
Two possibilities:
a). Equal size partitioning
b). Unequal size partitioning
Not suitable for systems in which process memory requirements are not known ahead of time, i.e. timesharing systems.

(a) Fixed memory partitions with separate input queues for each partition. (b) Fixed memory partitions with a single input queue.

When the queue for a large partition is empty but the queue for a small partition is full, as is the case for partitions 1 and 3 in fig (a), small jobs have to wait to get into memory, even though plenty of memory is free. An alternative organization is to maintain a single queue, as in fig (b). Whenever a partition becomes free, the job closest to the front of the queue that fits in it could be loaded into the empty partition and run.

2. Dynamic/Variable Partitioning:
To overcome some of the difficulties with fixed partitioning, an approach known as dynamic partitioning was developed. The partitions are of variable length and number. When a process is brought into main memory, it is allocated exactly as much memory as it requires and no more. An example, using 64 Mbytes of main memory, is shown in the figure below. Eventually this leads to a situation in which there are a lot of small holes in memory. As time goes on,

memory "ecomes more and more fragmented, and memory utili@ation declines( ,his phenomenon is referred to as external &ragmentation, indicating that the memory that is e2ternal to all partitions "ecomes increasingly fragmented( One techni1ue for overcoming e2ternal fragmentation is compaction: 7rom time to time, the operating system shifts the processes so that they are contiguous and so that all of the free memory is together in one "loc+( 7or e2ample, in 7igure h, compaction &ill result in a "loc+ of free memory of length /;M( ,his may &ell "e sufficient to load in an additional process( ,he difficulty &ith compaction is that it is a time consuming procedure and &asteful of processor time(

Fig."he 2ffect of dynamic partitioning Memory Management :ith )itmaps: *hen memory is assigned dynamically, the operating system must manage it( *ith a "itmap, memory is divided up into allocation units, perhaps as small as a fe& &ords and perhaps as large as several

+ilo"ytes( Corresponding to each allocation unit is a "it in the "itmap, &hich is < if the unit is free and / if it is occupied 6or vice versa'( 7igure "elo& sho&s part of memory and the corresponding "itmap(

Fig: (a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0 in the bitmap) are free. (b) The corresponding bitmap. (c) The same information as a list.

The size of the allocation unit is an important design issue: the smaller the allocation unit, the larger the bitmap. A bitmap provides a simple way to keep track of memory words in a fixed amount of memory, because the size of the bitmap depends only on the size of memory and the size of the allocation unit. The main problem with it is that when it has been decided to bring a k-unit process into memory, the memory manager must search the bitmap to find a run of k consecutive 0 bits in the map. Searching a bitmap for a run of a given length is a slow operation.
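The slow search described above boils down to scanning for k consecutive 0 bits. A minimal sketch in C (not from the notes; for clarity it stores one allocation unit per byte instead of packing bits):

#include <stdio.h>

/* Return the index of the first run of k free units (0s), or -1.
 * map[] holds one allocation unit per entry: 0 = free, 1 = in use. */
int find_run(const unsigned char map[], int n, int k) {
    int run = 0;
    for (int i = 0; i < n; i++) {
        run = map[i] ? 0 : run + 1;    /* extend or reset the run of 0s */
        if (run == k)
            return i - k + 1;           /* start index of the free run */
    }
    return -1;
}

int main(void) {
    unsigned char map[] = {1,1,0,0,1,0,0,0,1,0};
    int start = find_run(map, 10, 3);
    printf("3 free units start at unit %d\n", start);   /* prints 5 */
    return 0;
}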

Fig: Four neighbor combinations for the terminating process, X.



Each entry in the list specifies a hole (H) or process (P), the address at which it starts, the length, and a pointer to the next entry. In this example, the segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward. A terminating process normally has two neighbors (except when it is at the very top or very bottom of memory). These may be either processes or holes, leading to the four combinations shown in the fig above.

When the processes and holes are kept on a list sorted by address, several algorithms can be used to allocate memory for a newly created process (or an existing process being swapped in from disk). We assume that the memory manager knows how much memory to allocate.

First Fit: The simplest algorithm is first fit. The memory manager scans along the list of segments until it finds a hole that is big enough. The hole is then broken up into two pieces, one for the process and one for the unused memory, except in the statistically unlikely case of an exact fit. First fit is a fast algorithm because it searches as little as possible.

Next Fit: It works the same way as first fit, except that it keeps track of where it is whenever it finds a suitable hole. The next time it is called to find a hole, it starts searching the list from the place where it left off last time, instead of always at the beginning, as first fit does.

Best Fit: Best fit searches the entire list and takes the smallest hole that is adequate. Rather than breaking up a big hole that might be needed later, best fit tries to find a hole that is close to the actual size needed.

Worst Fit: Always take the largest available hole, so that the hole broken off will be big enough to be useful. Simulation has shown that worst fit is not a very good idea either.

Quick Fit: Maintains separate lists for some of the more common sizes requested. For example, it might have a table with n entries, in which the first entry is a pointer to the head of a list of 4-KB holes, the second entry is a pointer to a list of 8-KB holes, the third entry a pointer to 12-KB holes, and so on. Holes of, say, 21 KB could either be put on the 20-KB list or on a special list of odd-sized holes. With quick fit, finding a hole of the required size is extremely fast, but it has the same disadvantage as all schemes that sort by hole size: namely, when a process terminates or is swapped out, finding its neighbors to see if a merge is possible is expensive. If merging is not done, memory will quickly fragment into a large number of small holes into which no processes fit.

Buddy System:
Both fixed and dynamic partitioning schemes have drawbacks. A fixed partitioning scheme limits the number of active processes and may use space inefficiently if there is a poor match between available partition sizes and process sizes. A dynamic partitioning scheme is more complex to maintain and includes the overhead of compaction. An interesting compromise is the buddy system. In a buddy system, memory blocks are available of size 2^K words, L <= K <= U, where
2^L = smallest size block that is allocated
2^U = largest size block that is allocated; generally 2^U is the size of the entire memory available for allocation

In a "uddy system, the entire memory space availa"le for allocation is initially treated as a single "loc+ &hose si@e is a po&er of 4( *hen the first re1uest is made, if its si@e is greater than half of the initial "loc+ then the entire "loc+ is allocated( Other&ise, the "loc+ is split in t&o e1ual companion "uddies( If the si@e of the re1uest is greater than half of one of the "uddies, then allocate one to it( Other&ise, one of the "uddies is split in half again( ,his method continues until the smallest "loc+ greater than or e1ual to the si@e of the re1uest is found and allocated to it( In this method, &hen a process terminates the "uddy "loc+ that &as allocated to it is freed( *henever possi"le, an unallocated "uddy is merged &ith a companion "uddy in order to form a larger free "loc+( ,&o "loc+s are said to "e companion "uddies if they resulted from the split of the same direct parent "loc+( ,he follo&ing fig( illustrates the "uddy system at &or+, considering a /<4D+ 6/#mega"yte' initial "loc+ and the process re1uests as sho&n at the left of the ta"le(

Fig: Example of buddy system

Swapping:
A process must be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. For example, assume a multiprogramming environment with a round robin CPU scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and to swap another process into the memory space that has been freed.



Fig: Swapping of two processes using a disk as a backing store

Logical Address VS Physical Address:
An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit - that is, the one loaded into the memory-address register of the memory - is commonly referred to as a physical address. The compile time and load time address binding methods generate identical logical and physical addresses. However, the execution time address binding scheme results in differing logical and physical addresses. In that case we usually refer to the logical address as a virtual address. The run time mapping from virtual address to physical address is done by hardware called the Memory Management Unit (MMU). The set of all logical addresses generated by a program is known as the logical address space, and the set of all physical addresses which correspond to these logical addresses is called the physical address space.

Non-contiguous Memory Allocation:
Fragmentation is a main problem of contiguous memory allocation. We have seen a method called compaction to resolve this problem; since it is an I/O operation, system efficiency gets reduced. So, a better method to overcome the fragmentation problem is to make our logical address space non-contiguous. Consider a system in which, before applying compaction, there are holes of size 1K and 2K. If a new process of size 3K wants to be executed, then its execution is not possible without compaction. An alternative approach is to divide the new process P into two chunks of 1K and 2K, to be able to load them into the two holes at different places.

/( If the chun+s have to "e of same si@e for all processes ready for the e2ecution then the memory management scheme is called P.8,18. 4( If the chun+s have to "e of different si@e in &hich process image is divided into logical segments of different si@es then this method is called SE8ME1 . ,O1. 5( If the method can &or+ &ith only some chun+s in the main memory and the remaining on the dis+ &hich can "e "rought into main memory only &hen its re1uired, then the system is called 9,' 7.$ MEO': M.1.8EME1 S:S EM. 0irtual Memory: ,he "asic idea "ehind virtual memory is that the com"ined si@e of the program, data, and stac+ may e2ceed the amount of physical memory availa"le for it( ,he operating system +eeps those parts of the program currently in use in main memory, and the rest on the dis+( 7or e2ample, a E/4#M9 program can run on a 4E;#M9 machine "y carefully choosing &hich 4E; M9 to +eep in memory at each instant, &ith pieces of the program "eing s&apped "et&een dis+ and memory as needed( Virtual memory can also &or+ in a multiprogramming system, &ith "its and pieces of many programs in memory at once( *hile a program is &aiting for part of itself to "e "rought in, it is &aiting for I$O and cannot run, so the CP! can "e given to another process, the same &ay as in any other multiprogramming system( Virtual memory systems separate the memory addresses used "y a process from actual physical addresses, allo&ing separation of processes and increasing the effectively availa"le amount of %)M using dis+ s&apping(

Paging:
Most virtual memory systems use a technique called paging, which permits the physical address space of a process to be non-contiguous. Program-generated addresses are called virtual addresses and form the virtual address space. On computers without virtual memory, the virtual address is put directly onto the memory bus and causes the physical memory word with the same address to be read or written. When virtual memory is used, the virtual addresses do not go directly to the memory bus. Instead, they go to an MMU (Memory Management Unit) that maps the virtual addresses onto the physical memory addresses, as illustrated in the fig below.



Fig:"he position and function of the &&1. Here the &&1 is shown as being a part of the C.1 chip because it common y is nowadays

,he "asic method for implementing paging involves "rea+ing physical memory into fi2ed si@e "loc+ called &rames and "rea+ing logical memory into "loc+s of the same si@e called pages. si@e is po&er of 4, "et&een E/4 "ytes and G,/H4 "ytes *hen a process is to "e e2ecuted, its pages are loaded into any availa"le memory frames from the "ac+ing store( ,he "ac+ing store is divided into fi2ed#si@e "loc+ that are of the same si@e as memory frames( "ac#ing store($) is typically part of a hard dis# that is used by a paging or swapping system to store information not currently in main memory. "ac#ing store is slower and cheaper than main memory. ) very simple e2ample of ho& this mapping &or+s is sho&n in 7ig( "elo&( In this e2ample, &e have a computer that can generate /;#"it addresses, from < up to ;D8( ,hese are the virtual addresses( ,his computer, ho&ever, has only 54 89 of physical memory, so although ;D#89 programs can "e &ritten, they cannot "e loaded into memory in their entirety and run( *ith ;D 89 of virtual address space and 54 89 of physical memory, &e get /; virtual pages and G page frames( ,ransfers "et&een %)M and dis+ are al&ays in units of a page( *hen the program tries to access address <, for e2ample, using the instruction MO9 'E8;< virtual address < is sent to the MM!( ,he MM! sees that this virtual address falls in page < 6< to D<HE', &hich according to its mapping is page frame 4 6G/H4 to /44GF'( It thus transforms the address

to 8192 and outputs address 8192 onto the bus. The memory knows nothing at all about the MMU and just sees a request for reading or writing address 8192, which it honors. Thus, the MMU has effectively mapped all virtual addresses between 0 and 4095 onto physical addresses 8192 to 12287.

Similarly, the instruction MOV REG,8192 is effectively transformed into MOV REG,24576, because virtual address 8192 is in virtual page 2 and this page is mapped onto physical page frame 6 (physical addresses 24576 to 28671). As a third example, virtual address 20500 is 20 bytes from the start of virtual page 5 (virtual addresses 20480 to 24575) and maps onto physical address 12288 + 20 = 12308.

Fig:"he re ation between virtua addresses and physica memory addresses is given by the page tab e. $.1! Fault: ) page &ault is a trap to the soft&are raised "y the hard&are &hen a program accesses a page that is mapped in the virtual address space, "ut not loaded in physical memory( In the typical case the operating system tries to handle the page fault "y ma+ing the re1uired page accessi"le at a location in physical memory or +ills the program in the case of an illegal access( ,he hard&are that detects a page fault is the memory management unit in a processor( ,he e2ception handling soft&are that handles the page fault is generally part of the operating system( *hat happens if the program tries to use an unmapped page, for e2ample, "y using the instruction MO9 'E8;+*@=<

&hich is "yte /4 &ithin virtual page G 6starting at 54F;G'O ,he MM! notices that the page is unmapped 6indicated "y a cross in the figure' and causes the CP! to trap to the operating system( ,his trap is called a page &ault( ,he operating system pic+s a little#used page frame and &rites its contents "ac+ to the dis+( It then fetches the page -ust referenced into the page frame -ust freed, changes the map, and restarts the trapped instruction( In computer storage technology, a page is a fi%ed-length bloc# of memory that is used as a unit of transfer between physical memory and e%ternal storage li#e a dis#, and a page fault is an interrupt (or e%ception) to the software raised by the hardware, when a program accesses a page that is mapped in address space, but not loaded in physical memory. &n interrupt that occurs when a program re'uests data that is not currently in real memory. (he interrupt triggers the operating system to fetch the data from a virtual memory and load it into )&!. &n invalid page fault or page fault error occurs when the operating system cannot find the data in virtual memory. (his usually happens when the virtual memory area, or the table that maps virtual addresses to real addresses, becomes corrupt. $aging -ardware: ,he hard&are support for the paging is as sho&n in fig( very address generated "y the CP! is divided into t&o parts: a page num"er6p' and a page offset 6d' Page num"er 6p' used as an inde2 into a page ta"le &hich contains "ase address of each page in physical memory( Page offset 6d' com"ined &ith "ase address to define the physical memory address that is sent to the memory unit(

Fig: Paging Hardware
A logical address of m bits is divided into a page number p (the high-order m - n bits) and a page offset d (the low-order n bits), where the page size is 2^n.

Fig: Paging model of logical and physical memory

When we use a paging scheme, we have no external fragmentation: any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation.
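The split-and-lookup performed by the paging hardware is easy to mimic in software. A sketch in C (not from the notes), using the 4-KB pages of the 64-KB/32-KB example above; the page table entries for pages 0, 2, 5 and 8 match the text, while the remaining entries are illustrative:

#include <stdio.h>

#define PAGE_BITS 12                  /* 4-KB pages => n = 12 offset bits */
#define NPAGES    16                  /* 64-KB virtual space, 16 pages    */

/* Page table; -1 marks an unmapped page. */
static int page_table[NPAGES] = {
    2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1
};

int translate(unsigned vaddr) {
    unsigned p = vaddr >> PAGE_BITS;              /* page number (high bits) */
    unsigned d = vaddr & ((1u << PAGE_BITS) - 1); /* offset (low bits)       */
    if (p >= NPAGES || page_table[p] < 0)
        return -1;                                /* would cause a page fault */
    return (page_table[p] << PAGE_BITS) | d;      /* frame base + offset     */
}

int main(void) {
    printf("%d\n", translate(0));       /* 8192  (page 0 -> frame 2)   */
    printf("%d\n", translate(8192));    /* 24576 (page 2 -> frame 6)   */
    printf("%d\n", translate(20500));   /* 12308 (page 5 -> frame 3)   */
    printf("%d\n", translate(32780));   /* -1: unmapped => page fault  */
    return 0;
}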



Page Replacement Algorithms:
When a page fault occurs, the operating system has to choose a page to remove from memory to make room for the page that has to be brought in. The page replacement is done by swapping the required pages between backup storage and main memory. If the page to be removed has been modified while in memory, it must be rewritten to the disk to bring the disk copy up to date. If, however, the page has not been changed (e.g., it contains program text), the disk copy is already up to date, so no rewrite is needed; the page to be read in just overwrites the page being evicted.

A page replacement algorithm is evaluated by running the particular algorithm on a string of memory references and computing the number of page faults. A reference string is a sequence of pages being referenced. A page fault is not an error: contrary to what their name might suggest, page faults are common and necessary to increase the amount of memory available to programs in any operating system that utilizes virtual memory, including Microsoft Windows, Mac OS X, Linux and Unix.

Each operating system uses different page replacement algorithms. To select a particular algorithm, the algorithm with the lowest page fault rate is considered:
1. Optimal page replacement
2. Not recently used page replacement
3. First-In, First-Out page replacement
4. Second chance page replacement
5. Clock page replacement
6. Least recently used page replacement

The Optimal Page Replacement Algorithm:
This algorithm has the lowest page fault rate of all algorithms. It states: replace the page which will not be used for the longest period of time, i.e. future knowledge of the reference string is required.
- Often called Belady's Min
- Basic idea: replace the page that will not be referenced for the longest time
- Impossible to implement



FIFO (First In First Out):


The oldest page in physical memory is the one selected for replacement. It is very simple to implement: keep a list; on a page fault, the page at the head is removed and the new page added to the tail of the list.

Issues: FIFO is a poor replacement policy, because it does not consider page usage; a heavily used page may be evicted simply because it is old.

LRU (Least Recently Used): In this algorithm, the page that has not been used for the longest period of time is selected for replacement. Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear. The difficulty is that the list must be updated on every memory reference. Finding a page in the list, deleting it, and then moving it to the front is a very time-consuming operation, even in hardware (assuming that such hardware could be built).
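A minimal sketch of FIFO page replacement in C, counting the faults for a given reference string (the frame limit of 8 is an arbitrary assumption for the sketch). For the reference string used in the question further below (7,0,1,2,0,3,0,4,2,3,0,3 with three frames) it reports 10 faults:

#include <stdio.h>

/* Count page faults under FIFO replacement. */
int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[8], head = 0, used = 0, faults = 0;   /* nframes <= 8 assumed */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];       /* a free frame is available   */
        } else {
            frames[head] = refs[i];         /* evict the oldest page       */
            head = (head + 1) % nframes;    /* next-oldest becomes head    */
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3};
    printf("FIFO faults: %d\n", fifo_faults(refs, 12, 3));
    return 0;
}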


The Not Recently Used (NRU) Page Replacement Algorithm: Two status bits are associated with each page. R is set whenever the page is referenced (read or written). M is set when the page is written to (i.e., modified). When a page fault occurs, the operating system inspects all the pages and divides them into four categories based on the current values of their R and M bits:

Class 0: not referenced, not modified.
Class 1: not referenced, modified.
Class 2: referenced, not modified.
Class 3: referenced, modified.

The NRU algorithm removes a page at random from the lowest numbered nonempty class.
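Since the class number is simply 2R + M, picking an NRU victim reduces to scanning for the lowest nonempty class. A minimal sketch (the array layout of the R and M bits is assumed for illustration):

/* Pick an NRU victim: a page from the lowest numbered nonempty class.
   r[] and m[] hold the R and M bits of the n resident pages. */
int nru_victim(const int *r, const int *m, int n)
{
    for (int cls = 0; cls <= 3; cls++)
        for (int i = 0; i < n; i++)
            if (2 * r[i] + m[i] == cls)
                return i;       /* first page found in the lowest class */
    return -1;                  /* unreachable when n > 0 */
}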

The Second Chance Page Replacement Algorithm: A simple modification to FIFO that avoids the problem of throwing out a heavily used page. It inspects the R bit of the oldest page. If it is 0, the page is both old and unused, so it is replaced immediately. If the R bit is 1, the bit is cleared, the page is put onto the end of the list of pages, and its load time is updated as though it had just arrived in memory. Then the search continues.


The Clock Page Replacement Algorithm: Keep all the page frames on a circular list in the form of a clock, as shown in the figure. A hand points to the oldest page.

When a page fault occurs, the page being pointed to by the hand is inspected. If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position. If R is 1, it is cleared and the hand is advanced to the next page. This process is repeated until a page with R = 0 is found.

Question: Calculate the number of page faults for FIFO, LRU and Optimal for the reference string 7,0,1,2,0,3,0,4,2,3,0,3. Assume there are three frames available in the memory.
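A minimal sketch of the clock hand movement described above (the frame layout and the caller's responsibilities are illustrative assumptions):

/* Choose the frame to evict under the clock algorithm. rbit[] holds the
   R bits of the nframes resident pages; *hand is the clock hand. The
   caller places the new page in the returned frame and sets its R bit. */
int clock_evict(int *rbit, int nframes, int *hand)
{
    for (;;) {
        if (rbit[*hand] == 0) {             /* old and unused: evict it   */
            int victim = *hand;
            *hand = (*hand + 1) % nframes;  /* advance past the new page  */
            return victim;
        }
        rbit[*hand] = 0;                    /* give it a second chance    */
        *hand = (*hand + 1) % nframes;
    }
}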


External fragmentation: free space is divided into many small pieces as a result of allocating and deallocating pieces of storage space of many different sizes. One may have plenty of free space in total, yet be unable to use all of it, or at least to use it as efficiently as one would like. It is the unused portion of main memory.

Internal fragmentation: the result of reserving a piece of space without ever intending to use all of it. It is the unused portion of a page.

Segmentation: Segmentation is another technique of non-contiguous memory allocation. It differs from paging in that pages are physical in nature and hence fixed in size, whereas segments are logical in nature and hence variable in size.

It supports the user view of memory rather than the system view supported by paging. In segmentation we divide the logical address space into different segments. The general division can be: main program, sets of subroutines, procedures, functions, and sets of data structures (stack, array, etc). Each segment has a name and a length and is loaded into physical memory as it is. For simplicity, segments are referred to by a segment number rather than a segment name. The virtual address is divided into two parts: the high-order bits give 's', the segment number, and the low-order bits give 'd', the displacement (checked against the segment's limit value).

Fig: Virtual address space or logical address space (16 bit), split into a segment number s and a displacement d

Segmentation maintains multiple separate virtual address spaces per process. It allows each table to grow or shrink independently.

Paging Vs Segmentation:

Sno.  Paging                                          Segmentation
1     Block replacement is easy; fixed-length         Block replacement is hard; variable-length blocks,
      blocks                                          so a contiguous, variable-sized, unused part of
                                                      main memory must be found
2     Invisible to the application programmer         Visible to the application programmer
3     No external fragmentation, but there is         No internal fragmentation, but there is external
      internal fragmentation (unused portion of       fragmentation (unused portion of main memory)
      a page)
4     Units of code and data are broken into          Keeps blocks of code or data as single units
      separate pages
5     A page is a physical unit, invisible to the     A segment is a logical unit, visible to the user's
      user's view, and is of fixed size               program, and is of arbitrary size
6     Paging maintains one address space              Segmentation maintains multiple address spaces
                                                      per process
7     No sharing of procedures between users is       Sharing of procedures between users is
      facilitated                                     facilitated


Chapter 6: Input/Output

Introduction, principles of I/O hardware: I/O devices, device controllers, memory-mapped I/O, DMA (Direct Memory Access); principles of I/O software: polled I/O versus interrupt-driven I/O, Character User Interface and Graphical User Interface, goals of I/O software, device drivers, device-independent I/O software; disks, disk hardware, arm scheduling algorithms, RAID (Redundant Array of Inexpensive Disks).

What about I/O?




Without I/O, computers are useless (disembodied brains!). But there are thousands of devices, each slightly different; how can we standardize the interfaces to these devices? Devices are unreliable (media failures and transmission errors); how can we make them reliable? Devices are unpredictable and/or slow; how can we manage them if we don't know what they will do or how they will perform?

Some operational parameters:

Byte/Block: Some devices provide a single byte at a time (e.g. keyboard); others provide whole blocks (e.g. disks, networks, etc).
Sequential/Random: Some devices must be accessed sequentially (e.g. tape); others can be accessed randomly (e.g. disk, CD, etc).
Polling/Interrupts: Some devices require continual monitoring; others generate interrupts when they need service.

I/O devices can be roughly divided into two categories: block devices and character devices. A block device is one that stores information in fixed-size blocks, each one with its own address. Common block sizes range from 512 bytes to 32,768 bytes. The essential property of a block device is that it is possible to read or write each block independently of all the other ones. Disks are the most common block devices.

The other type of I/O device is the character device. A character device delivers or accepts a stream of characters, without regard to any block structure. It is not addressable and does not have any seek operation. Printers, network interfaces, mice (for pointing), rats (for psychology lab experiments), and most other devices that are not disk-like can be seen as character devices.

Block devices: e.g. disk drives, tape drives, DVD-ROM. Access blocks of data; commands include open(), read(), write(), seek(); raw I/O or file-system access; memory-mapped file access possible.

Character devices: e.g. keyboards, mice, serial ports, some USB devices. Single characters at a time; commands include get(), put(); libraries layered on top allow line editing.

Network devices: e.g. Ethernet, Wireless, Bluetooth. Different enough from block/character devices to have their own interface; Unix and Windows include a socket interface that separates the network protocol from the network operation and includes select() functionality. Usage: pipes, FIFOs, streams, queues, mailboxes.


"evice Controllers:
A device controller is a hardware unit which is attached to the input/output bus of the computer and provides a hardware interface between the computer and the input/output devices. On one side it knows how to communicate with the input/output devices, and on the other side it knows how to communicate with the computer system through the input/output bus. A device controller can usually control several input/output devices.

Fig: A model for connecting the CPU, memory, controllers, and I/O devices

Typically the controller is on a card (e.g. LAN card, USB card, etc). Device controllers play an important role in operating a device; a controller is like a bridge between the device and the operating system. Most controllers have DMA (Direct Memory Access) capability, which means they can directly read/write memory in the system. A controller without DMA capability provides or accepts the data one byte or word at a time, and the processor takes care of storing it in memory or reading it from memory. DMA controllers can transfer data much faster than non-DMA controllers, and presently all controllers have DMA capability. DMA is a memory-to-device communication method that bypasses the CPU.

Memory-mapped Input/Output:
Each controller has a few registers that are used for communicating with the CPU. By writing into these registers, the operating system can command the device to deliver data, accept data, switch itself on or off, or otherwise perform some action. By reading from these registers, the operating system can learn what the device's state is, whether it is prepared to accept a new command, and so on. In addition to the control registers, many devices have a data buffer that the operating system can read and write. For example, a common way for computers to display pixels on the screen is to have a video RAM, which is basically just a data buffer, available for programs or the operating system to write into.

There are two alternatives by which the CPU communicates with the control registers and the device data buffers.

Port-mapped I/O:
Each control register is assigned an I/O port number, an 8- or 16-bit integer. Using a special I/O instruction such as IN REG,PORT, the CPU can read control register PORT and store the result in CPU register REG. Similarly, using OUT PORT,REG the CPU can write the contents of REG to a control register. Most early computers, including nearly all mainframes, such as the IBM 360 and all of its successors, worked this way. In this scheme, the address spaces for memory and I/O are different, as shown in Fig. (a). Port-mapped I/O uses a special class of CPU instructions specifically for performing I/O.

Fig: (a) Separate I/O and memory space. (b) Memory-mapped I/O. (c) Hybrid.

On other computers, I/O registers are part of the regular memory address space, as shown in Fig. (b). This scheme is called memory-mapped I/O, and was introduced with the PDP-11 minicomputer. Memory-mapped I/O (not to be confused with memory-mapped file I/O) uses the same address bus to address both memory and I/O devices, and the CPU instructions used to access the memory are also used for accessing devices. To accommodate the I/O devices, areas of the CPU's addressable space must be reserved for I/O.
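With memory-mapped I/O, a device register is accessed with an ordinary load or store through a pointer. A minimal sketch in C; the addresses 0xFFFF0010/0xFFFF0014 and the register meanings are hypothetical, chosen only to illustrate the idea:

#include <stdint.h>

/* Hypothetical device registers mapped into the address space. volatile
   stops the compiler from caching or reordering the accesses. */
#define DEV_STATUS ((volatile uint32_t *)0xFFFF0010)
#define DEV_DATA   ((volatile uint32_t *)0xFFFF0014)

void mmio_write_byte(uint8_t b)
{
    while ((*DEV_STATUS & 0x1) == 0)   /* assumed: bit 0 = device ready */
        ;                              /* busy-wait until ready         */
    *DEV_DATA = b;                     /* an ordinary store reaches the device */
}

Under port-mapped I/O the same register would instead be reached with the special IN/OUT instructions described above, which cannot be expressed in portable C.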

DMA (Direct Memory Access): DMA is a technique for transferring data between main memory and a device without passing it through the CPU. Computers that have DMA channels can transfer data to and from devices much more quickly than computers without a DMA channel can. This is useful for making quick backups and for real-time applications. In other words, DMA is a method of allowing data to be moved from one location to another in a computer without intervention from the central processor (CPU).

First the CPU programs the DMA controller by setting its registers so it knows what to transfer where (step 1 in the figure). It also issues a command to the disk controller telling it to read data from the disk into its internal buffer and verify the checksum. When valid data are in the disk controller's buffer, DMA can begin. The DMA controller initiates the transfer by issuing a read request over the bus to the disk controller (step 2). This read request looks like any other read request, and the disk controller does not know or care whether it came from the CPU or from a DMA controller. Typically, the memory address to write to is on the address lines of the bus, so when the disk controller fetches the next word from its internal buffer, it knows where to write it. The write to memory is another standard bus cycle (step 3). When the write is complete, the disk controller sends an acknowledgement signal to the DMA controller, also over the bus (step 4). The DMA controller then increments the memory address to use and decrements the byte count. If the byte count is still greater than 0, steps 2 through 4 are repeated until the count reaches 0. At that point the DMA controller causes an interrupt. When the operating system starts up, it does not have to copy the block to memory; it is already there.
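Step 1, programming the DMA controller, amounts to filling in a few device registers. A sketch in C; the register layout (address, count, command) is a hypothetical example, since real controllers differ, but the idea is the same: tell the controller what to transfer where, then let it run and interrupt when the count reaches 0:

#include <stdint.h>

/* Hypothetical memory-mapped register layout of a DMA controller. */
struct dma_regs {
    volatile uint32_t mem_addr;   /* where in memory to put the data   */
    volatile uint32_t count;      /* number of bytes to transfer       */
    volatile uint32_t command;    /* start bit, direction, etc.        */
};

void dma_start_read(struct dma_regs *dma, uint32_t buf, uint32_t nbytes)
{
    dma->mem_addr = buf;          /* step 1: program the transfer      */
    dma->count    = nbytes;
    dma->command  = 0x1;          /* assumed: bit 0 starts a device-to-memory read */
    /* Steps 2-4 (bus requests, memory writes, acknowledgements) now
       proceed without the CPU; the controller interrupts when count
       reaches 0. */
}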


Layers of the I/O software system:

Fig: Layers of the I/O system and the main functions of each layer.

The arrows in the figure above show the flow of control. When a user program tries to read a block from a file, for example, the operating system is invoked to carry out the call. The device-independent software looks for it in the buffer cache, for example. If the needed block is not there, it calls the device driver to issue the request to the hardware to go get it from the disk. The process is then blocked until the disk operation has been completed. When the disk is finished, the hardware generates an interrupt. The interrupt handler is run to discover what has happened, that is, which device wants attention right now. It then extracts the status from the device and wakes up the sleeping process to finish off the I/O request and let the user process continue.

"evice "river:
In computing, a device driver or software driver is a computer program allowing higher-level computer programs to interact with a hardware device. A driver typically communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device. Once the device sends data back to the driver, the driver may invoke routines in the original calling program. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface. Each device controller has registers used to give it commands or to read out its status, or both. The number of registers and the nature of the commands vary radically from device to device. For example, a mouse driver has to accept information from the mouse telling how far it has moved and which buttons are currently depressed. In contrast, a disk driver has to know about sectors, tracks, cylinders, heads, arm motion, motor drives, head settling times, and all the other mechanics of making the disk work properly. Obviously, these drivers will be very different.


Thus, each I/O device attached to a computer needs some device-specific code for controlling it. This code, called the device driver, is generally written by the device's manufacturer and delivered along with the device on a CD-ROM. Since each operating system needs its own drivers, device manufacturers commonly supply drivers for several popular operating systems. Each device driver normally handles one device type, or one class of closely related devices. For example, it would probably be a good idea to have a single mouse driver, even if the system supports several different brands of mice. As another example, a disk driver can usually handle multiple disks of different sizes and different speeds, and perhaps a CD-ROM as well. On the other hand, a mouse and a disk are so different that different drivers are necessary.

Ways to do INPUT/OUTPUT: There are three fundamentally different ways to do I/O:
1. Programmed I/O
2. Interrupt-driven I/O
3. Direct Memory Access (DMA)

Fig: CPU, Memory, and I/O Device connected by a bus

Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to be completed before proceeding. When the processor is executing a program and encounters an instruction relating to input/output, it executes that instruction by issuing a command to the appropriate input/output module. With programmed input/output, the input/output module performs the required action and then sets the appropriate bits in the input/output status register. The input/output module takes no further action to alert the processor; in particular, it doesn't interrupt the processor. Thus, it is the responsibility of the

processor to check the status of the input/output module periodically, until it finds that the operation is complete. It is simplest to illustrate programmed I/O by means of an example. Consider a process that wants to print the eight-character string ABCDEFGH:

1. It first assembles the string in a buffer in user space, as shown in the figure.
2. The user process then acquires the printer for writing by making a system call to open it.

Fig: Steps in printing a string

3. If the printer is in use by another process, the call will fail and return an error code, or will block until the printer is available, depending on the OS and the parameters of the call.
4. Once it has the printer, the user process makes a system call to print the string.
5. The OS then usually copies the buffer with the string to an array, say p, in kernel space, where it is more easily accessed, since the kernel may have to change the memory map to get at user space.
6. As soon as the printer is available, the OS copies the first character to the printer's data register, in this example using memory-mapped I/O. This action activates the printer. The character may not appear yet, because some printers buffer a line or a page before printing.
7. As soon as it has copied the first character to the printer, the OS checks to see if the printer is ready to accept another one.
8. Generally the printer has a second register which gives its status.

The actions followed by the OS are summarized below. First the data are copied to the kernel; then the OS enters a tight loop, outputting the characters one at a time. The essential aspect of programmed I/O is that after outputting a character, the CPU continuously polls the device to see if it is ready to accept another one. This behavior is often called polling or busy waiting.

copy_from_user(buffer, p, count);          /* p is the kernel buffer   */
for (i = 0; i < count; i++) {              /* loop on every character  */
    while (*printer_status_reg != READY);  /* loop until ready         */
    *printer_data_register = p[i];         /* output one character     */
}
return_to_user();

Programmed I/O is simple but has the disadvantage of tying up the CPU full time until all the I/O is done. In an embedded system, where the CPU has nothing else to do, busy waiting is reasonable. However, in

more complex systems, where the CPU has other things to do, busy waiting is inefficient; a better I/O method is needed.

Interrupt-driven I/O: The problem with programmed I/O is that the processor has to wait a long time for the input/output module of concern to be ready for either reception or transmission of more data. The processor, while waiting, must repeatedly interrogate the status of the input/output module. As a result, the performance of the entire system is degraded. An alternative approach is interrupt-driven input/output: the processor issues an input/output command to a module and then goes on to do some other useful work. The input/output module will then interrupt the processor to request service when it is ready to exchange data with the processor. The processor then executes the data transfer as before and resumes its former processing. Interrupt-driven input/output still consumes a lot of time, because every piece of data has to pass through the processor.

DMA: The previous ways of doing I/O suffer from two inherent drawbacks:
1. The I/O transfer rate is limited by the speed with which the processor can test and service a device.
2. The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer.

When large volumes of data are to be moved, a more efficient technique is required: direct memory access. The DMA function can be performed by a separate module on the system bus, or it can be incorporated into an I/O module. In either case, the technique works as follows. When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending it the following information:
- whether a read or a write is requested;
- the address of the I/O device;
- the starting location in memory to read from or write to;
- the number of words to be read or written.

Fig: DMA. The CPU issues a read/write block command to the I/O module, does something else, and is interrupted by the DMA module when the transfer is complete; it then reads the status of the DMA module and continues with the next instruction.

The processor then continues with other work. It has delegated this I/O operation to the DMA module, and that module will take care of it. The DMA module transfers the entire block of data, one word at a time, directly to or from memory, without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus the processor is involved only at the beginning and at the end of the transfer.

In programmed I/O the CPU takes care of checking whether the device is ready or not, and data may be lost. In interrupt-driven I/O the device itself informs the CPU by generating an interrupt signal, but if the data rate of the I/O device is too fast, data may still be lost, because the CPU is too slow for that particular device. In this case it is meaningful to let the device put the data directly into memory; this is called DMA. The DMA controller takes over this task from the CPU: the CPU is general purpose, while the DMA controller is special purpose. A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.

"is)s:
All real disks are organized into cylinders, each one containing as many tracks as there are heads stacked vertically. The tracks are divided into sectors, with the number of sectors around the circumference typically being 8 to 32 on floppy disks, and up to several hundred on some hard disks. The simplest designs have the same number of sectors on each track.

Disk Arm Scheduling Algorithms:
1. First Come First Served (FCFS)
2. Shortest Seek First (SSF)
3. The elevator algorithm

Consider a disk with 40 cylinders. A request comes in to read a block on cylinder 11. While the seek to cylinder 11 is in progress, new requests come in for cylinders 1, 36, 16, 34, 9, and 12, in that order.

Shortest Seek First (SSF) disk scheduling algorithm: pick the request closest to the head. Starting from cylinder 11, the requests are served in the order 12, 9, 16, 1, 34, 36.

Total number of cylinders moved: 61 (1 + 3 + 7 + 15 + 33 + 2).


The elevator algorithm for scheduling disk requests: keep moving in the same direction until there are no more outstanding requests in that direction, then switch direction. The elevator algorithm requires the software to maintain one bit, the current direction bit: UP or DOWN. If it is UP, the arm is moved to the next highest pending request; if it is DOWN, it is moved to the next lowest pending request, if any. Running the elevator algorithm on the same requests as above, and assuming the direction bit was initially UP, the order in which the cylinders are serviced is 12, 16, 34, 36, 9, and 1, which yields arm motions of 1, 4, 18, 2, 27, and 8, for a total of 60 cylinders.
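A minimal sketch of the elevator algorithm in C, for a single batch of pending requests (real drivers keep a live queue, but the arm-motion arithmetic is the same). Run on the example above it reproduces the total of 60 cylinders:

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Serve all requests in the current direction, then reverse.
   Returns the total number of cylinders moved. */
int elevator(int head, int up, int *req, int n)
{
    int total = 0, i = 0;
    qsort(req, n, sizeof(int), cmp_int);
    while (i < n && req[i] < head) i++;      /* req[i..] lie at or above head */

    if (up) {
        for (int j = i; j < n; j++)     { total += req[j] - head; head = req[j]; }
        for (int j = i - 1; j >= 0; j--){ total += head - req[j]; head = req[j]; }
    } else {
        for (int j = i - 1; j >= 0; j--){ total += head - req[j]; head = req[j]; }
        for (int j = i; j < n; j++)     { total += req[j] - head; head = req[j]; }
    }
    return total;
}

int main(void)
{
    int req[] = {1, 36, 16, 34, 9, 12};      /* pending requests from the example */
    printf("total motion: %d\n", elevator(11, 1, req, 6));  /* prints 60 */
    return 0;
}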

Question: Disk requests come in to the disk driver for cylinders 10, 20, 22, 2, 40, 6, and 30, in that order. A seek takes 6 msec per cylinder moved. How much seek time is needed for: i) FCFS ii) Shortest Seek First iii) the elevator algorithm?

Terminals: For decades, users have communicated with computers using devices consisting of a keyboard for user input and a display for computer output. For many years, these were combined into free-standing devices called terminals, which were connected to the computer by a wire. Large mainframes used in the financial and travel industries sometimes still use these terminals, typically connected to the mainframe via a modem, especially when they are far from the mainframe.


Fig: Terminal types.

Clock: Clocks (also called timers) are essential to the operation of any multiprogrammed system for a variety of reasons: they maintain the time of day and prevent one process from monopolizing the CPU, among other things. Clock software can take the form of a device driver, but a clock is neither a block device like a disk nor a character device like a mouse.

Clock hardware: the simpler kind of clock is tied to the 110- or 220-volt power line and causes an interrupt on every voltage cycle, at 50 or 60 Hz. The other kind of clock is built of three components: a crystal oscillator, a counter, and a holding register.

R.I"
RAID (redundant array of independent disks, originally redundant array of inexpensive disks) is a storage technology that combines multiple disk drive components into a logical unit. Data is distributed across the drives in one of several ways called "RAID levels", depending on what level of redundancy and performance (via parallel communication) is required.

RAID Levels:
RAID 0 (block-level striping without parity or mirroring) has no (or zero) redundancy. It provides improved performance and additional storage but no fault tolerance; hence simple stripe sets are normally referred to as RAID 0. Any drive failure destroys the array, and the likelihood of failure increases with more drives in the array (at a minimum, catastrophic data loss is almost twice as likely compared to single drives without RAID). A single drive failure destroys the entire array because when data is written to a RAID 0 volume, the data is broken into fragments called blocks. The number of blocks is dictated by the stripe size, which is a configuration parameter of the array. The blocks are written to their respective drives simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off each drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is uncorrectable. More drives in the array means higher bandwidth, but greater risk of data loss.
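The round-robin placement of blocks in RAID 0 is simple arithmetic. A sketch (assuming a stripe unit of exactly one block, which is an illustrative simplification):

/* RAID 0 striping: where logical block b of the array lives,
   given ndrives member drives and a stripe unit of one block. */
void raid0_map(int b, int ndrives, int *drive, int *offset)
{
    *drive  = b % ndrives;   /* blocks go round-robin across the drives */
    *offset = b / ndrives;   /* position of the block within its drive  */
}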


In RAID 1 (mirroring without parity or striping), data is written identically to multiple drives, thereby producing a "mirrored set"; at least 2 drives are required to constitute such an array. While more constituent drives may be employed, many implementations deal with a maximum of only 2; of course, it might be possible to use such a limited level 1 RAID itself as a constituent of a level 1 RAID, effectively masking the limitation. The array continues to operate as long as at least one drive is functioning. With appropriate operating system support, there can be increased read performance, and only a minimal write performance reduction; implementing RAID 1 with a separate controller for each drive in order to perform simultaneous reads (and writes) is sometimes called multiplexing (or duplexing when there are only 2 drives).

In RAID 2 (bit-level striping with dedicated Hamming-code parity), all disk spindle rotation is synchronized, and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive.

In RAID 3 (byte-level striping with dedicated parity), all disk spindle rotation is synchronized, and data is striped so each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive.

RAID 4 (block-level striping with dedicated parity) is identical to RAID 5 (see below), but confines all parity data to a single drive. In this setup, files may be distributed between multiple drives. Each drive operates independently, allowing I/O requests to be performed in parallel. However, the use of a dedicated parity drive could create a performance bottleneck; because the parity data must be written to a single, dedicated parity drive for each block of non-parity data, the overall write performance may depend a great deal on the performance of this parity drive.

RAID 5 (block-level striping with distributed parity) distributes parity along with the data and requires all drives but one to be present to operate; the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity, such that the drive failure is masked from the end user. However, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced and the associated data rebuilt. Additionally, there is the potentially disastrous RAID 5 write hole. RAID 5 requires at least 3 disks.

RAID 6 (block-level striping with double distributed parity) provides fault tolerance of two drive failures; the array continues to operate with up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems. This becomes increasingly important as large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are as vulnerable to data loss as a RAID 0 array until the failed drive is replaced and its data rebuilt; the larger the drive, the longer the rebuild takes. Double parity gives additional time to rebuild the array without the data being at risk if a single additional drive fails before the rebuild is complete.


Chapter 7: File Systems

File naming, file structure, file types, file access, file attributes, file operations, file descriptor, Access Control Matrix, sharing, ACL (Access Control List), directories and directory hierarchy, file system implementation, contiguous allocation, linked list allocation, i-nodes, security and multimedia files.

What is a File System? From the user's point of view, one of the most important parts of the operating system is the file system. The file system provides the resource abstraction typically associated with secondary storage. It permits users to create data collections, called files, with desirable properties, such as:

Long-term existence: Files are stored on disk or other secondary storage and do not disappear when a user logs off.

Sharable between processes: Files have names and can have associated access permissions that permit controlled sharing.

Structure: Depending on the file system, a file can have an internal structure that is convenient for particular applications. In addition, files can be organized into hierarchical or more complex structures to reflect the relationships among files.

File Naming: Files are an abstraction mechanism. They provide a way to store information on the disk and read it back later. This must be done in such a way as to shield the user from the details of how and where the information is stored, and how the disks actually work. Probably the most important characteristic of any abstraction mechanism is the way the objects being managed are named, so we start our examination of file systems with the subject of file naming. When a process creates a file, it gives the file a name. When the process terminates, the file continues to exist and can be accessed by other processes using its name.

Operations performed on files:
1. Creating a file
2. Writing a file
3. Reading a file
4. Repositioning within a file
5. Deleting a file
6. Truncating a file

File Attributes: Every file has a name and its data. In addition, all operating systems associate other information with each file, for example, the date and time the file was created and the file's size. We will call these extra items the file's attributes, although some people call them metadata. The list of attributes varies considerably from system to system.


Attribute             Meaning
Protection            Who can access the file and in what way
Password              Password needed to access the file
Creator               ID of the person who created the file
Owner                 Current owner
Read-only flag        0 for read/write; 1 for read only
Hidden flag           0 for normal; 1 for do not display in listings
System flag           0 for normal files; 1 for system file
Archive flag          0 for has been backed up; 1 for needs to be backed up
ASCII/binary flag     0 for ASCII file; 1 for binary file
Random access flag    0 for sequential access only; 1 for random access
Temporary flag        0 for normal; 1 for delete file on process exit
Lock flags            0 for unlocked; nonzero for locked
Record length         Number of bytes in a record
Key position          Offset of the key within each record
Key length            Number of bytes in the key field
Creation time         Date and time the file was created
Time of last access   Date and time the file was last accessed
Time of last change   Date and time the file was last changed
Current size          Number of bytes in the file
Maximum size          Number of bytes the file may grow to

File Operations: Files exist to store information and allow it to be retrieved later. Different systems provide different operations to allow storage and retrieval. Below is a discussion of the most common system calls relating to files.

1. Create. The file is created with no data. The purpose of the call is to announce that the file is coming and to set some of the attributes.
2. Delete. When the file is no longer needed, it has to be deleted to free up disk space. A system call for this purpose is always provided.
3. Open. Before using a file, a process must open it. The purpose of the open call is to allow the system to fetch the attributes and list of disk addresses into main memory for rapid access on

later calls( D( Close. *hen all the accesses are finished, the attri"utes and dis+ addresses are no longer needed, so the file should "e closed to free up some internal ta"le space( Many systems encourage this "y imposing a ma2imum num"er of open files on processes( ) dis+ is &ritten in "loc+s, and closing a file forces &riting of the file.s last "loc+, even though that "loc+ may not "e entirely full yet( E( 'ead. :ata are read from file( !sually, the "ytes come from the current position( ,he caller must specify ho& much data are needed and must also provide a "uffer to put them in( ;( 3rite. :ata are &ritten to the file, again, usually at the current position( If the current position is the end of the file, the file.s si@e increases( If the current position is in the middle of the file, e2isting data are over&ritten and lost forever( F( .ppend. ,his call is a restricted form of &rite( It can only add data to the end of the file( Systems that provide a minimal set of system calls do not generally have append, "ut many systems provide multiple &ays of doing the same thing, and these systems sometimes have append( G( Seek( 7or random access files, a method is needed to specify from &here to ta+e the data( One common approach is a system call, see+, that repositions the file pointer to a specific place in the file( )fter this call has completed, data can "e read from, or &ritten to, that position( H( 8et attri%utes( Processes often need to read file attri"utes to do their &or+( 7or e2ample, the !BI> ma+e program is commonly used to manage soft&are development pro-ects consisting of many source files( *hen ma+e is called, it e2amines the modification times of all the source and o"-ect files and arranges for the minimum num"er of compilations re1uired to "ring everything up to date( ,o do its -o", it must loo+ at the attri"utes, namely, the modification times( /<( Set attri%utes. Some of the attri"utes are user setta"le and can "e changed after the file has "een created( ,his system call ma+es that possi"le( ,he protection mode information is an o"vious e2ample( Most of the flags also fall in this category( //( 'ename. It fre1uently happens that a user needs to change the name of an e2isting file( ,his system call ma+es that possi"le( It is not al&ays strictly necessary, "ecause the file can usually "e copied to a ne& file &ith the ne& name, and the old file then deleted( /4( $ock. Loc+ing a file or a part of a file prevents multiple simultaneous access "y different process( 7or an airline reservation system, for instance, loc+ing the data"ase &hile ma+ing a reservation prevents reservation of a seat for t&o different travelers(


File Structure:

Three kinds of files. (a) Byte sequence. (b) Record sequence. (c) Tree.

a. Byte Sequence: The file in Fig. (a) is just an unstructured sequence of bytes. In effect, the operating system does not know or care what is in the file. All it sees are bytes. Any meaning must be imposed by user-level programs. Both UNIX and Windows 98 use this approach.

b. Record Sequence: In this model, a file is a sequence of fixed-length records, each with some internal structure. Central to the idea of a file being a sequence of records is the idea that the read operation returns one record and the write operation overwrites or appends one record. As a historical note, when the 80-column punched card was king, many (mainframe) operating systems based their file systems on files consisting of 80-character records, in effect, card images.

c. Tree: In this organization, a file consists of a tree of records, not necessarily all the same length, each containing a key field in a fixed position in the record. The tree is sorted on the key field, to allow rapid searching for a particular key.

File Organization and Access Mechanisms: When a file is used, the information stored in it must be accessed and read into the memory of the computer system. Various mechanisms are provided by the operating system to access a file:
i. Sequential access
ii. Direct access
iii. Index access

Sequential Access: This is the simplest access mechanism, in which the information stored in a file is accessed in order, one record processed after the other. For example, editors and compilers usually access files in this manner.

Direct Access: This is an alternative method for accessing a file, which is based on the disk model of a file, since a disk allows random access to any block or record of a file. For this method, a file is viewed as a numbered sequence of blocks or records which are read/written in an arbitrary manner, i.e. there is no restriction on the order of reading or writing. It is well suited for database management systems.

Index Access: In this method an index is created which contains a key field and pointers to the various blocks. To find an entry in the file for a key value, we first search the index and then use the pointer to directly access the file and find the desired entry. With large files, the index file itself may become too large to be kept in memory. One solution is to create an index for the index file: the primary index file would contain pointers to secondary index files, which would point to the actual data items.

File Allocation Methods:
1. Contiguous Allocation
2. Linked List Allocation
3. Linked List Allocation Using a Table in Memory
4. I-Nodes

Contiguous allocation:
It requires each file to occupy a set of contiguous addresses on a disk. It stores each file as a contiguous run of disk blocks; thus on a disk with 1-KB blocks, a 50-KB file would be allocated 50 consecutive blocks. Both sequential and direct access are supported by the contiguous allocation method. Contiguous disk space allocation has two significant advantages.

1. First, it is simple to implement, because keeping track of where a file's blocks are is reduced to remembering two numbers: the disk address of the first block and the number of blocks in the file. Given the number of the first block, the number of any other block can be found by a simple addition.
2. Second, the read performance is excellent, because the entire file can be read from the disk in a single operation. Only one seek is needed (to the first block). After that, no more seeks or rotational delays are needed, so data come in at the full bandwidth of the disk.

Thus contiguous allocation is simple to implement and has high performance. Unfortunately, it also has a major drawback: in time, the disk becomes fragmented, consisting of files and holes, and compaction is needed to avoid this. Examples of contiguous allocation: CD-ROMs and DVD-ROMs.

Linked List Allocation: Keep each file as a linked list of disk blocks, as shown in the figure. The first word of each block is used as a pointer to the next one. The rest of the block is for data.

Fig: Storing a file as a linked list of disk blocks.

Unlike contiguous allocation, every disk block can be used in this method. No space is lost to disk fragmentation. The major problem with linked allocation is that it is efficient only for sequential access files: to find the ith block of a file, we must start at the beginning of that file and follow the pointers until we reach the ith block. It is inefficient to support a direct access capability for linked allocation of files. Another problem of linked list allocation is reliability: since the files are linked together by pointers scattered all over the disk, consider what happens if a pointer is lost or damaged.
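The sequential-only weakness can be seen in a short sketch: reaching the ith block means one disk read per step. next_block() here is a hypothetical helper standing for reading the "next" pointer stored in the first word of a disk block, not a real API:

extern int next_block(int block);   /* hypothetical: returns the block the
                                       given block points to, or -1 at end */

/* Find the ith block of a file stored as a linked list of blocks. */
int ith_block(int first_block, int i)
{
    int b = first_block;
    while (i-- > 0 && b != -1)
        b = next_block(b);          /* one disk read per step: this is why
                                       direct access is inefficient here  */
    return b;
}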

Indexed Allocation (I-Nodes): This solves the external fragmentation and size-declaration problems of contiguous allocation. In this scheme all the pointers are brought together into one location called the index block. Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file. The directory contains the address of the index block.

Fig: Indexed allocation of disk space

To read the ith block, we use the pointer in the ith index block entry to find and read the desired block. This scheme is similar to the paging scheme.
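A sketch of that lookup in C. read_block() is a hypothetical helper that reads a disk block into memory, and the block geometry (a 4-KB block of 32-bit addresses) is an illustrative assumption:

#include <stdint.h>

extern void read_block(int block, void *buf);   /* hypothetical disk read */

/* Find the ith block of a file under indexed allocation. */
int ith_block_indexed(int index_block, int i)
{
    uint32_t index[1024];            /* assumed: 4-KB block of 32-bit addresses */
    read_block(index_block, index);  /* one read fetches the whole index  */
    return (int)index[i];            /* direct lookup, like a page table  */
}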


Fig: An i-node with three levels of indirect blocks.


File System 7ayout:

Fig: A possible file system layout.

Directories: To keep track of files, file systems normally have directories or folders, which, in many systems, are themselves files. In this section we will discuss directories, their organization, their properties, and the operations that can be performed on them.

Simple Directories: A directory typically contains a number of entries, one per file. One possibility is shown in Fig. (a), in which each entry contains the file name, the file attributes, and the disk addresses where the data are stored.


"hree fi e system designs. +a, Sing e directory shared by a users. +b, #ne directory per user. +c, Arbitrary tree per user. "he etters indicate the directory or fi e:s owner.

.ccess Control Matri%


The access controls provided with an operating system typically authenticate principals using some mechanism such as passwords or Kerberos, then mediate their access to files, communications ports, and other system resources. Their effect can often be modelled by a matrix of access permissions, with columns for files and rows for users. We'll write r for permission to read, w for permission to write, x for permission to execute a program, and (-) for no access at all, as shown in the figure below.

Fig: Access Control Matrix

In this simplified example, Sam is the system administrator and has universal access (except to the audit trail, which even he should only be able to read). Alice, the manager, needs to execute the operating system and the application, but only through the approved interfaces; she mustn't have the ability to tamper with them. She also needs to read and write the data. Bob, the auditor, can read everything. Access control matrices (whether in two or three dimensions) can be used to implement protection mechanisms, as well as just model them. But they do not scale well. For instance, a bank with 50,000 staff and 300 applications would have an access control matrix of 15 million entries. This is

inconveniently large. It might not only impose a performance problem but also be vulnerable to administrators' mistakes. We will usually need a more compact way of storing and managing this information. The two main ways of doing this are to use groups or roles to manage the privileges of large sets of users simultaneously, or to store the access control matrix either by columns (access control lists) or by rows (capabilities, sometimes known as "tickets") or certificates.

Access Control List:


An ACL consists of a set of ACL entries. An ACL entry specifies the access permissions on the associated object for an individual user or a group of users as a combination of read, write and execute permissions.

Each file has an ACL associated with it. File F1 has two entries in its ACL (separated by a semicolon). The first entry says that any process owned by user A may read and write the file. The second entry says that any process owned by user B may read the file. All other accesses by these users and all accesses by other users are forbidden. Note that the rights are granted by user, not by process. As far as the protection system goes, any process owned by user A can read and write file F1; it does not matter if there is one such process or 100 of them. It is the owner, not the process ID, that matters. File F2 has three entries in its ACL: A, B, and C can all read the file, and in addition B can also write it. No other accesses are allowed. File F3 is apparently an executable program, since B and C can both read and execute it; B can also write it.

Unix Operating System Security: In Unix (and its popular variant Linux), files are not allowed to have arbitrary access control lists, but simply rwx attributes for the resource owner, the group, and the world. These attributes allow the file to be read, written, and executed. The access control list as normally displayed has a flag to show whether the file is a directory; then flags r, w, and x for owner, group, and world respectively; it then has the owner's name and the group name. A directory with all flags set would have the ACL:

drwxrwxrwx Alice Accounts
-rw-r----- Alice Accounts

The second entry records that the file is not a directory; the file owner can read and write it; group members can read it but not write it; nongroup members have no access at all; the file owner is Alice; and the group is Accounts.
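These flags can be read back with the POSIX stat(2) call. A minimal sketch that prints the mode string in the style shown above (the file name demo.txt is arbitrary):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("demo.txt", &st) != 0) { perror("stat"); return 1; }

    printf("%c%c%c%c%c%c%c%c%c%c\n",
           S_ISDIR(st.st_mode) ? 'd' : '-',
           (st.st_mode & S_IRUSR) ? 'r' : '-',   /* owner */
           (st.st_mode & S_IWUSR) ? 'w' : '-',
           (st.st_mode & S_IXUSR) ? 'x' : '-',
           (st.st_mode & S_IRGRP) ? 'r' : '-',   /* group */
           (st.st_mode & S_IWGRP) ? 'w' : '-',
           (st.st_mode & S_IXGRP) ? 'x' : '-',
           (st.st_mode & S_IROTH) ? 'r' : '-',   /* world */
           (st.st_mode & S_IWOTH) ? 'w' : '-',
           (st.st_mode & S_IXOTH) ? 'x' : '-');
    return 0;
}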


Chapter 8: Distributed Operating Systems


Introduction, goals, network architecture, hardware and software concepts, communication in distributed systems, ATM (Asynchronous Transfer Mode), client-server model, RPC (Remote Procedure Call), group communication, processes and processors in distributed systems, taxonomy of MIMD computer systems, clock synchronization, scheduling in distributed systems.

A distributed system: multiple connected CPUs working together; a collection of independent computers that appears to its users as a single coherent system. Examples: parallel machines, networked machines. A distributed system is a collection of independent computers that appears to the users of the system as a single computer. This definition has two aspects. The first one deals with the hardware: the machines are autonomous. The second one deals with the software: the users think of the system as a single computer. Example of a distributed system: an automatic banking (teller machine) system.

Advantages:

Performance: very often a collection of processors can provide higher performance (and a better price/performance ratio) than a centralized computer.

Distribution: many applications involve, by their nature, spatially separated machines (banking, commercial, automotive systems).

Reliability (fault tolerance): if some of the machines crash, the system can survive.

Incremental growth: as requirements on processing power grow, new machines can be added incrementally.

Sharing of data/resources: shared data is essential to many applications (banking, computer-supported cooperative work, reservation systems); other resources can also be shared (e.g. expensive printers).

Communication: facilitates human-to-human communication.

Disadvantages:

Difficulties of developing distributed software: how should operating systems, programming languages and applications look?
Networking problems: several problems are created by the network infrastructure, which have to be dealt with: loss of messages, overloading, ...
Security problems: sharing generates the problem of data security.

Design issues that arise specifically from the distributed nature of the application:
- Transparency
- Communication
- Performance & scalability
- Heterogeneity
- Openness
- Reliability & fault tolerance
- Security

Transparency: How to achieve the single system image? How to "fool" everyone into thinking that the collection of machines is a "simple" computer?

Access transparency: local and remote resources are accessed using identical operations.
Location transparency: users cannot tell where hardware and software resources (CPUs, files, data bases) are located; the name of a resource shouldn't encode its location.
Migration (mobility) transparency: resources should be free to move from one location to another without having their names changed.
Replication transparency: the system is free to make additional copies of files and other resources (for purposes of performance and/or reliability), without the users noticing. Example: several copies of a file; at a certain request, the copy closest to the client is accessed.
Concurrency transparency: the users will not notice the existence of other users in the system (even if they access the same resources).
Failure transparency:

applications should be able to complete their task despite failures occurring in certain components of the system.
Performance transparency: load variation should not lead to performance degradation. This could be achieved by automatic reconfiguration in response to changes of the load; it is difficult to achieve.

Communication: Components of a distributed system have to communicate in order to interact. This implies support at two levels:
1. Networking infrastructure (interconnections & network software).
2. Appropriate communication primitives and models and their implementation:
   - communication primitives: send and receive (message passing), remote procedure call (RPC);
   - communication models: client-server communication (implies a message exchange between two processes: the process which requests a service and the one which provides it); group multicast (the target of a message is a set of processes, which are members of a given group).

Performance and Scalability: Several factors influence the performance of a distributed system:
- the performance of individual workstations;
- the speed of the communication infrastructure;
- the extent to which reliability (fault tolerance) is provided (replication and preservation of coherence imply large overheads);
- flexibility in workload allocation: for example, idle processors (workstations) could be allocated automatically to a user's task.

Scalability: The system should remain efficient even with a significant increase in the number of users and resources connected:
- the cost of adding resources should be reasonable;
- performance loss with an increased number of users and resources should be controlled;
- software resources should not run out (number of bits allocated to addresses, number of entries in tables, etc.)

Architecture: Architectures for parallel programming. Parallel computers can be divided into two categories: those that contain physical shared memory, called multiprocessors, and those that do not, called

multicomputers. A simple taxonomy is given in the figure.

Hardware and Software Concepts: Even though all distributed systems consist of multiple CPUs, there are several different ways the hardware can be organized in terms of how the CPUs are interconnected and how they communicate. Various classification schemes for multiple-CPU computer systems have been proposed. The most frequently cited taxonomy is Flynn's, although it is fairly rudimentary. Flynn proposed the following categories of computer systems:

Single instruction single data (SISD) stream: A single processor executes a single instruction stream to operate on data stored in a single memory. All traditional uniprocessor computers (i.e. those having only one CPU) fall in this category, from personal computers to mainframes.

Single instruction multiple data (SIMD) stream: This type refers to array processors with one instruction unit that fetches an instruction and then commands many data units to carry it out in parallel, each with its own data. These machines are useful for computations that repeat the same calculation on many sets of data. Vector and array processors fall into this category.

Multiple instruction single data (MISD) stream: A sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence. This structure has never been implemented; no known computers fit this model.

Multiple instruction multiple data (MIMD) stream: A set of processors simultaneously execute different instruction sequences on different data sets, which essentially means a group of independent computers, each with its own program counter, program and data. All distributed systems are MIMD.

-ig: & ta"onomy of parallel and distri9uted computer systems.



We divide all MIMD computers into two groups: those that have shared memory, usually called multiprocessors, and those that do not, sometimes called multicomputers. The essential difference is this: in a multiprocessor, there is a single virtual address space that is shared by all CPUs. If any CPU writes, for example, the value 44 to address 1000, any other CPU subsequently reading from its address 1000 will get the value 44. All the machines share the same memory.

In contrast, in a multicomputer, every machine has its own private memory. If one CPU writes the value 44 to address 1000, when another CPU reads address 1000 it will get whatever value was there before; the write of 44 doesn't affect its memory at all. A common example of a multicomputer is a collection of personal computers connected by a network.

Each of these categories can be further divided into bus-based and switched, based on the architecture of the interconnection network. By bus we mean that there is a single network, backbone, bus, cable, or other medium that connects all the machines. Cable television uses a scheme like this: the cable company runs a cable down the street, and all the subscribers have taps running to it from their television sets. Switched systems do not have a single backbone like cable television; instead there are individual wires from machine to machine, with many different wiring patterns in use. Messages move along the wires, with an explicit switching decision made at each step to route the message along one of the outgoing wires. The worldwide public telephone system is organized in this way.

In a tightly coupled system, the delay experienced when a message is sent from one computer to another is short, and the data rate is high; that is, the number of bits per second that can be transferred is large. In a loosely coupled system, the opposite is true: the inter-machine delay is large and the data rate is low. For example, two CPU chips on the same printed circuit board and connected by wires are likely to be tightly coupled, whereas two computers connected by a 2400 bits/sec modem over the telephone system are certain to be loosely coupled. Tightly coupled systems tend to be used more as parallel systems (working on a single problem) and loosely coupled systems tend to be used as distributed systems (working on many unrelated problems).

Bus-Based Multiprocessors:
A bus-based multiprocessor consists of some number of CPUs, all connected to a common bus, along with a memory module.

A simple configuration is to have a high-speed backplane or motherboard into which CPU or memory cards can be inserted. A typical bus has 32 or 64 address lines, 32 or 64 data lines, and perhaps 32 or more control lines, all of which operate in parallel. The problem with this scheme is that with as few as 4 or 5 CPUs, the bus will usually be overloaded and performance will drop drastically.

The solution is to add high-speed cache memory between the CPUs and the bus, as shown in the figure. The cache holds the most recently accessed words. All memory requests go through the cache. If the word requested is in the cache, the cache itself responds to the CPU and no bus request is made. If the cache is large enough, the probability of success, called the hit rate, will be high, and the amount of bus traffic per CPU will drop dramatically, allowing many more CPUs in the system.

However, the introduction of caches brings a serious problem with it. Suppose that two CPUs, A and B, each read the same word into their respective caches. Then A overwrites the word. When B next reads that word, it gets the old value from its cache, not the value A just wrote. The memory is now incoherent, and the system is difficult to program. One solution is to combine a write-through cache with a snoopy cache. In a write-through cache, cache memories are designed so that whenever a word is written to the cache, it is written through to memory as well. In addition, all caches constantly monitor the bus: whenever a cache sees a write occurring to a memory address present in its own cache, it either removes that entry or updates it with the new value. Such a cache is called a snoopy cache, because it is always "snooping" on the bus. Using this design, it is possible to put about 32 or possibly 64 CPUs on a single bus.
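As a rough illustration of why the hit rate matters, the following back-of-envelope C sketch (added here; the per-CPU reference rate and bus capacity are assumed numbers, not figures from the notes) estimates how many CPUs one bus can carry when only cache misses generate bus traffic.

/* With a per-CPU cache, only misses reach the bus, so one bus can support
 * roughly  bus_capacity / ((1 - hit_rate) * refs_per_cpu)  CPUs. */
#include <stdio.h>

int main(void) {
    double refs_per_cpu = 10e6;   /* assumed: memory references/sec per CPU */
    double bus_capacity = 100e6;  /* assumed: references/sec the bus can carry */

    for (int h = 0; h <= 95; h += 5) {
        double hit = h / 100.0;
        double bus_refs_per_cpu = (1.0 - hit) * refs_per_cpu;
        printf("hit rate %.2f -> about %4.0f CPUs per bus\n",
               hit, bus_capacity / bus_refs_per_cpu);
    }
    return 0;
}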

Switched Multiprocessors:
To build a multiprocessor with more than 64 processors, a different method is needed to connect the CPUs with the memory. Two switching techniques are employed:
a. Crossbar switch
b. Omega switch

Fig a). Crossbar switch



7ig "'( )n Omega S&itching Bet&or+



Crossbar Switch:
Memory is divided into modules, which are connected to the CPUs by a crossbar switch. Each CPU and each memory module has a connection coming out of it, as shown. At every intersection is a tiny electronic crosspoint switch that can be opened and closed in hardware. When a CPU wants to access a particular memory, the crosspoint switch connecting them is closed, to allow the access to take place. If two CPUs try to access the same memory simultaneously, one of them will have to wait. The downside of the crossbar switch is that with n CPUs and n memories, n^2 crosspoint switches are needed. For large n this number can be prohibitive.

Omega Switching Network:
It contains 2x2 switches, each having two inputs and two outputs. Each switch can route either input to either output. A careful look at the figure will show that with proper settings of the switches, every CPU can access every memory. In the general case, with n CPUs and n memories, the omega network requires log2(n) switching stages, each containing n/2 switches, for a total of (n log2 n)/2 switches.
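The difference between these two growth rates is easy to tabulate. This short C sketch (added for illustration) evaluates the two formulas above, n^2 crosspoints versus (n log2 n)/2 switches, for powers of two:

/* Compare switch counts for a crossbar (n*n crosspoints) and an omega
 * network (log2(n) stages of n/2 two-by-two switches). */
#include <stdio.h>

int main(void) {
    for (unsigned n = 4; n <= 1024; n *= 2) {
        unsigned long crossbar = (unsigned long)n * n;
        unsigned log2n = 0;
        for (unsigned v = n; v > 1; v >>= 1)   /* compute log2(n) */
            log2n++;
        unsigned long omega = (unsigned long)n * log2n / 2;
        printf("n=%4u  crossbar=%8lu  omega=%6lu\n", n, crossbar, omega);
    }
    return 0;
}

For n = 1024 this prints over a million crosspoints for the crossbar but only 5120 switches for the omega network, which is why large switched multiprocessors use multistage networks.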

Bus-Based Multicomputers:
Building a multicomputer (i.e. with no shared memory) is easy. Each CPU has a direct connection to its own local memory. The only problem left is how the CPUs communicate with each other.

Fig. A multicomputer consisting of workstations on a LAN.

The CPUs are connected to each other via a LAN (10 or 100 Mbps).

Switched Multicomputers:
Switched multicomputers do not have a single bus over which all traffic goes. Instead, they have a collection of point-to-point connections. In the figure we see two of the many designs that have been proposed and built: a grid and a hypercube. A grid is easy to understand and easy to lay out on a printed circuit board or chip. This architecture is best suited to problems that are two-dimensional in nature (graph theory, vision, etc.). Another design is the hypercube, which is an n-dimensional cube. One can imagine a 4-dimensional hypercube as a pair of ordinary cubes with the corresponding vertices connected, as shown in the figure. Similarly, a 5-dimensional hypercube can be represented as two copies of the 4-dimensional one with the corresponding vertices connected, and so on. In general, an n-dimensional hypercube has 2^n vertices, each holding one CPU. Each CPU has a fan-out of n, so the interconnection complexity grows only logarithmically with the number of CPUs. Various interconnection topologies have been proposed and built, but all have the property that each CPU has direct and exclusive access to its own private memory.
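A concrete way to see the logarithmic structure is hypercube routing: neighbouring nodes differ in exactly one bit of their n-bit labels, so the minimum hop count between two nodes is the Hamming distance between their labels. The dimension-order routing below is a common textbook scheme, sketched here as an added illustration rather than taken from the notes.

/* In an n-dimensional hypercube, node labels are n-bit numbers and
 * neighbours differ in exactly one bit; the hop count between two nodes
 * is therefore the number of differing label bits. */
#include <stdio.h>

static int hops(unsigned src, unsigned dst) {
    unsigned diff = src ^ dst;
    int count = 0;
    while (diff) {                /* count differing label bits */
        count += diff & 1u;
        diff >>= 1;
    }
    return count;
}

int main(void) {
    /* 4-dimensional hypercube: 2^4 = 16 nodes, each with fan-out 4 */
    unsigned src = 0x0, dst = 0xB;                 /* 0000 -> 1011 */
    printf("route 0x%X -> 0x%X takes %d hops\n", src, dst, hops(src, dst));

    /* dimension-order routing: fix one differing bit per step */
    unsigned cur = src;
    for (int bit = 0; bit < 4; bit++)
        if ((cur ^ dst) & (1u << bit)) {
            cur ^= (1u << bit);
            printf("  forward to node 0x%X\n", cur);
        }
    return 0;
}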

Fig. Interconnect topologies: a single switch, a ring, a grid, and a 4-D hypercube.

NUMA Machines:
Various researchers have proposed intermediate designs that try to capture the desirable properties of both architectures. Most of these designs attempt to simulate shared memory on multicomputers. All of them have the property that a process executing on any machine can access data from its own memory without any delay, whereas access to data located on another machine entails considerable delay and overhead, as a request message must be sent there and a reply received. Computers in which references to some memory addresses are cheap (i.e., local) and others are expensive (i.e., remote) have become known as NUMA (NonUniform Memory Access) machines. Our definition of NUMA machines is somewhat broader than what some other writers have used: we regard any machine that presents the programmer with the illusion of shared memory, but implements it by sending messages across a network to fetch chunks of data, as NUMA.

Communication in Distributed Systems:

Layered Communication:
All communication in a distributed system is based on message passing. When a process A wants to communicate with a process B, it first builds a message in its own address space. Then it executes a system call that causes the operating system to fetch the message and send it over the network to B. To make it easier to deal with the numerous levels and issues involved in communication, the International Standards Organization has developed a reference model that clearly identifies the various levels involved, gives them standard names, and points out which level should do which job. This model is called the OSI (Open Systems Interconnection) Reference Model.

Fig. The layers of the OSI reference model.
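The key mechanic of such a layered model is that each layer on the sending side prepends its own header to whatever it receives from the layer above, and the peer layer strips it off again. Here is a minimal C sketch of that wrapping (an added illustration; the layer names follow OSI, but the code itself is an assumption, not part of the notes):

/* Each layer prepends its header to the data handed down from above;
 * the receiving side peels the headers off in the reverse order. */
#include <stdio.h>
#include <string.h>

static void wrap(char *packet, const char *header) {
    char tmp[256];
    snprintf(tmp, sizeof(tmp), "%s%s", header, packet);
    strcpy(packet, tmp);
}

int main(void) {
    char packet[256] = "hello";          /* message built by process A */
    wrap(packet, "[transport]");         /* each layer adds its header... */
    wrap(packet, "[network]");
    wrap(packet, "[datalink]");
    printf("on the wire: %s\n", packet); /* ...and the peer strips them */
    return 0;
}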


RPC:
A remote procedure call (RPC) is an inter-process communication mechanism that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. When the software in question uses object-oriented principles, RPC is called remote invocation or remote method invocation.

Principle of RPC:

- The client makes a procedure call (just like a local procedure call) to the client stub.
- The server is written as a standard procedure.
- The stubs take care of packaging arguments and sending messages; packaging the parameters is called marshalling.
- A stub compiler generates the stubs automatically from specifications written in an Interface Definition Language (IDL), which simplifies the programmer's task.

A minimal sketch of what a client stub does is given below.
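The sketch is an added illustration with a made-up wire format and helper names (net_send, net_recv, fake_server); a real stub would be generated from an IDL specification and would hand the message to the operating system. What it shows is only the marshalling and unmarshalling that the stubs hide from the caller.

/* A client stub for a remote add(): it marshals the opcode and arguments
 * into a message, "sends" it, and unmarshals the reply, so the caller
 * just sees an ordinary procedure call. */
#include <stdio.h>
#include <string.h>

#define OP_ADD 1

/* stand-in for the network; a real stub would ask the OS to send/receive */
static unsigned char wire[64];
static void net_send(const void *m, size_t n) { memcpy(wire, m, n); }
static void net_recv(void *m, size_t n)       { memcpy(m, wire, n); }

/* pretend server: unpacks the request, does the work, packs the reply */
static void fake_server(void) {
    int op, a, b, r;
    memcpy(&op, wire, sizeof(int));
    memcpy(&a, wire + sizeof(int), sizeof(int));
    memcpy(&b, wire + 2 * sizeof(int), sizeof(int));
    r = (op == OP_ADD) ? a + b : 0;
    memcpy(wire, &r, sizeof(int));
}

/* the client stub: to its caller it looks like a local procedure */
static int add(int a, int b) {
    unsigned char msg[3 * sizeof(int)];
    int op = OP_ADD, result;
    memcpy(msg, &op, sizeof(int));              /* marshal the parameters */
    memcpy(msg + sizeof(int), &a, sizeof(int));
    memcpy(msg + 2 * sizeof(int), &b, sizeof(int));
    net_send(msg, sizeof(msg));                 /* hand the message over */
    fake_server();                              /* stands in for the remote side */
    net_recv(&result, sizeof(int));             /* unmarshal the reply */
    return result;
}

int main(void) {
    printf("add(2, 3) = %d\n", add(2, 3));      /* prints add(2, 3) = 5 */
    return 0;
}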


Example of RPC:

A remote procedure call occurs in the following steps:
1. The client procedure calls the client stub in the normal way.
2. The client stub builds a message and calls the local operating system.
3. The client's OS sends the message to the remote OS.
4. The remote OS gives the message to the server stub.
5. The server stub unpacks the parameters and calls the server.
6. The server does the work and returns the result to the stub.
7. The server stub packs it in a message and calls its local OS.
8. The server's OS sends the message to the client's OS.
9. The client's OS gives the message to the client stub.
10. The stub unpacks the result and returns it to the client.

The net effect of all these steps is to convert the local call by the client procedure to the client stub into a local call to the server procedure, without either the client or the server being aware of the intermediate steps.

ATM:
The "asynchronous" in ATM means that ATM devices do not send and receive information at fixed speeds or using a timer, but instead negotiate transmission speeds based on hardware and information-flow reliability. The "transfer mode" in ATM refers to the fixed-size cell structure used for packaging information. ATM transfers information in fixed-size units called cells. Each cell consists of 53 octets, or bytes, as shown in the figure.
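As an added illustration, the fixed cell size can be written down as a C structure. The 5-octet header / 48-octet payload split used here is the standard ATM cell layout, though the notes above state only the 53-octet total.

/* The fixed-size ATM cell: 53 octets in total, split by the standard
 * into a 5-octet header and a 48-octet payload. */
#include <stdio.h>

#define ATM_HEADER_OCTETS  5
#define ATM_PAYLOAD_OCTETS 48

struct atm_cell {
    unsigned char header[ATM_HEADER_OCTETS];    /* routing info (VPI/VCI etc.) */
    unsigned char payload[ATM_PAYLOAD_OCTETS];  /* user data */
};

int main(void) {
    printf("cell size = %zu octets\n", sizeof(struct atm_cell));  /* 53 */
    return 0;
}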


The figure illustrates an ATM network with 4 switches. Each of these switches has 4 ports, each used for both input and output lines. The inside of the generic switch is as shown in fig b.

ATM Layer:


Client-Server Model:
The idea behind this model is to structure the operating system as a group of cooperating processes, called servers, that offer services to the users, called clients. The clients and servers normally all run the same microkernel, with both clients and servers running as user processes. A machine may run a single process, or it may run multiple clients, multiple servers, or a mixture of both.

Fig. Client and server model.

To avoid the considerable overhead of connection-oriented protocols such as TCP and OSI, the client-server model is usually based on a simple connectionless request/reply protocol. The client sends a request message to the server asking for some service (e.g. to read a block of a file). The server does the work and returns the requested data, or an error code indicating why the work could not be performed, as shown in the figure above.
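A minimal sketch of such a request/reply exchange over UDP, added here for illustration: the server address, port, and request text are made-up, and a real client would also need a timeout and retransmission for lost datagrams. The point is that one datagram goes out and one comes back, with no connection set-up or tear-down.

/* Connectionless request/reply: send one request datagram to the server
 * and block waiting for the single reply datagram. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);        /* connectionless socket */
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                    /* assumed server port */
    inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);  /* assumed address */

    const char *request = "READ block 7";          /* e.g. read a file block */
    char reply[512];

    /* one datagram out, one datagram back: no connection is established */
    sendto(s, request, strlen(request), 0,
           (struct sockaddr *)&srv, sizeof(srv));
    ssize_t n = recvfrom(s, reply, sizeof(reply) - 1, 0, NULL, NULL);
    if (n >= 0) {
        reply[n] = '\0';
        printf("reply: %s\n", reply);              /* data or an error code */
    }
    close(s);
    return 0;
}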

