
Chapter 3 Processes and Threads

Discussions on:
Process concept. Programmer view of processes. OS view of processes. Interacting processes. Threads. Processes in UNIX. Threads in Solaris (a UNIX SVR4-based OS).
Chapter 3- Processes and threads 2

October 14, 2011

Processes and Programs


A process is an execution of a program. Several processes may represent executions of the same program; each execution is initiated with its own data. A programmer uses processes to achieve execution of programs in a sequential or concurrent manner.

Processes and Programs


A program is a passive entity that does not perform any actions by itself. An OS considers processes as the entities it schedules. To understand what a process is, let us discuss how an OS executes a program.


A Program and an abstract view of its execution


Program P
    file info;
    int item;
    open(info, read);
    while not end-of-file(info)
        read(info, item);
    print item;
    stop;

[Figure: (a) program P; (b) an abstract view of its execution: the CPU executes P within its address space, which holds the instructions, data area and stack of P, while P reads file info and prints to a printer]

A Program and an abstract view of its execution


Program P contains declarations of a file info and a variable item, and statements that read values from info, use them to perform some calculations, and print a result before coming to a halt. To realize the execution of P, the OS allocates memory to accommodate P's address space, allocates a printer for printing, arranges access to file info, and schedules P for execution.

Relationships between processes and programs


Two kinds of relationships can exist between processes and programs:

Relationship | Examples
One-to-one | A single execution of a sequential program.
Many-to-one | Many simultaneous executions of the same program; execution of a concurrent program.


Child processes
The OS initiates the execution of a program by creating a process for it. This is called the main process for the execution. A main process may create other processes, which become its child processes. A child process may, in turn, create other processes. All these processes form a tree with the main process as its root.
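The tree of processes described above can be sketched on a UNIX-like system with Python's os.fork (an illustrative sketch, not from the text; create_child and main_process are hypothetical names):

```python
import os

def create_child():
    # Fork a child; in the parent, fork() returns the child's pid,
    # while in the child it returns 0.
    pid = os.fork()
    if pid == 0:
        # Child process: it could fork further children here,
        # extending the tree rooted at the main process.
        os._exit(0)
    return pid

def main_process(n):
    # The main process is the root of the process tree.
    children = [create_child() for _ in range(n)]
    codes = []
    for pid in children:
        _, status = os.waitpid(pid, 0)        # reap each child on termination
        codes.append(os.WEXITSTATUS(status))  # collect its exit code
    return codes
```

For example, main_process(2) creates two children and collects their exit codes.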


Process tree and Processes


Consider a real-time data logging application that receives data samples from a satellite and stores them on a disk. Each sample arriving from the satellite is deposited in a special register of the computer. The primary process of the application, which we call the data_logger process, has to perform the following three functions:
1. Copy the sample from the special register into memory.
2. Write the sample into a disk file.
3. Perform some housekeeping operations, e.g., copy selected fields of the incoming samples into another file used for statistical analysis.

Process tree and Processes


[Figure: (a) the process tree: Data_logger with child processes Copy_sample, Disk_write and Housekeeping; (b) the processes: Copy_sample moves samples from the register into buffer_area, from which Disk_write writes them to disk]

Process tree and Processes


The Data_logger process creates three child processes. Copy_sample copies the sample from the register into a memory area named buffer_area, which can hold, say, 50 samples. Disk_write writes a sample from buffer_area into a disk file. Housekeeping performs the housekeeping operations.

Process tree and Processes


Execution of the three processes can overlap as follows: Copy_sample can copy a sample into buffer_area while Disk_write writes a previous sample to the disk and Housekeeping copies fields from samples already stored on the disk for statistical analysis. This arrangement provides a smaller worst-case response time for the application than if these functions were executed sequentially.
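A minimal sketch of this overlap using Python's multiprocessing module (illustrative, not from the text: a bounded queue stands in for buffer_area, a shared list stands in for the disk file, and housekeeping is omitted for brevity):

```python
import multiprocessing as mp

def copy_sample(samples, buffer_area):
    # Stand-in for copying samples from the special register into memory.
    for s in samples:
        buffer_area.put(s)        # blocks while buffer_area is full
    buffer_area.put(None)         # sentinel: no more samples

def disk_write(buffer_area, log):
    # Writes each buffered sample to the "disk file" (a shared list here).
    while True:
        s = buffer_area.get()     # blocks until a sample is available
        if s is None:
            break
        log.append(s)

def data_logger(samples):
    ctx = mp.get_context("fork")            # POSIX-only start method
    buffer_area = ctx.Queue(maxsize=50)     # buffer_area holds up to 50 samples
    manager = ctx.Manager()
    log = manager.list()
    children = [ctx.Process(target=copy_sample, args=(samples, buffer_area)),
                ctx.Process(target=disk_write, args=(buffer_area, log))]
    for c in children:                      # the main process creates children
        c.start()
    for c in children:                      # and waits for them to terminate
        c.join()
    return list(log)
```

The two children run concurrently: copying of one sample can overlap writing of an earlier one.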


Pseudocode of the RTA


begin
    /* create processes */
    create_process(copy_sample, move_buffer(), 3);
    create_process(disk_write, write_to_disk(), 2);
    create_process(housekeeping, analysis(), 1);
    /* check status of all processes */
    over := false;
    while (over == false)
        if (status(copy_sample) == term and status(disk_write) == term
                and status(housekeeping) == term)
            terminate();
            over := true;
end

create_process is a library routine that takes three parameters: a process id, a procedure name, and an integer specifying the priority. It leads to a system call that creates a new process. The priority value is entered in an OS data structure and used during scheduling.


Pseudocode of the RTA


move_buffer()
{
    /* copy the sample from the special register */
    . . .
    terminate();
}

disk_write()
{
    /* copy the sample from buffer_area onto the disk */
    . . .
    terminate();
}

housekeeping()
{
    /* perform statistical analysis of the samples on the disk */
    . . .
    terminate();
}


Advantages of child processes


Advantage | Explanation
Computation speed-up | Creation of multiple processes in an application provides multitasking and enables the OS to interleave execution of I/O-bound and CPU-bound processes in the application, providing computation speed-up.
Priority for critical functions | A child process created to perform a critical function in an application may be assigned a higher priority than other processes.
Protecting the parent process from errors | The OS cancels a child process if an error arises during its execution. This action does not affect the parent process.


Programmer view of processes


In the programmer view, processes are a means to achieve concurrent execution of a program. The main process of a concurrent program creates child processes and assigns priorities among them. The main process and the child processes have to interact to achieve a common goal. This interaction may involve exchange of data and synchronization between processes.


Programmer view of processes


An OS provides the following four operations to implement the programmer view of processes:
1. Creating child processes and assigning priorities to them.
2. Terminating child processes.
3. Determining the status of child processes.
4. Sharing, communication and synchronization between processes.


Sharing, communication and synchronization between processes

Four kinds of process interaction:

Interaction | Description
Data sharing | Shared data may become inconsistent if several processes update the data at the same time. Hence processes must interact to decide when it is safe for a process to access shared data.
Message passing | Processes exchange information by sending messages to one another.
Synchronization | To fulfill a common goal, processes must coordinate their activities and perform their actions in a desired order.
Signals | A signal is used to convey occurrence of an exceptional situation to a process.

Sharing, communication and synchronization between processes

Synchronization: If an action ai is to be performed only after an action aj, the process that wishes to perform ai is made to wait until some other process performs aj. An OS provides facilities to check whether another process has performed a specific action.

Signal: A signal is used to convey occurrence of an exceptional situation to a process so that the process may perform special actions to handle the situation. The code that a process wishes to execute on receiving a signal is called a signal handler.
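As an illustration (not from the text), a Python sketch of registering a signal handler and then delivering a signal to the process itself on a POSIX system:

```python
import os
import signal

received = []

def handler(signum, frame):
    # Signal handler: the code the process executes on receiving the signal.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)    # register the handler for SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)      # convey an "exceptional situation"
while not received:                       # spin until the handler has run
    pass
```

After the loop, the process has handled the signal exactly once.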


Concurrency and Parallelism

Parallelism is the quality of occurring at the same time: two events are parallel if they occur at the same time, and two tasks are parallel if they are performed at the same time. Concurrency is the illusion of parallelism: two tasks are concurrent if there is an illusion that they are being performed in parallel, whereas in reality only one of them may be in execution at any time. Concurrency is obtained by interleaving the operation of processes on the CPU.

OS VIEW OF PROCESSES
In the OS view, a process is an execution of a program. To realize this view, the OS creates processes, schedules them for use of the CPU, and terminates them. To perform scheduling, the OS must know which processes require the CPU at any moment, so it monitors all processes and knows what each process is doing at every moment: a process can be executing on the CPU, waiting for the CPU to be allocated to it, waiting for an I/O operation to complete, or waiting to be swapped into memory. The OS uses the notion of process state to keep track of what a process is doing at any moment.

OS VIEW OF PROCESSES
Process: a process comprises six components: (id, code, data, stack, resources, CPU state).
id is a unique identifier assigned to the process.
Code is the program code.
Data is the data and the files used during the program's execution.
Stack contains the parameters of called functions and procedures, and their return addresses.
Resources is the set of resources allocated by the OS.
CPU state comprises the contents of the PSW fields and the CPU registers.


Controlling Processes

A process uses the CPU when it is scheduled. Processes use system resources like memory and user-created resources like files. The OS has to maintain information about all these aspects of a process. The arrangements used to control a process are the process environment and the process control block (PCB).

OS VIEW OF PROCESS

[Figure: the process environment consists of the code, data and stack of the process together with memory info, resource info and file pointers; the process control block (PCB) holds the process id, process state, register values and PC value]

Process Environment
The process environment contains the address space of a process, i.e., its code, data, stack, etc. The OS creates the process environment by allocating memory to the process, loading the process code into the allocated memory, and setting up its data space. The OS also puts in information concerning access to the resources allocated to the process and its interaction with other processes and with the OS.

Components of Process Environment

Component | Description
Code and data | Code of the program, including its functions and procedures, and its data, including the stack.
Memory allocation information | Memory areas allocated to the process. This information is used to implement memory accesses made by the process.
Status of file processing activities | Pointers to files opened by the process, and current positions in the files.
Process interaction information | Interprocess messages, signal handlers, ids of parent and child processes.
Resource information | Information regarding resources allocated to the process.
Miscellaneous information | Miscellaneous information needed for the interaction of the process with the OS.

Process Control Block (PCB)


The process control block is a kernel data structure that contains information concerning the process id and CPU state. The kernel uses three fundamental functions to control processes:
1. Scheduling: select the process to be executed next on the CPU.
2. Dispatching: set up execution of the selected process on the CPU.
3. Context save: save information concerning a running process when its execution is suspended.

Fundamental functions to control processes


Example: an OS contains two processes P1 and P2, with P2 having higher priority than P1. Let P2 be blocked and P1 be running. The sequence of OS actions when an I/O completion event occurs is:
1. Context save: P1 is preempted, and a context save of P1 is performed.
2. Event processing: the I/O completion event is processed; P2 changes its state from blocked to ready.
3. Scheduling: P2 is selected, as it has the higher priority.
4. Dispatching: P2 is dispatched.

Fundamental functions to control processes


The scheduling function selects a process based on the scheduling policy in force. Dispatching involves setting up the environment of the selected process and loading information into the CPU so that the CPU begins (or resumes) executing instructions in the process code. The context save function saves information concerning the CPU state and the process environment so that execution of the process may be resumed at some time in the future. Context save is the converse of dispatching.
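These three functions can be sketched as a toy round-robin scheduler (a hypothetical illustration; a real kernel operates on PCBs and hardware state, not Python tuples):

```python
from collections import deque

def round_robin(bursts, time_slice):
    """bursts: list of (pid, cpu_time) pairs; returns the dispatch order."""
    ready = deque(bursts)                 # the ready queue
    trace = []
    while ready:
        pid, left = ready.popleft()       # scheduling: select the next process
        trace.append(pid)                 # dispatching: give it the CPU
        left -= min(time_slice, left)     # run for at most one time slice
        if left > 0:
            ready.append((pid, left))     # context save: back to the ready queue
    return trace
```

For example, round_robin([("P1", 15), ("P2", 30)], 10) dispatches P1, P2, P1, P2, P2.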

PROCESS STATE

Definition: the process state describes the nature of the current activity in a process.

State transition: a state transition of a process Pi is a change in its state. A state transition is caused by the occurrence of some event in the system.

[Figure: fundamental state transitions for a process: a new process enters the READY state; dispatching moves it to RUNNING; preemption moves it back to READY; a resource or I/O request moves it from RUNNING to BLOCKED; resource granted or wait completed moves it from BLOCKED to READY; completion moves it from RUNNING to TERMINATED]

Note that the process state is an abstract notion defined by an OS designer to simplify control of processes.

Process State

Four fundamental states are defined for a process: running, ready, blocked and terminated.
Running: a CPU is currently allocated to the process and the process is in execution.
Ready: the process is not running, but it could execute if a CPU were allocated to it.
Blocked: the process is waiting for a request to be satisfied or for an event to occur; such a process does not execute even if a CPU is available.
Terminated: the process has completed its execution.

Fundamental state transitions for a process

State transition | Cause of state transition
1. Ready → Running | The process is scheduled.
2. Running → Ready | The process is preempted.
3. Running → Blocked | The process makes a request that cannot be satisfied immediately.
4. Running → Terminated | The process completes its execution.
5. Blocked → Ready | The request made by the process is satisfied.
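The transition table above can be encoded as a small dictionary-driven state machine (an illustrative sketch, not an OS API; the event names are made up for the example):

```python
from enum import Enum

class State(Enum):
    READY = 1
    RUNNING = 2
    BLOCKED = 3
    TERMINATED = 4

# The five fundamental transitions: (current state, event) -> new state.
TRANSITIONS = {
    (State.READY,   "scheduled"):         State.RUNNING,
    (State.RUNNING, "preempted"):         State.READY,
    (State.RUNNING, "blocking request"):  State.BLOCKED,
    (State.RUNNING, "completed"):         State.TERMINATED,
    (State.BLOCKED, "request satisfied"): State.READY,
}

def transition(state, event):
    # Any pair not in the table (e.g. blocked -> running) is illegal
    # and raises KeyError.
    return TRANSITIONS[(state, event)]
```

Note that a blocked process cannot move directly to running; it must first become ready and then be scheduled.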


Five major causes for blocking a process:
1. The process requests an I/O operation.
2. The process requests memory or some other resource.
3. The process wishes to wait for a specified interval of time.
4. The process waits for a message from another process.
5. The process wishes to wait for some action by another process.

Five major causes for process termination:
1. Self-termination: the program being executed either completes its task or realizes that it cannot execute meaningfully.
2. Termination by a parent process.
3. Exceeding resource utilization.
4. Abnormal conditions during execution, e.g., a memory protection violation.
5. Incorrect interaction with other processes.

An Example

Consider a time-sharing system that uses a time slice of 10 ms. It runs two programs P1 and P2. P1 has a CPU burst of 15 ms followed by an I/O burst of 100 ms, while P2 has a CPU burst of 30 ms followed by an I/O burst of 60 ms. The kernel creates processes p1 and p2 for programs P1 and P2. The state transition table of the application is:

Time | Event | Remarks | p1 | p2
0 | | p1 is scheduled | Running | Ready
10 ms | p1 is preempted | p2 is scheduled | Ready | Running
20 ms | p2 is preempted | p1 is scheduled | Running | Ready
25 ms | p1 invokes I/O | p2 is scheduled | Blocked | Running
35 ms | p2 is preempted | | Blocked | Ready
40 ms | | p2 is scheduled | Blocked | Running
45 ms | p2 invokes I/O | p2 is blocked | Blocked | Blocked
105 ms | I/O completion interrupt for p2 | p2 is scheduled | Blocked | Running


PROCESS CONTROL BLOCK

The process control block is a data structure that contains all the information about a process that is used in controlling its execution.

[Figure: fields of the PCB: process id, priority and process state (process scheduling information); PSW and CPU registers (where execution of the process was last suspended); event information (the event for which a blocked or waiting process is waiting); memory allocation; signal information; and a PCB pointer, a pointer to another PCB used to maintain scheduling lists]

Process Control Block


The PCB holds process information like the priority, id, state, PC and register values, and also information concerning the resources allocated to the process. The information can be classified as follows:
1. Process scheduling information: three fields containing the process id, priority and state.
2. Register values: the PSW and the CPU registers.
3. Event information: the event for which the process is waiting.
4. Memory and resource information: useful for deallocating memory and resources when the process terminates.
5. PCB pointer: useful for maintaining scheduling lists (PCBs are kept in linked lists).


CPU switch from process to process


Events & Event Handling Functions

Events pertaining to a process (process state transitions are caused by the occurrence of the following events in the system):
1. Resource request
2. Resource release
3. I/O request
4. I/O termination
5. Timer interrupt
6. Process creation
7. Process termination
8. Message send
9. Message receive
10. Signal send
11. Signal receive
12. A program interrupt

Event handling functions:
1. Block a process.
2. Unblock a process.
3. Initiate an I/O operation.
4. Handle I/O completion and put the waiting process in a scheduling list.
5. Handle the timer interrupt indicating the end of the allocated time slice.
6. Create a process.
7. Terminate a process.
8. Deliver a message.
9. Accept a message.

Event Control Block (ECB)

When an event occurs, the kernel must find the process that is affected by it. For example, when an I/O completion interrupt occurs, the kernel must identify the process awaiting its completion. It could achieve this by searching the event information field of the PCBs of all processes, but this search is expensive, so the OS uses various schemes to speed it up; one of them is the event control block.

Event Control Block (ECB)

An ECB contains three fields: an event description, the id of the process awaiting the event, and an ECB pointer. The event description field describes an event; the process id field contains the id of the process awaiting that event. When a process Pi gets blocked awaiting the occurrence of an event ei, the kernel forms an ECB and puts the relevant information concerning ei and Pi into it. A separate ECB list is maintained for each class of events.

The actions of the kernel when a process Pi requests an I/O operation on some device d, and when the I/O operation completes, are as follows:
1. The kernel creates an ECB, say ECBj, and initializes it as follows: (a) event description = "end of I/O on device d"; (b) process awaiting the event = Pi.
2. The newly created ECBj is added to the list of ECBs.
3. The state of Pi is changed to blocked, and the address of ECBj is put into the event information field of Pi's PCB.
4. When an "end of I/O on device d" interrupt occurs, ECBj is located by searching for an ECB with a matching event description field.
5. The id of the affected process, i.e. Pi, is extracted from ECBj, and Pi's state is changed to ready in its PCB.
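The five steps above can be sketched as follows (a hypothetical illustration; the dictionary keys mirror the field names in the text, and a single ECB list stands in for the per-event-class lists):

```python
ecb_list = []   # a kernel may keep a separate list per class of events

def block_for_event(pid, event, pcbs):
    # Steps 1-3: create an ECB, enqueue it, and block the process.
    ecb = {"event description": event, "awaiting process": pid}
    ecb_list.append(ecb)
    pcbs[pid]["state"] = "blocked"
    pcbs[pid]["event information"] = ecb

def handle_event(event, pcbs):
    # Steps 4-5: locate the matching ECB and make the process ready.
    for i, ecb in enumerate(ecb_list):
        if ecb["event description"] == event:
            del ecb_list[i]
            pid = ecb["awaiting process"]
            pcbs[pid]["state"] = "ready"
            return pid
    return None   # no process was awaiting this event
```

Searching only the relevant ECB list is what makes this cheaper than scanning every PCB.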

PCB-ECB interrelationship

[Figure: the PCB of Pi, whose state is blocked, has its event information field pointing to an ECB whose event description is "end of I/O on d" and whose awaiting process is Pi]

EVENT HANDLING ACTIONS OF THE KERNEL

[Figure: events such as a resource request, an I/O request or process termination lead to a block action on the requesting process; events such as a timer interrupt, I/O termination, message send or resource release lead to an unblock action on a waiting process; preemption, scheduling and dispatching then determine which process runs next]

EVENT HANDLING ACTIONS OF THE KERNEL

The block action changes the state of the process that made the system call from running to blocked. The unblock action finds a process whose request can now be fulfilled and changes its state from blocked to ready. A system call requesting a resource leads to a block action if the resource cannot be allocated to the requesting process; this action is followed by the scheduling and dispatching of another process.

EVENT HANDLING ACTIONS OF THE KERNEL

The block action is not performed if the resource can be allocated straightaway. In this case, the interrupted process is simply dispatched again. When a process releases a resource, an unblock action is performed if some other process is waiting for the released resource.


INTERACTING PROCESSES

An application program creates many processes to realize the following advantages:
1. Computation speed-up by utilizing multiple CPUs.
2. Improved response times or elapsed times of the application.
3. Reflecting real-world requirements, e.g., an airline reservation system.

[Figure: many agent terminals of an airline reservation system concurrently accessing shared reservations data]

INTERACTING PROCESSES

Processes interact in two ways: data sharing and message passing. Processes interacting through data sharing have to be handled carefully, because there must be good coordination between processes when they share data. The following notation is used to characterize interacting processes with data sharing:
read_set_i: the set of data items read by process Pi.
write_set_i: the set of data items modified by process Pi.

INTERACTING PROCESSES

Processes Pi and Pj are interacting processes if and only if (read_set_i ∩ write_set_j) ≠ ∅ or (read_set_j ∩ write_set_i) ≠ ∅. Processes that do not interact are said to be independent processes.
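This condition can be checked directly with set intersections (an illustrative sketch; the function name is made up):

```python
def are_interacting(read_i, write_i, read_j, write_j):
    # Pi and Pj interact iff one process reads data the other modifies.
    return bool(read_i & write_j) or bool(read_j & write_i)
```

For example, two reservation processes that each read and update nextseatno are interacting, while processes touching disjoint data are independent.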

Race conditions and Data Access synchronization

An application may consist of a set of processes sharing some data ds. Data access synchronization involves blocking and activating these processes such that they share ds correctly. The need for data access synchronization arises because accessing shared data in an arbitrary manner may lead to wrong results.

Race conditions and Data Access synchronization


Let processes Pi and Pj perform operations ai and aj on ds. Let ai be an update operation that increments the value of ds by 10:
ai : ds := ds + 10;
It is implemented as a load-add-store sequence of instructions. Let operation aj be a simple copy operation that copies the value of ds into another location. If aj is executed before the first instruction of ai, it obtains the old value of ds.


Race conditions and Data Access synchronization


If aj is executed after the last instruction of ai, it obtains the new value of ds. If it is executed between the instructions of ai, process Pj may start to take some action because ds has a certain value, yet the value of ds is different by the time Pj completes the action. This situation may be considered inconsistent in certain applications. Such a harmful situation, called a race condition, may arise during the execution of concurrent processes.

Race conditions and Data Access synchronization


Let ai and aj be operations on shared data ds performed by two interacting processes Pi and Pj, and let fi(ds) and fj(ds) represent the value of ds after performing ai and aj, respectively. A race condition on a shared data item ds is a situation in which the value of ds resulting from the execution of the two operations ai and aj may be different from both fi(fj(ds)) and fj(fi(ds)).

Race conditions and Data Access synchronization

Let ai and aj be the update operations
ai : ds := ds + 10;
aj : ds := ds + 5;
If processes Pi and Pj perform operations ai and aj, respectively, one would expect 15 to be added to the value of ds. A race condition arises if this is not the case.

Race conditions and Data Access synchronization


The result of performing ai and aj is correct if one of them operates on the value resulting from the other operation, but wrong if both ai and aj operate on the old value of ds. The latter can happen if one process is engaged in performing its load-add-store sequence while the other process performs its load instruction before that sequence is completed.
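The lost update can be reproduced deterministically by writing out the two load-add-store sequences and interleaving them by hand (an illustrative sketch; reg_i and reg_j model CPU registers):

```python
def interleaved():
    # Both processes load ds before either stores: ai's update is lost.
    ds = 0
    reg_i = ds          # ai: load
    reg_j = ds          # aj: load, before ai has stored!
    reg_i += 10         # ai: add
    ds = reg_i          # ai: store
    reg_j += 5          # aj: add
    ds = reg_j          # aj: store, overwriting ai's result
    return ds           # 5 instead of the expected 15

def serialized():
    # Mutual exclusion: each sequence completes before the other starts.
    ds = 0
    reg_i = ds; reg_i += 10; ds = reg_i    # all of ai
    reg_j = ds; reg_j += 5;  ds = reg_j    # then all of aj
    return ds           # 15, as expected
```

The interleaved run adds only 5 to ds; the serialized run adds 15.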

Illustration of race conditions in an airline reservation application and its consequences

The processes of an airline reservation system execute identical code. The processes share the variables nextseatno and capacity. Each process examines the value of nextseatno and, if a seat is available, updates nextseatno by 1.


Data sharing by processes of a reservations system

PROGRAM
S1: if nextseatno <= capacity then
S2:     allotedno := nextseatno;
S3:     nextseatno := nextseatno + 1;
    else
S4:     display "Sorry, no seats available";
S5:

MACHINE INSTRUCTIONS
S1.1 Load nextseatno in reg_k
S1.2 If reg_k > capacity goto S4.1
S2.1 Move nextseatno to allotedno
S3.1 Load nextseatno in reg_j
S3.2 Add 1 to reg_j
S3.3 Store reg_j in nextseatno
S3.4 Go to S5.1
S4.1 Display "Sorry, no seats available"
S5.1

Data sharing by processes of a reservations system

Three cases of execution of processes Pi and Pj when nextseatno = 200 and capacity = 200:

Case 1: Process Pi executes the if statement, compares the value of nextseatno with capacity, and proceeds to execute statements S2 and S3, which allocate a seat to it and increment nextseatno. When process Pj executes the if statement, it finds that no seats are available, so it does not perform any seat allocation.

Case 2: Process Pi executes the if statement and finds that a seat can be allocated; however, Pi is preempted before it can perform the allocation. Process Pj now executes the if statement, finds that a seat is available, allocates a seat and exits; nextseatno is now 201. When process Pi is resumed, it proceeds to execute instruction S2.1, because a seat was available before it was preempted, and allocates seat number 201 even though only 200 seats exist.

Case 3: Process Pi is preempted after it loads 200 into reg_j. Both Pi and Pj then allocate a seat each; however, nextseatno is incremented by only 1.

Thus cases 2 and 3 involve race conditions.

Race conditions in an airline reservation system

Actions of processes Pi and Pj at successive time instants in the three cases:

Case 1: Pi executes S1.1, S1.2, S2.1, S3.1, S3.2, S3.3, S3.4; Pj then executes S1.1, S1.2, S4.1.
Case 2: Pi executes S1.1, S1.2 and is preempted; Pj executes S1.1, S1.2, S2.1, S3.1, S3.2, S3.3, S3.4; Pi then resumes and executes S2.1, S3.1, S3.2, S3.3, S3.4.
Case 3: Pi executes S1.1, S1.2, S2.1, S3.1 and is preempted; Pj executes S1.1, S1.2, S2.1, S3.1, S3.2, S3.3, S3.4; Pi then resumes and executes S3.2, S3.3, S3.4.

Race conditions

The existence of race conditions in a program leads to a practical difficulty: the program's behavior depends on the order in which the instructions of different processes are executed. This feature complicates the testing and debugging of programs containing concurrent processes. The best way to handle race conditions is to prevent them from arising.

Preventing race conditions

Race conditions would not arise if we ensured that operations ai and aj do not execute concurrently, i.e., aj is not in execution while ai is in execution, and vice versa. This requirement is called mutual exclusion: only one operation may access the shared data ds at any time.

Data Access synchronization


Data access synchronization is a technique used to implement mutual exclusion over shared data. It delays a process wishing to access ds if another process is accessing it. A set of processes is said to require data access synchronization if race conditions arise during their execution.
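In Python, data access synchronization over ds can be sketched with a lock (an illustrative sketch; the lock delays any thread wishing to access ds while another thread is accessing it):

```python
import threading

ds = 0
ds_lock = threading.Lock()

def update(amount, times):
    global ds
    for _ in range(times):
        with ds_lock:        # mutual exclusion over the shared data ds
            ds += amount     # the whole load-add-store runs indivisibly

ti = threading.Thread(target=update, args=(10, 1000))
tj = threading.Thread(target=update, args=(5, 1000))
ti.start(); tj.start()
ti.join(); tj.join()
```

With the lock, every increment of 10 and of 5 is applied in full, so ds ends at 15000; without it, interleaved load-add-store sequences could lose updates.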

Control synchronization
In control synchronization, interacting processes coordinate their execution with respect to one another. Control synchronization between a pair of processes Pi and Pj implies that the execution of some instruction (statement) Sj in process Pj, and of the instructions (statements) following it in the order of execution, is delayed until process Pi executes an instruction (statement) Si.

Control synchronization

[Figure: control synchronization between processes Pi and Pj: in part (a), statement Sj is the first statement of Pj and waits for Pi to execute statement Si; in part (b), statement Sj in the middle of Pj waits for Pi to execute statement Si]

Control synchronization

The figure shows the execution of processes Pi and Pj. The time axis extends downwards, so the execution of a statement shown at a higher level in a process occurs earlier than one shown at a lower level. In part (a), statement Sj is the first statement of process Pj; its execution cannot take place until process Pi executes statement Si, so synchronization occurs at the start of process Pj. Part (b) shows synchronization occurring in the middle of process Pj, because statement Sj of process Pj cannot be executed until process Pi executes statement Si.
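This delay can be sketched with an event object (an illustrative sketch; the event stands in for the OS facility that records whether Pi has performed Si):

```python
import threading

si_done = threading.Event()
trace = []

def Pi():
    trace.append("Si")     # Pi executes statement Si
    si_done.set()          # record that Si has been performed

def Pj():
    si_done.wait()         # Sj is delayed until Pi has executed Si
    trace.append("Sj")     # only now may Pj execute statement Sj

tj = threading.Thread(target=Pj); tj.start()
ti = threading.Thread(target=Pi); ti.start()
ti.join(); tj.join()
```

Even though Pj is started first, the wait guarantees that Si always precedes Sj in the trace.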

Illustrates the need for Control synchronization


A program is to be designed to reduce the elapsed time of a computation that consists of the following actions: compute Y = HCF(Amax, X), where array A contains n elements and Amax is the maximum value in A; insert Y in array A; and arrange A in ascending order. The problem can be split into the following steps:
1. Read the n elements of array A.
2. Find the maximum value Amax.
3. Read X.
4. Compute Y = HCF(Amax, X).
5. Include Y in array A and arrange the elements of A in ascending order.


Illustrates the need for Control synchronization


We now decide which of these steps can be performed concurrently. The processes for steps 1 and 3 can execute concurrently. The processes for steps 2, 4 and 5 are interacting processes; they cannot execute concurrently because they share array A. However, concurrency can be achieved by splitting steps 2 and 5 into two parts each:
2(a) Copy array A into array B. 2(b) Find Amax.
5(a) Arrange array B in ascending order. 5(b) Include Y in array B at the appropriate place.
Now the processes executing steps 2(b) and 5(a) are independent, so these steps can execute concurrently. Once 2(b) has been performed, step 4 can also proceed concurrently with 5(a).

Illustrates the need for Control synchronization

Concurrent processes:
Process P1: read the n elements of A; copy A into array B.
Process P2: find Amax.
Process P3: read X.
Process P4: compute Y = HCF(Amax, X).
Process P5: arrange B in ascending order.
Process P6: include Y in array B.

Illustrates the need for Control synchronization


Processes P1 and P3 can execute concurrently. Processes P1 and P2 cannot be initiated concurrently because they share array A; process P2 must be initiated after process P1. P4 can be initiated when both P2 and P3 have terminated. P5 can be initiated as soon as process P1 terminates. P6 can start only after P4 and P5 have terminated. The motivation for structuring the application as a set of concurrent processes is to obtain the benefits of multiprogramming and multiprocessing within a program; this is an advantage if the computer system has multiple CPUs.


Message Passing

Processes use message passing to exchange information; such messages are called interprocess messages. Message passing is implemented through system calls, which are issued through the library functions send and receive respectively.


Message Passing
Process Pi: ... send (Pj, <message>); ...
Process Pj: ... receive (Pi, <area address>); ...

The figure shows message passing between processes Pi and Pj. Process Pi sends a message msgk to process Pj by executing the function call send (Pj, msgk), which leads to the system call send. The kernel has to ensure that the message msgk reaches process Pj when it wishes to receive a message, i.e., when it executes the system call receive.

Message Passing
The kernel first copies the message into a buffer area and awaits a receive call from process Pj. This call occurs when Pj executes the function call receive (Pi, alpha). The kernel now copies msgk out of its buffer and into the data area allocated to alpha. The send and receive calls are executed by different processes, so one cannot assume that a send call always precedes the matching receive call. If many messages have been sent to process Pj, the kernel queues them and delivers them in FIFO order when Pj executes receive calls.

Advantages of Message Passing


Processes do not need to use shared data for exchanging information. Message passing is tamper-proof, as messages reside in system buffers until delivery. The kernel can give a warning if the data area mentioned in a receive call is smaller than the message to be received. The kernel takes the responsibility of blocking a process executing a receive when no message exists for it. The communicating processes may belong to different applications and may even exist in different computer systems.

Signals
The signals mechanism is implemented along the same lines as interrupts. A process Pi wishing to send a signal (indicating an exceptional situation) to another process Pj invokes the library function signal with two parameters: the id of the destination process, i.e., Pj, and a signal number that indicates the kind of signal to be passed. This function uses the software interrupt instruction <SI_instrn> <interrupt_code> to make a system call named signal.

Signals
Two interesting issues arise in the implementation of signals. First, the process sending a signal should know the id of the destination process; this requirement restricts the scope of signals to processes within a process tree. The second issue concerns the kernel's action if the process to which a signal is being sent is in the blocked state. The kernel would have to change the state of the process temporarily to ready so that it can execute its signal handling code; after the signal handling code has executed, the kernel would have to change the process state back to blocked.

Threads
Use of processes to provide concurrency within an application incurs high process switching overhead. Threads provide a low-cost method of implementing concurrency that is suitable for certain kinds of applications. Process switching overhead has two components:
Execution-related overhead: a process is an execution of a program, so the context save and dispatching functions performed while switching between processes are pure overhead.
Resource-use-related overhead: the process environment contains information concerning the resources allocated to a process and its interactions with other processes. This leads to a large amount of process state information, which adds to the process switching overhead.

Threads

If processes Pi and Pj belong to the same application, they share its code, data and resources; their state information differs only in the values contained in CPU registers and in their stacks. Much of the saving and loading of process state information while switching from Pi to Pj is thus redundant. This feature is exploited to achieve a reduction in switching overhead; the notion of a thread is used for this purpose.

Threads

A thread is a program execution that uses the resources of a process. Since a thread is a program execution, it has its own stack and CPU state. Threads of the same process share code, data and resources with one another. The process abstraction can be used as before, except that distinct processes typically have distinct code and data parts.

Threads in a process Pi: (a) concept, (b) implementation

(Figure: part (a) depicts threads executing within process Pi; part (b) shows Pi's PCB with memory info, resource info and file pointers, the code, data and stack of Pi, and a separate stack and thread control block (TCB) for each thread.)



Threads

Process Pi has three threads, represented by wavy lines in the figure. The kernel allocates a stack and a thread control block (TCB) to each thread. The threads execute within the environment of Pi. The OS is aware of this fact, so it saves only the CPU state and the stack pointer while switching between threads of the same process.

Thread states and state transitions

Thread states and thread state transitions are analogous to process states and process state transitions. When a thread is created, it is put in the ready state because its parent process already has the necessary resources allocated to it. A thread enters the running state when it is scheduled.


Advantages of threads

An application process can create many threads to execute its code. Creating multiple threads has advantages over creating many processes: the overhead of switching the CPU from one thread to another thread of the same process is low, and the resource state needs to be switched only when switching between threads of different processes.

Advantages of threads

Use of threads provides concurrency, which can provide computation speed-up: if one thread of a process blocks on an I/O operation, the CPU can be switched to another thread of the same process. In an airline reservation system or a banking system, a new thread can be created to handle each new request; the OS would schedule these threads to provide concurrency.

Advantages of threads
1. Low overhead: thread state consists only of the state of the computation (execution state); the resource allocation state and communication state are not part of a thread's state. This leads to low switching overhead.
2. Speed-up: concurrency within a process can be realized by creating many threads. This can speed up the execution of an application on uniprocessors as well as multiprocessors. Since only the thread state needs to be saved, switching is faster and cheaper.
3. Efficient communication: threads of a process can communicate with one another through shared data, which avoids the use of system calls for communication and thereby avoids the kernel overhead.

Implementation of threads
Threads are implemented in different ways; the main difference lies in how much the kernel and the application program know about the threads. There are three methods of implementing threads: kernel-level threads, user-level threads, and hybrid threads. Switching between kernel-level threads of a process is over 10 times faster than switching between processes, and switching between user-level threads of a process is over 100 times faster than switching between processes.

Kernel-level threads

A kernel-level thread is implemented by the kernel; hence creation and termination of kernel-level threads, and checking of their status, are performed through system calls. When a process makes a create_thread system call, the kernel allocates an id and a TCB to the new thread. The TCB contains a pointer to the PCB of the process.

Flow chart for scheduling of kernel-level threads

(Figure: on an event, the kernel saves the thread state and performs event processing and scheduling. If the selected thread belongs to the same process, it is dispatched directly; otherwise the process context is saved and the new process context is loaded before the thread is dispatched.)

Scheduling of kernel-level threads

(Figure: TCBs are attached to the PCBs of their processes; the kernel selects a TCB for dispatching.)

User-level threads
User-level threads are implemented by a thread library, which is linked to the code of a process. The kernel is not aware of the presence of user-level threads in a process; it sees only the process. The thread library code is part of each process; it performs scheduling to select a thread and organizes its execution. The thread library uses information in the TCBs to decide which thread should operate at any time.

Scheduling of user-level threads

(Figure: the thread library maps threads onto TCBs within a process; the kernel sees only PCBs and selects a PCB for dispatching.)

Actions of the thread library (N, R, B indicate running, ready and blocked)

(Figure: parts (a) and (b) show threads h1, h2, h3 of process Pi and their TCBs together with the PCB of Pi. The thread library records a thread's state, e.g. blocked, in its TCB, while the PCB of Pi itself remains in the running state.)

Hybrid thread models

(Figure: parts (a), (b) and (c) show different associations between user-level threads (TCBs), kernel-level threads (KTCBs) and processes (PCBs).)

Hybrid thread models

The hybrid model has both user-level threads and kernel-level threads. The thread library creates user-level threads in a process and associates a TCB with each user-level thread, while the kernel creates kernel-level threads in the process and associates a KTCB with each kernel-level thread.

PROCESSES IN UNIX
There are two data structures in UNIX to hold control data about processes:
1. u area (user area)
2. proc structure

The u area data structure includes:
PCB (holds the CPU status of a blocked process)
Pointer to the proc structure
User and group ids
Information concerning signal handlers for the process
Information concerning all open files and the current directory
Terminals attached to the process, if any
CPU usage information


Significant fields of the proc structure

Process id
Process state
Priority
Pointers to proc structures
Signal handling mask
Memory management information


TYPES OF PROCESSES

User processes: execute user computations; are provided with terminals; create processes, forming a tree of processes; coordinate process activities.
Daemon processes: perform functions on a system-wide basis, controlling the computational environment of the system; examples are print spooling and network management. Once created, daemon processes exist throughout the lifetime of the OS.
Kernel processes: execute code of the kernel; they are concerned with allocation and utilization of system resources and have access to the data structures of the OS.


PROCESS CREATION AND TERMINATION IN UNIX

Process creation is performed by the fork system call. Process termination is by the exit system call: exit (status_code); where status_code is a code indicating the termination status of the process. A process can wait for the termination of a child process through the system call wait (addr (xyz)); where xyz is a variable within the address space of the waiting process Pi. The wait call stores the termination status of a terminated child process in xyz.


SIGNALS IN UNIX

SIGCHLD: child process terminated or suspended
SIGFPE: arithmetic fault
SIGILL: illegal instruction
SIGINT: interrupt from keyboard (Control-C)
SIGKILL: kill process
SIGSEGV: segmentation fault
SIGSYS: invalid system call
SIGXCPU: CPU time limit exceeded
SIGXFSZ: file size limit exceeded

PROCESS STATE TRANSITIONS IN UNIX

(Figure: in the user running state a process executes user code; an interrupt or system call takes it to the kernel running state, where it executes kernel code, and a return takes it back to user running. From kernel running, exit leads to the zombie state; scheduling moves a ready process to kernel running and preemption moves it back to ready; a resource or I/O request moves a process from kernel running to blocked, and a resource grant or I/O termination moves it from blocked to ready.)

PROCESS STATE TRANSITIONS IN UNIX

There are two distinct running states, called user running and kernel running. A process executes user code while in the user running state and kernel code while in the kernel running state. A transition from user running to kernel running occurs when there is an interrupt or a system call. A process does not get blocked or preempted while in the user running state.

Threads in SOLARIS

SOLARIS is a UNIX 5.4 based operating system. It has three kinds of threads:
User threads: created and managed by the thread library.
Lightweight processes (LWPs): intermediaries between user threads and kernel threads.
Kernel threads: threads created by the kernel for concurrency control.

Threads in SOLARIS

(Figure: user threads of processes Pi and Pj are mapped by the thread library onto LWP control blocks; the LWPs are backed by kernel thread control blocks (KTCBs) in the scheduler data structure, from which the scheduler selects a KTCB.)

Chapter 4

Message Passing



Issues of message passing

Issue: naming of processes. Aspect: whether the processes participating in a message transfer are explicitly indicated or deduced by the kernel.
Issue: method for transferring messages. Aspect: whether a sender process is blocked until a message sent by it is delivered; the order in which messages are delivered; etc.
Issue: kernel responsibilities. Aspect: buffering of messages pending delivery to recipient processes; blocking and activation of processes.

Direct and Indirect naming


Send and receive statements shown below use direct naming: the sender and receiver processes mention each other's names, using the following syntax.
send (<destination_process>, <message>);
receive (<source_process>, <message_area>);
In indirect naming, processes do not mention each other's names in send and receive statements.

Blocking and Non-blocking sends


The send primitive has two variants. A blocking send blocks the sender process until the message being sent is delivered to the destination. A non-blocking send permits the sender to continue execution after executing a send, irrespective of whether the message is delivered immediately. The receive primitive is typically blocking: a process performing a receive must wait until a message can be delivered to it. Message passing using blocking and non-blocking sends is known as synchronous and asynchronous message passing, respectively.

Exceptional conditions
Several exceptional conditions can arise during message delivery:
The destination process mentioned in a send does not exist.
A send cannot be executed because the kernel has run out of buffer memory.
No message exists for a process when it executes a receive statement.

