Discussions on: the process concept, the programmer's view of processes, the OS view of processes, interacting processes, threads, processes in UNIX, and threads in Solaris (a UNIX SVR4-based OS).
Chapter 3 - Processes and Threads
October 14, 2011
Relationship between a program and its processes:
- One-to-one: a single execution of a sequential program.
- Many-to-one: many simultaneous executions of the same program, or execution of a concurrent program.
Child processes
The OS initiates the execution of a program by creating a process for it. This is called the main process of the execution. A main process may create other processes, which become its child processes, and a child process may in turn create other processes. All these processes form a tree with the main process as its root.
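On a POSIX system, this parent-child relationship can be sketched with the fork system call: the parent creates a child, and the child becomes a node in the parent's process tree. This is an illustrative sketch, not part of the original text; the status value 7 is arbitrary.

```python
import os

def run_child():
    """Create one child process and wait for it, as a main process would.

    Returns the exit status of the child as seen by the parent.
    """
    pid = os.fork()                      # clone the calling process
    if pid == 0:
        # Child process: from here on, this is a separate process
        # in the tree rooted at the main process.
        os._exit(7)                      # terminate the child with status 7
    # Parent process: wait for this particular child to terminate.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

A child could call `os.fork` again, extending the tree one level deeper.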
Housekeeping
Create_process is a library routine that takes three parameters: a process id, a procedure name, and an integer specifying the process's priority. The call leads to a system call that creates the new process. The priority value is entered in an OS data structure and used during scheduling.
Explanation
Creating multiple processes in an application provides multitasking. It enables the OS to interleave the execution of I/O-bound and CPU-bound processes in the application, providing computation speed-up.
Priority for critical functions: a child process created to perform a critical function in an application may be assigned a higher priority than other processes.
Protecting the parent process from errors: the OS cancels a child process if an error arises during its execution. This action does not affect the parent process.
Sharing, communication and synchronization between processes

There are four kinds of process interaction:
- Data sharing: shared data may become inconsistent if several processes update the data at the same time; hence processes must interact to decide when it is safe for a process to access shared data.
- Message passing: processes exchange information by sending messages to one another.
- Synchronization: to fulfill a common goal, processes must coordinate their activities and perform their actions in a desired order.
- Signals: a signal is used to convey the occurrence of an exceptional situation to a process.
Synchronization: if an action ai is to be performed only after an action aj, the process that wishes to perform ai is made to wait until some other process performs aj. An OS provides facilities to check whether another process has performed a specific action.
Concurrency and Parallelism

Parallelism is the quality of occurring at the same time: two events are parallel if they occur at the same time, and two tasks are parallel if they are performed at the same time. Concurrency is the illusion of parallelism: two tasks are concurrent if there is an illusion that they are being performed in parallel whereas, in reality, only one of them can be performed at any time. Concurrency is obtained by interleaving the operation of processes on the CPU.
OS VIEW OF PROCESSES
In the OS's view, a process is an execution of a program. To realize this view, the OS creates processes, schedules them for use of the CPU, and terminates them. To perform scheduling, the OS must know which processes require the CPU at any moment, so it monitors all processes and keeps track of what each process is doing at every moment. A process can be executing on the CPU, waiting for the CPU to be allocated to it, waiting for an I/O operation to complete, or waiting to be swapped into memory. The OS uses the notion of process state to keep track of what a process is doing at any moment.
Process: a process comprises six components: (id, code, data, stack, resources, CPU state). The id is a unique name/id assigned to the process. The code is the program code. The data is the data and files used in the program's execution. The resources component is the set of resources allocated by the OS. The stack contains the parameters of called functions and procedures, and their return addresses. The CPU state comprises the contents of the PSW fields and the CPU registers.
Controlling Processes

A process uses the CPU when it is scheduled. Processes use system resources, such as memory, and user-created resources, such as files. The OS has to maintain information about all these features of a process. The arrangement for controlling a process includes the process environment and the process control block (PCB).
[Figure: the process environment, comprising memory info, resource info, and the code, data, and stack of the process.]
Process Environment
The process environment contains the address space of a process, i.e., its code, data, stack, etc. The OS creates the process environment by allocating memory to the process, loading the process code into the allocated memory, and setting up its data space. The OS also puts in information concerning access to the resources allocated to the process and its interaction with other processes and with the OS.
The process environment contains:
- Code of the program, including its functions and procedures, and its data, including the stack.
- Memory areas allocated to the process; this information is used to implement memory accesses made by the process.
- Pointers to files opened by the process, and the current positions in those files.
- Interprocess messages, signal handlers, and the ids of parent and child processes.
PROCESS STATE
Definition: the process state describes the nature of the current activity in a process.
State transition: a state transition of a process Pi is a change in its state. A state transition is caused by the occurrence of some event in the system.

[Figure: fundamental state transitions for a process. A new process enters the READY state, is dispatched to RUNNING, may become BLOCKED or be preempted back to READY, and moves to TERMINATED on completion.]
Process State

There are four fundamental states defined for a process: running, ready, blocked (suspended), and terminated.
Running: a CPU is currently allocated to the process, and the process is executing.
Ready: the process is not running, but it could execute if a CPU were allocated to it.
Blocked: the process is waiting for a request to be satisfied or for an event to occur; such a process does not execute even if a CPU is available.
Terminated: the process has completed its execution.
State transitions and their causes:
- Ready → Running: the process is scheduled.
- Running → Ready: the process is preempted.
- Running → Blocked: the process makes a request that cannot be satisfied immediately.
- Running → Terminated: the process completes its execution.
- Blocked → Ready: the request made by the process is satisfied.
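The fundamental transitions above can be encoded as a small lookup structure. This is only an illustrative sketch; the state names used here are chosen for the example.

```python
# Allowed state transitions and their causes, following the transitions
# described in the text: (old state, new state) -> cause.
TRANSITIONS = {
    ("ready", "running"): "process is scheduled",
    ("running", "ready"): "process is preempted",
    ("running", "blocked"): "process makes a request that cannot be satisfied immediately",
    ("running", "terminated"): "process completes its execution",
    ("blocked", "ready"): "the request made by the process is satisfied",
}

def cause_of(old_state, new_state):
    """Return the cause of a fundamental state transition,
    or None if the transition is not a fundamental one."""
    return TRANSITIONS.get((old_state, new_state))
```

Note that, for example, a blocked process can never move directly to running: it must become ready and be scheduled first.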
Five major causes for blocking a process: the process requests an I/O operation; the process requests memory or some other resource; the process wishes to wait for a specified interval of time; the process waits for a message from another process; the process wishes to wait for some action by another process.
Five major causes for process termination: self-termination (the program being executed either completes its task or realizes that it cannot execute meaningfully); termination by a parent process; exceeding resource utilization; abnormal conditions during execution, e.g., a memory protection violation; incorrect interaction with other processes.
An Example
Consider a time-sharing system that uses a time slice of 10 ms and runs two programs P1 and P2. P1 has a CPU burst of 15 ms followed by an I/O burst of 100 ms, while P2 has a CPU burst of 30 ms followed by an I/O burst of 60 ms. The kernel creates two processes p1 and p2 for programs P1 and P2. Illustrate the state transition table of the application.

Events, in order: p1 is preempted; p2 is preempted; p1 invokes I/O; p2 is preempted; p2 invokes I/O; I/O completion interrupt for p2.
PROCESS CONTROL BLOCK

The process control block (PCB) is a data structure that contains all the information about a process that is used in controlling its execution. Its fields include:
- process id
- priority
- process state
- PSW
- CPU registers
- event information
- memory allocation
- signal information
- PCB pointer
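As a sketch, the PCB fields listed above can be grouped into a record type. The field names and types here are illustrative, not an actual kernel layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block holding the fields named above."""
    process_id: int
    priority: int
    state: str                  # running / ready / blocked / terminated
    psw: int = 0                # program status word
    cpu_registers: dict = field(default_factory=dict)
    event_info: object = None   # e.g. address of an ECB while blocked
    memory_info: dict = field(default_factory=dict)
    signal_info: dict = field(default_factory=dict)
    pcb_pointer: object = None  # link to another PCB, e.g. in a list
```

A kernel would keep one such record per process and update `state`, `psw`, and `cpu_registers` at every process switch.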
Event Control Block (ECB)

When an event occurs, the kernel must find the process affected by it. For example, when an I/O completion interrupt occurs, the kernel must identify the process awaiting that completion. It could achieve this by searching the event information field of the PCBs of all processes, but this search is expensive, so the OS uses various schemes to speed it up; one of them is the event control block (ECB).
The process id field of an ECB contains the id of the process awaiting the event. When a process Pi gets blocked awaiting the occurrence of an event ei, the kernel forms an ECB and puts relevant information concerning ei and Pi into it. The kernel maintains a separate ECB list for each class of events.
The actions of the kernel when process Pi requests an I/O operation on some device d, and when the I/O operation completes, are as follows:
1. The kernel creates an ECB, say ECBj, and initializes it as follows:
   (a) event description = "end of I/O on device d"
   (b) process awaiting the event = Pi
2. The newly created ECBj is added to the list of ECBs.
3. The state of Pi is changed to blocked, and the address of ECBj is put into the "event information" field of Pi's PCB.
4. When an "end of I/O on device d" interrupt occurs, ECBj is located by searching for an ECB with a matching event description field.
5. The id of the affected process, i.e., Pi, is extracted from ECBj, and Pi's state is changed to ready in its PCB.
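The steps above can be sketched in code. The ECB representation and the function names here are invented for illustration; a real kernel would keep one ECB list per event class.

```python
# Sketch of the kernel actions: blocking a process on an I/O event
# and unblocking it when the matching interrupt occurs.

ecb_list = []   # one shared list here; a kernel may keep one per event class

def block_for_io(pcb, device):
    """Steps 1-3: create an ECB, add it to the list, block the process."""
    ecb = {"event": f"end of I/O on device {device}",
           "awaiting_process": pcb}
    ecb_list.append(ecb)
    pcb["state"] = "blocked"
    pcb["event_info"] = ecb       # PCB's event information field
    return ecb

def io_interrupt(device):
    """Steps 4-5: locate the matching ECB and make its process ready."""
    for ecb in ecb_list:
        if ecb["event"] == f"end of I/O on device {device}":
            ecb_list.remove(ecb)
            ecb["awaiting_process"]["state"] = "ready"
            return ecb["awaiting_process"]
    return None                   # no process was awaiting this event
```

The search in `io_interrupt` is exactly the matching step the ECB scheme speeds up relative to scanning every PCB.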
PCB-ECB interrelationship
[Figure: the event information field of Pi's PCB (state: blocked) points to an ECB whose event description is "end of I/O on d" and whose awaiting process is Pi.]
[Figure: events and kernel actions. I/O request and terminate-process events lead to block, unblock, preempt, schedule, and dispatch actions.]
EVENT HANDLING ACTIONS OF THE KERNEL

The block action changes the state of the process that made the system call from running to blocked. The unblock action finds a process whose request can now be fulfilled and changes its state from blocked to ready. A system call requesting a resource leads to a block action if the resource cannot be allocated to the requesting process; this action is followed by the scheduling and dispatching of another process.
The block action is not performed if the resource can be allocated straightaway; in this case, the interrupted process is simply dispatched again. When a process releases a resource, an unblock action is performed if some other process is waiting for the released resource.
INTERACTING PROCESSES

An application program creates many processes to realize the following advantages:
- computation speed-up by utilizing multiple CPUs
- improved response times or elapsed times of the application
- reflecting real-world requirements, e.g., an airline reservation system
[Figure: an airline reservation system in which processes at many agent terminals share the reservations data.]
Processes interact in two ways: data sharing and message passing. Processes interacting through data sharing must be handled carefully, because good coordination is needed between processes when they share data. The following notation is used to define interacting processes with data sharing:
read_set_i: the set of data items read by process Pi.
write_set_i: the set of data items modified by process Pi.
Processes Pi and Pj are interacting processes if and only if (read_set_i ∩ write_set_j) ≠ ∅ or (read_set_j ∩ write_set_i) ≠ ∅. Processes that do not interact are said to be independent processes.
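This condition can be checked mechanically with set operations. A small sketch, assuming the read and write sets are given as Python sets:

```python
def interact(read_set_i, write_set_i, read_set_j, write_set_j):
    """Pi and Pj interact iff one of them reads a data item
    that the other one modifies."""
    return bool(read_set_i & write_set_j) or bool(read_set_j & write_set_i)
```

Two processes that only read the same data, or that touch disjoint data, come out as independent.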
Race conditions and Data Access synchronization

An application may consist of a set of processes sharing some data ds. Data access synchronization involves blocking and activating these processes so that they share ds correctly. The need for data access synchronization arises because accessing shared data in an arbitrary manner may lead to wrong results.
Let ai and aj be update operations:
ai: ds := ds + 10;
aj: ds := ds + 5;
If processes pi and pj perform operations ai and aj, respectively, one would expect 15 to be added to the value of ds. A race condition arises if this is not the case.
Processes of an airline reservation system execute identical code. The processes share the variables nextseatno and capacity. Each process examines the value of nextseatno and updates it by 1 if a seat is available.
S3 nextseatno := nextseatno + 1;
The possible execution flows fall into three cases.
Case 1: Process pi executes the if statement, compares the value of nextseatno with capacity, and proceeds to execute statements S2 and S3, which allocate a seat to it and increment nextseatno. When process pj executes the if statement, it finds that no seats are available, so it does not perform any seat allocation.
Case 2: Process pi executes the if statement and finds that a seat can be allocated; however, pi is preempted before it can perform the allocation. Process pj now executes the if statement, finds that a seat is available, allocates a seat, and exits; nextseatno is now 201. When process pi is resumed, it proceeds to execute statement S2.1 (because a seat was available before it was preempted) and allocates seat number 201, even though only 200 seats exist.
Case 3: Process pi is preempted after it loads 200 into regj. Now both pi and pj allocate a seat each, but nextseatno is incremented by only 1.
Thus cases 2 and 3 involve race conditions.
[Figure: interleaved actions of processes pi and pj in cases 1, 2, and 3, showing which of the instructions S1.1 through S3.4 each process executes and where preemption occurs.]
Race conditions
The existence of race conditions in a program leads to a practical difficulty: the program's behavior depends on the order in which instructions of different processes are executed. This complicates the testing and debugging of programs containing concurrent processes. The best way to handle race conditions is to prevent them from arising.
Preventing race conditions

Race conditions would not arise if we ensured that operations ai and aj do not execute concurrently, i.e., aj is not in execution while ai is in execution, and vice versa. This requirement is called mutual exclusion: only one operation may access the shared data ds at any time.
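Mutual exclusion over ds can be illustrated with a lock, using threads to stand in for processes pi and pj. This is a sketch of the idea, not the book's specific mechanism.

```python
import threading

ds = 0
ds_lock = threading.Lock()

def update(amount):
    """Perform ds := ds + amount under mutual exclusion."""
    global ds
    with ds_lock:          # at most one update operation executes at a time
        ds = ds + amount

def run_ai_aj():
    """Run ai (add 10) and aj (add 5) concurrently; the result must be +15."""
    ti = threading.Thread(target=update, args=(10,))
    tj = threading.Thread(target=update, args=(5,))
    ti.start(); tj.start()
    ti.join(); tj.join()
    return ds
```

Without the lock, the two read-modify-write sequences could interleave and lose one of the updates, which is exactly the race condition described above.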
Control synchronization
In control synchronization, interacting processes coordinate their execution with respect to one another. Control synchronization between a pair of processes Pi and Pj implies that the execution of some statement Sj in process Pj, and of the statements following it in the order of execution, is delayed until process Pi executes a statement Si.
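Control synchronization can be sketched with an event flag: Pj is delayed at Sj until Pi performs Si. Threads stand in for the processes here, and the names are illustrative.

```python
import threading

si_done = threading.Event()   # set once Pi has executed Si
order = []                    # records the order in which Si and Sj occur

def process_pi():
    order.append("Si")        # this is Si
    si_done.set()             # announce that Si has been performed

def process_pj():
    si_done.wait()            # Sj is delayed until Pi executes Si
    order.append("Sj")        # this is Sj

def run():
    pj = threading.Thread(target=process_pj)
    pi = threading.Thread(target=process_pi)
    pj.start(); pi.start()    # start Pj first: it must still wait for Si
    pj.join(); pi.join()
    return order
```

Even though Pj starts first, Si is always recorded before Sj, which is the required ordering.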
Control synchronization between processes Pi and Pj:

[Figure: parts (a) and (b). In (a), Sj is the first statement of Pj; in (b), Sj occurs in the middle of Pj. In both, Pj cannot execute Sj until Pi executes Si.]
The figure shows the execution of processes Pi and Pj. The time axis extends downward, so the execution of a statement shown higher in a process occurs earlier than one shown lower. In part (a), statement Sj is the first statement of process Pj; its execution cannot take place until process Pi executes statement Si, so synchronization occurs at the start of process Pj. Part (b) shows synchronization occurring in the middle of process Pj, because statement Sj of process Pj cannot be executed until process Pi executes statement Si.
Concurrent processes
- Process P1: read n elements of array A; copy A into array B.
- Process P2: find Amax.
- Process P3: read X.
- Process P4: compute Y = HCF(Amax, X).
- Process P5: arrange B in ascending order.
- Process P6: include Y in array B.
Message Passing
Processes use message passing to exchange information; such messages are called interprocess messages. Message passing is implemented through system calls, issued via the library functions send and receive, respectively.
[Figure: message passing between processes Pi and Pj. Pi executes send (Pj, msgk); Pj executes receive (Pi, <area address>).]
The figure shows message passing between processes Pi and Pj. Process Pi sends a message msgk to process Pj by executing the function call send (Pj, msgk), which leads to the system call send. The kernel has to ensure that the message msgk reaches process Pj when Pj wishes to receive a message, i.e., when it executes the system call receive.
The kernel first copies the message into a buffer area and awaits a receive call from process Pj. This call occurs when Pj executes the function call receive (Pi, alpha). The kernel then copies msgk out of its buffer into the data area allocated to alpha. The send and receive calls are executed by different processes, so one cannot assume that the send call always precedes the receive call. If many messages have been sent to process Pj, the kernel queues them and delivers them in FIFO order as Pj executes receive calls.
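The kernel's buffering and FIFO delivery can be sketched with a message queue per destination process. The send and receive names mirror the calls in the text, but the implementation below is an illustrative assumption.

```python
from collections import deque

mailboxes = {}   # kernel buffers: destination process id -> queued messages

def send(dest, msg):
    """Copy msg into the kernel's buffer for dest (sketch of the send call)."""
    mailboxes.setdefault(dest, deque()).append(msg)

def receive(dest):
    """Deliver the oldest buffered message for dest, in FIFO order,
    or None if no message is pending (sketch of the receive call)."""
    queue = mailboxes.get(dest)
    return queue.popleft() if queue else None
```

Because the buffer decouples the two calls, send may run before or after the matching receive; the queue preserves the FIFO delivery order either way.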
Signals
The signals mechanism is implemented along the same lines as interrupts. A process Pi wishing to send a signal (reporting an exceptional situation) to another process Pj invokes the library function signal with two parameters: the id of the destination process, i.e., Pj, and a signal number that indicates the kind of signal to be passed. This function uses the software interrupt instruction <SI_instrn> <interrupt_code> to make a system call named signal.
Two interesting issues arise in the implementation of signals. First, the process sending a signal should know the id of the destination process; this requirement restricts the scope of signals to processes within a process tree. The second issue concerns the kernel's action if the process to which a signal is sent is in the blocked state: the kernel has to change the state of the process temporarily to ready so that it can execute its signal-handling code, and, after the signal-handling code is executed, change the process's state back to blocked.
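On a POSIX system, the basic flow can be sketched with a signal handler and the kill call. For simplicity the process signals itself here; SIGUSR1 is an arbitrary choice, and this is an illustration rather than the book's exact mechanism.

```python
import os
import signal

events = []

def handler(signum, frame):
    # Signal-handling code: record that the exceptional situation arrived.
    events.append(signum)

def demo():
    """Install a handler for SIGUSR1, then send that signal to ourselves."""
    signal.signal(signal.SIGUSR1, handler)
    os.kill(os.getpid(), signal.SIGUSR1)   # deliver the signal
    while not events:                      # handler runs at the next opportunity
        pass
    return events
```

In the two-process case described in the text, os.kill would be given the id of the destination process instead of os.getpid().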
Threads
The use of processes to provide concurrency within an application incurs high process-switching overhead. Threads provide a low-cost method of implementing concurrency that is suitable for certain kinds of applications. Process-switching overhead has two components:
Execution-related overhead: a process is defined as an execution of a program, so while switching between processes, the context-save and dispatching functions are overhead.
Resource-use-related overhead: the process environment contains information concerning the resources allocated to a process and its interaction with other processes. This leads to a large amount of process state information, which adds to the process-switching overhead.
If processes Pi and Pj belong to the same application, they share code, data, and resources; their state information differs only in the values contained in the CPU registers and their stacks. Much of the saving and loading of process state information while switching from Pi to Pj is thus redundant. This feature is exploited to reduce switching overhead; the notion of a thread is used for this purpose.
A thread is a program execution that uses the resources of a process. Since a thread is a program execution, it has its own stack and CPU state. Threads of the same process share code, data, and resources with one another. The process abstraction can be used as before, except that distinct processes typically have distinct code and data parts.
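That threads of one process share data while keeping private stacks can be shown directly. A sketch with illustrative names:

```python
import threading

shared = []               # data shared by all threads of this process

def worker(name):
    # Each thread has its own stack, so 'note' below is private to it,
    # but all threads append to the same shared data of the process.
    note = f"{name} ran"  # local variable on this thread's private stack
    shared.append(note)

def run_threads():
    threads = [threading.Thread(target=worker, args=(n,))
               for n in ("t1", "t2", "t3")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(shared)  # completion order may vary, so sort for comparison
```

All three threads see the same `shared` list without any message passing, because they execute within one process environment.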
[Figure: (a) process Pi with its PCB, code, data, and a single stack; (b) the same process with three threads, each having its own stack, sharing the code and data of Pi.]
Process Pi has three threads, represented by wavy lines. The kernel allocates a stack and a thread control block (TCB) to each thread. The threads execute within the environment of Pi. The OS is aware of this fact, so it saves only the CPU state and the stack pointer while switching between threads of the same process.
Thread states and thread state transitions are analogous to process states and process state transitions. When a thread is created, it is put in the ready state, because its parent process already has the necessary resources allocated to it. It enters the running state when it is scheduled.
Advantages of threads
An application process can create many threads to execute its code. Creating multiple threads has more advantages than creating many processes: there is low overhead in switching the CPU from one thread to another thread of the same process, since the resource state is switched only when switching between threads of different processes.
The use of threads provides concurrency and can provide computation speed-up. If one thread of a process blocks on an I/O operation, the CPU can be switched to another thread of the same process. In an airline reservation system or a banking system, a new thread can be created to handle each new request; the OS schedules these threads to provide concurrency.
1. Low overhead: a thread's state consists only of the state of its computation (execution state); the resource allocation state and communication state are not part of the thread state. This leads to low switching overhead.
2. Speed-up: concurrency within a process can be realized by creating many threads. This can speed up the execution of an application on a uniprocessor and on multiprocessors; since only the thread state needs to be saved, switching is faster and cheaper.
3. Efficient communication: threads of a process can communicate with one another through their shared data space. This avoids the use of system calls for communication, thereby avoiding kernel overhead.
Implementation of threads
Threads are implemented in different ways; the main difference lies in how much the kernel and the application program know about the threads. There are three methods of implementing threads: kernel-level threads, user-level threads, and hybrid threads. Switching between kernel-level threads of a process is over 10 times faster than switching between processes; switching between user-level threads of a process is over 100 times faster than switching between processes.
Kernel-level threads

A kernel-level thread is implemented by the kernel; hence the creation and termination of kernel-level threads, and the checking of their status, are performed through system calls. When a process makes a create_thread system call, the kernel allocates an id to the new thread and allocates a TCB for it. The TCB contains a pointer to the PCB of the process.
[Figure: scheduling of kernel-level threads. On an event, the kernel selects a TCB and dispatches the corresponding thread; each TCB points to the PCB of its process.]
User-level threads
User-level threads are implemented by a thread library, which is linked to the code of a process. The kernel is not aware of the presence of user-level threads in a process; it sees only the process. The thread library code is part of each process; it performs scheduling to select a thread and organizes its execution, using the information in the TCBs to decide which thread should operate at any time.
[Figure: scheduling of user-level threads. The kernel selects a PCB; the thread library then maps the selected process to one of its threads using the TCBs.]
Actions of the thread library (N,R,B indicate running, ready and blocked)
[Figure: TCBs record the states of threads h1, h2, and h3 of process Pi in parts (a) and (b), while the PCB of Pi shows the process as running.]

[Figure: data structures in the three thread models, showing PCBs, TCBs, and KTCBs in parts (a), (b), and (c).]
Hybrid Thread Models

The hybrid model has both user-level threads and kernel-level threads. The thread library creates user-level threads in a process and associates a TCB with each user-level thread; the kernel creates kernel-level threads in the process and associates a KTCB with each kernel-level thread.
PROCESSES IN UNIX
Unix maintains two data structures holding control data about processes: 1. the u area (user area), and 2. the proc structure.
The u area includes:
- the PCB, holding the CPU status of the process when it is blocked
- a pointer to the proc structure
- the user and group ids
- information concerning the signal handlers for the process
- information concerning all open files and the current directory
- the terminal attached to the process, if any
- CPU usage information
The proc structure includes:
- process id
- process state
- priority
- pointers to the proc structure
- signal handling mask
- memory management information
TYPES OF PROCESSES
- User processes: execute user computations. A user provided with a terminal can create processes, form a tree of processes, and coordinate the processes' activities.
- Daemon processes: perform functions on a system-wide basis, controlling the computational environment of the system; examples are print spooling and network management. Once created, daemon processes exist throughout the lifetime of the OS.
- Kernel processes: execute the code of the kernel. They are concerned with the allocation and utilization of system resources and have access to the data structures of the OS.
A process is created by the fork system call. Process termination is by the exit system call: exit (status_code); where status_code is a code indicating the termination status of the process. A process can wait for the termination of a child process through the system call wait (addr (xyz)); where xyz is a variable within the address space of the waiting process Pi. The wait call stores the termination status of a terminated child process in xyz.
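A minimal sketch of this fork/exit/wait sequence, assuming a POSIX system; Python's os module wraps the same system calls. The status value 42 is arbitrary.

```python
import os

def fork_and_wait():
    """Parent forks a child; the child exits with status 42;
    the parent collects the termination status via wait."""
    pid = os.fork()
    if pid == 0:
        os._exit(42)                 # exit (status_code) in the child
    child, status = os.wait()        # parent blocks until a child terminates
    # wait reports which child terminated and with what status.
    return child == pid, os.waitstatus_to_exitcode(status)
```

The tuple returned by os.wait plays the role of xyz above: it is where the kernel deposits the terminated child's status for the parent.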
SIGNALS IN UNIX
- SIGCHLD: child process suspended
- SIGFPE: arithmetic fault
- SIGILL: illegal instruction
- SIGINT: Control-C
- SIGKILL: kill process
- SIGSEGV: segmentation fault
- SIGSYS: invalid system call
- SIGXCPU: CPU time limit exceeded
- SIGXFSZ: file size limit exceeded
[Figure: process state transitions in Unix. States include user running, kernel running, ready, blocked, and zombie; transitions include preemption and resource grant or I/O termination.]
PROCESS STATE TRANSITIONS IN UNIX

Unix has two distinct running states, called user running and kernel running. A process executes user code while in the user running state and kernel code while in the kernel running state. A transition from user running to kernel running occurs when there is an interrupt or an I/O operation. A process does not get blocked or preempted in the user running state.
Threads in SOLARIS

Solaris is a UNIX SVR4-based operating system. There are three kinds of threads:
- User threads: created and managed by the thread library.
- Lightweight processes (LWPs): intermediate between user threads and kernel threads.
- Kernel threads: created by the kernel for concurrency control.
[Figure: threads in Solaris. User threads of processes Pi and Pj are mapped by the threads library to LWP control blocks; the scheduler data structures select a kernel thread control block (KTCB).]
Chapter 4
Message Passing
Aspects of message passing:
- Naming: whether the processes participating in a message transfer are explicitly indicated or deduced by the kernel.
- Method for transferring messages: whether a sender process is blocked until a message sent by it is delivered, the order in which messages are delivered, etc.
- Kernel responsibilities: buffering of messages pending delivery to recipient processes; blocking and activation of processes.
In indirect naming, processes do not mention each others names in send and receive statements.
Exceptional conditions
Several exceptional conditions can arise during message delivery:
- The destination process mentioned in a send does not exist.
- A send cannot be executed because the kernel has run out of buffer memory.
- No message exists for a process when it executes a receive statement.