
Processes and Threads

Overview

- Thread fundamentals
- Solaris threads in some detail

Process: 2 Characteristics

Resource Ownership

- Virtual address space that holds the process image
- May own, from time to time, main memory, I/O channels and devices, and files
- The OS protects processes from one another

Scheduling/Execution

- An execution path (trace) through one or more programs

These are two independent issues!

Another Way to Think About It

Two major problems with the notion of a process:

- Only one processing activity is associated with each process, even though a process is an expensive resource to create and manage.
- Not efficient for applications that have several concurrent tasks that could run in parallel while sharing a common address space and other resources.
- Cannot take advantage of multiprocessors.

Solution: Generalize the notion of a process so that it can be associated with multiple activities.

Threads

- Unit of dispatching: thread
- Unit of resource ownership: process
- Multithreading: the ability of an OS to support multiple threads of execution within a single process

In a multithreaded environment:

Process: defined as the unit of resource allocation and the unit of protection. Associated with a process:

- A virtual address space that holds the process image
- Protected access to processors and to other processes

Within a process, there may be one or more threads, each with


- A thread execution state
- A saved thread context when not running (in effect, an independent program counter within the process)
- An execution stack
- Some per-thread static storage for local variables
- Access to the memory and resources of its process, shared with all other threads in that process

Why Threads?

Performance!

- It takes less time to create a new thread in an existing process than to create a brand-new process.
- It takes less time to terminate a thread than a process.
- It takes less time to switch between two threads of the same process.
- Threads enhance efficiency in communication between different executing activities.

You could do all of the same things with processes; it just would not be as efficient. (A rough timing sketch follows below.)
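The creation-cost claim can be checked with a minimal, hedged timing sketch in C (not part of the slides): it creates and reaps N child processes with fork()/waitpid(), then creates and joins N threads with pthread_create()/pthread_join(). The count N and the use of CLOCK_MONOTONIC are arbitrary choices; absolute numbers vary widely by OS and hardware, but thread creation is normally much cheaper.

    /* Rough comparison of process vs. thread creation cost on a POSIX
     * system.  Not from the slides; timings vary by OS and hardware. */
    #define _POSIX_C_SOURCE 200809L
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define N 1000

    static void *noop(void *arg) { return arg; }   /* trivial thread body */

    static double elapsed_ms(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void) {
        struct timespec t0, t1;

        /* Create and reap N child processes. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0) _exit(0);              /* child does nothing */
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d processes: %.1f ms\n", N, elapsed_ms(t0, t1));

        /* Create and join N threads inside this one process. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, noop, NULL);
            pthread_join(tid, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d threads:   %.1f ms\n", N, elapsed_ms(t0, t1));
        return 0;
    }

Compile with something like cc timing.c -lpthread (the file name is made up).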

Where Threads?

File Server:

- A new thread is created for each request.
- Many threads are created and destroyed in a short period.
- A multiprocessor can be exploited: several requests run in parallel.
- The threads share file data and coordinate their actions.

Another Server Example

Consider a server with N threads, each of which receives a request message from a port and processes it. On average, a request needs 2 milliseconds of processing and 8 milliseconds of I/O delay.

- One thread: 10 ms per request, so 100 requests/second.
- Two threads: schedule one while the other is blocked for I/O; the 8 ms of I/O becomes the bottleneck, giving 125 requests/second.
- Introduce caching with a 75% hit rate: average I/O drops to 2 ms, while cache management raises average processor time to 2.5 ms. The CPU is now the bottleneck: 400 requests/second.
- SMP with 2 processors: CPU demand per processor is halved, so the system is I/O bound again: 500 requests/second.

(The bottleneck arithmetic is spelled out in the sketch below.)
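As a small illustration (not from the slides), the figures above follow from a simple bottleneck rule: with enough threads to overlap CPU work and I/O, throughput is limited by whichever resource is busier per request. The sketch below just encodes that rule and reproduces the four numbers.

    /* Reproduces the bottleneck arithmetic from the server example.
     * Assumes enough threads to fully overlap CPU work with I/O. */
    #include <stdio.h>

    static double throughput(double cpu_ms, double io_ms, int cpus) {
        double cpu_demand = cpu_ms / cpus;            /* CPU time per request */
        double bottleneck = cpu_demand > io_ms ? cpu_demand : io_ms;
        return 1000.0 / bottleneck;                   /* requests per second */
    }

    int main(void) {
        printf("single thread : %.0f/s\n", 1000.0 / (2.0 + 8.0)); /* no overlap */
        printf("two threads   : %.0f/s\n", throughput(2.0, 8.0, 1));
        printf("with cache    : %.0f/s\n", throughput(2.5, 2.0, 1));
        printf("cache + 2 CPUs: %.0f/s\n", throughput(2.5, 2.0, 2));
        return 0;
    }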

Why Threads on a Uniprocessor?


- Foreground and background work
  - Spreadsheet: display vs. updating
- Asynchronous processing
  - Word processor: background thread to save to disk (see the autosave sketch after this list)
- Speed of execution
  - Compute on the current batch of data while reading the next
- Modular program structure
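Below is a minimal, hedged sketch of the word-processor case: a background thread periodically writes shared state to disk while the main thread keeps working. The file name draft.autosave, the 5-second interval, and the fake "editing" loop are all invented for illustration.

    /* Background autosave thread alongside a "foreground" editing loop. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static char document[4096];                       /* shared state */
    static pthread_mutex_t doc_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *autosave(void *arg) {
        (void)arg;
        for (;;) {
            sleep(5);                                 /* save every 5 seconds */
            pthread_mutex_lock(&doc_lock);
            FILE *f = fopen("draft.autosave", "w");
            if (f) { fputs(document, f); fclose(f); }
            pthread_mutex_unlock(&doc_lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, autosave, NULL);   /* background work */

        for (int i = 0; i < 20; i++) {                /* "foreground" editing */
            pthread_mutex_lock(&doc_lock);
            strncat(document, "typed text ",
                    sizeof(document) - strlen(document) - 1);
            pthread_mutex_unlock(&doc_lock);
            sleep(1);
        }
        return 0;                       /* process exit also ends the saver */
    }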

Thread Functionality

Thread States

Running, Ready and Blocked

Suspension is a process-level notion

Four Basic Operations Associated with a change in thread state:


- Spawn
- Block
  - Does this cause the entire process to block?
- Unblock
- Finish
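One user-visible analogue of the four operations, using POSIX threads rather than the OS-internal view the slides describe: spawn maps to pthread_create(), block and unblock to waiting on and signaling a condition variable, and finish to the thread returning and being joined. Note that in this sketch only the waiting thread blocks, not the whole process; whether a blocking call stalls the entire process depends on how threads are implemented, as discussed later.

    /* spawn -> pthread_create, block -> pthread_cond_wait,
     * unblock -> pthread_cond_signal, finish -> return + pthread_join. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  go   = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    static void *worker(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!ready)                    /* block: sleep until unblocked */
            pthread_cond_wait(&go, &lock);
        pthread_mutex_unlock(&lock);
        puts("worker: unblocked, finishing");
        return NULL;                      /* finish */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);   /* spawn */

        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&go);                   /* unblock */
        pthread_mutex_unlock(&lock);

        pthread_join(tid, NULL);                    /* reap finished thread */
        return 0;
    }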

What is the Downside?


- All of the threads of a process share the same address space and other resources, such as open files.
- The activities of the various threads must be synchronized so that they do not interfere with each other or corrupt shared data structures.
- Care is needed not to get carried away and end up with a huge number of thrashing threads.
- We will discuss this in depth.
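A minimal sketch of the first two points (generic POSIX-threads code, not from the slides): two threads update a shared counter, and a mutex keeps the updates from interfering. Removing the lock typically makes the final count come out short, because increments from the two threads interleave.

    /* Why synchronization is needed: a shared counter updated by two threads. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *bump(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* remove the lock/unlock pair to   */
            counter++;                    /* see lost updates on most machines */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }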

User-Level vs. Kernel-Level Threads


There are two broad categories. User-level threads (ULTs): all of the work of thread management is done in user space by a threads library, with no magic hooks into secret kernel routines.

Advantages of ULT

- Thread creation, scheduling, and switching do not require kernel-mode privileges: the overhead of a mode switch is saved and no kernel resources are consumed, so ULTs can be fast and cheap (a toy user-mode context switch is sketched below).
- Scheduling can be application specific.
- ULTs can run on any OS.
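To make the "no kernel involvement" point concrete, here is a toy sketch (an illustration added here, not part of the slides) that switches control between the main flow and a second user-level context using the POSIX ucontext calls. Everything happens in user mode; no kernel thread-management service is invoked. Real ULT libraries use their own faster context-switch code, and ucontext is marked obsolescent, but the idea is the same.

    /* User-mode "context switch" between two flows of control. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void task(void) {
        puts("task: first slice");
        swapcontext(&task_ctx, &main_ctx);   /* "yield" back to main */
        puts("task: second slice");
    }                                        /* returning follows uc_link */

    int main(void) {
        static char stack[64 * 1024];        /* stack for the second context */

        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp   = stack;
        task_ctx.uc_stack.ss_size = sizeof(stack);
        task_ctx.uc_link          = &main_ctx;   /* where to go when task ends */
        makecontext(&task_ctx, task, 0);

        puts("main: dispatch task");
        swapcontext(&main_ctx, &task_ctx);       /* switch without the kernel */
        puts("main: task yielded, resume it");
        swapcontext(&main_ctx, &task_ctx);
        puts("main: task finished");
        return 0;
    }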

Disadvantages of ULT

- When a thread makes a blocking system call, the entire process blocks.
- A ULT package cannot easily take advantage of multiprocessing.

KLT

- All of the work of thread management is done by the kernel. There is no thread-management code in the application, only an API call to the kernel thread facility (e.g., W2K, Linux).
- The kernel maintains context information for the process as a whole and for the individual threads within the process.
- This overcomes the two major drawbacks of ULTs.
- Disadvantage: transferring control from one thread to another requires a mode switch to the kernel.

Combined Approach (Solaris)

- Thread creation is done completely in user space, as is the bulk of the scheduling and synchronization of threads within an application.
- Multiple ULTs from a single application are mapped onto some smaller or equal number of KLTs.
- The programmer may adjust the number of KLTs for a particular application and machine to achieve the best overall results.
- Could be the best of both worlds. (The POSIX-level knobs are sketched below.)
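At the POSIX API level, the knobs described above correspond roughly to pthread_setconcurrency(), which hints how many kernel entities (LWPs) should back the unbound threads, and pthread_attr_setscope(), where PTHREAD_SCOPE_SYSTEM requests a bound thread and PTHREAD_SCOPE_PROCESS an unbound one. (Solaris's native API spells these thr_setconcurrency() and the THR_BOUND flag.) The sketch below is only an illustration; 1:1 implementations such as Linux NPTL ignore the concurrency hint and may reject process scope.

    /* Hedged sketch of the M:N knobs at the POSIX level; the "work"
     * function is invented for illustration.  Build with -lpthread. */
    #define _XOPEN_SOURCE 700
    #include <pthread.h>
    #include <stdio.h>

    static void *work(void *arg) {
        printf("worker %ld running\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t unbound, bound;
        pthread_attr_t process_scope, system_scope;

        /* Hint: back the unbound threads with about 4 kernel entities (LWPs). */
        pthread_setconcurrency(4);

        /* Unbound thread: scheduled by the thread library onto the LWP pool. */
        pthread_attr_init(&process_scope);
        if (pthread_attr_setscope(&process_scope, PTHREAD_SCOPE_PROCESS) != 0)
            fprintf(stderr, "process scope not supported here\n");
        pthread_create(&unbound, &process_scope, work, (void *)1L);

        /* Bound thread: one kernel-scheduled entity all to itself. */
        pthread_attr_init(&system_scope);
        pthread_attr_setscope(&system_scope, PTHREAD_SCOPE_SYSTEM);
        pthread_create(&bound, &system_scope, work, (void *)2L);

        pthread_join(unbound, NULL);
        pthread_join(bound, NULL);
        return 0;
    }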

Win2K Processes and Threads Resources


http://www.win2000mag.com/Articles/Index.cfm?ArticleID=7597&pg=1
http://www.microsoft.com/mspress/books/sampchap/4354.asp#151

Solaris Model

- Process: a normal UNIX process. Includes the user's address space, stack, and process control block.
- Threads (user level): implemented through a threads library in the address space of a process; invisible to the OS. The interface for application parallelism.
- Lightweight processes (LWPs): the mapping between ULTs and kernel threads. Each LWP supports one or more ULTs and maps to one kernel thread.
- Kernel threads: the fundamental entities that can be scheduled and dispatched to run on one of the system processors.

Motivation, Examples

Flexibility = power. Examples:

- Multiple windows, only one active at a time: many threads on one LWP; creation, destruction, blocking, etc. happen without involving the kernel.
- An application with threads that block: multiple LWPs, so the non-blocked threads keep running naturally.
- Independent matrix computations: a 1-to-1 mapping of threads to LWPs.

Thread Execution

4 Scheduling States for Threads

- Active: assigned to an LWP; executes when the underlying kernel thread executes.
- Runnable: ready to run, but there are not enough LWPs for it to get one. Remains in this state until an active thread loses its LWP or a new LWP becomes available.
- Sleeping: waiting on a synchronization variable.
- Stopped: a call to thr_suspend() was made; the thread needs thr_continue() to run again.
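A hedged sketch of the Stopped state using the Solaris threads API named above (thr_suspend()/thr_continue()). It assumes the Solaris <thread.h> interface and, on classic Solaris, linking with -lthread; consult the man pages for exact semantics. The worker body and sleep intervals are invented for illustration.

    /* Suspend and resume a worker thread (Solaris threads API). */
    #include <stdio.h>
    #include <thread.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 10; i++) {
            printf("tick %d\n", i);
            sleep(1);
        }
        return NULL;
    }

    int main(void) {
        thread_t tid;

        /* NULL/0 stack: let the library allocate one; no special flags. */
        thr_create(NULL, 0, worker, NULL, 0, &tid);

        sleep(3);
        thr_suspend(tid);      /* thread moves to the Stopped state */
        puts("worker stopped for 3 seconds");
        sleep(3);
        thr_continue(tid);     /* back to Runnable/Active */

        thr_join(tid, NULL, NULL);
        return 0;
    }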

Thread Scheduling Details


- Global scheduling: done by the kernel. Applies to threads bound to an LWP.
- Local scheduling: done by the threads library. The library chooses which unbound thread is put on which LWP.

Once a thread starts running on an LWP, it will continue to run, potentially forever. There are four ways to cause a running thread to context switch:

- Synchronization: T1 requests a mutex lock and does not get it, so T1 sleeps.
- Suspension: thr_suspend() is called on T1.
- Preemption: the running thread does something that makes a higher-priority thread runnable.
- Yielding: the thread yields to another runnable thread of the same priority.

The scheduler for unbound threads uses a very simple algorithm to decide which thread to run: pick the runnable thread with the highest priority.
