Operating-System Services
An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs. These operating system services are
provided for the convenience of the programmer, to make the programming task easier.
File-system manipulation - The file system is of particular interest. Programs need to read
and write files and directories, create and delete them, search them, list file information,
and manage permissions.
Communications - Processes may exchange information, either on the same computer or between
computers over a network. Communications may be implemented via:
- Shared memory - in which two or more processes read and write to a shared section
of memory.
- Message passing - in which packets of information in predefined formats are moved
between processes by the operating system.
Error detection - The OS needs to be constantly aware of possible errors.
- Errors may occur in the CPU and memory hardware, in I/O devices, and in user programs.
- For each type of error, the OS should take the appropriate action to ensure correct and
consistent computing.
- Debugging facilities can greatly enhance users' and programmers' abilities to
use the system efficiently.
Another set of operating system functions exists not for helping the user but rather for
ensuring the efficient operation of the system itself.
Resource allocation - When multiple users or multiple jobs are running concurrently, resources
must be allocated to each of them.
- Many types of resources - CPU cycles, main memory, file storage, I/O devices.
Accounting - To keep track of which users use how much and what kinds of computer
resources. This record keeping may be used for accounting or simply for accumulating usage
statistics. Usage statistics may be a valuable tool for researchers who wish to reconfigure the
system to improve computing services.
Protection and security - The owners of information stored in a multiuser or networked
computer system may want to control use of that information, and concurrent processes should
not interfere with each other.
- Protection involves ensuring that all access to system resources is controlled.
- Security of the system from outsiders requires user authentication; it extends to
defending external I/O devices from invalid access attempts.
System Calls
System calls provide an interface to the services made available by an operating system. They
are typically written in a high-level language (C or C++), though certain low-level tasks may
have to be written in assembly language. System calls are mostly accessed by programs via a
high-level Application Programming Interface (API) rather than by direct system-call use. The
API specifies a set of functions that are available to an application programmer, including the
parameters that are passed to each function and the return values the programmer can expect.
The run-time support system (a set of functions built into libraries included with a compiler)
provides a system call interface that serves as the link to system calls made available by the
operating system. The system-call interface intercepts function calls in the API and invokes the
necessary system calls within the operating system. Typically, a number is associated with each
system call, and the system-call interface maintains a table indexed according to these numbers.
Chapter 03 (Processes)
Process
A process is a program in execution. The execution of a process must progress in a sequential
fashion. A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
A process is more than the program code, which is also called the text section. It includes
the current activity, as represented by the value of the program counter and the contents of
the processor's registers. A process generally also includes the process stack, which contains
temporary data (such as function parameters, return addresses, and local variables), and a
data section, which contains global variables. A process may also include a heap, which is
memory that is dynamically allocated during process run time.
A program becomes a process when an executable file is loaded into memory.
Fig: Process in Memory
Process State
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. A process may be in one of the following states:
1. New: The process is being created.
2. Running: Instructions are being executed.
3. Waiting: The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
Threads
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of
registers. Traditional (heavyweight) processes have a single thread of control - There is one
program counter, and one sequence of instructions that can be carried out at any given time.
Multi-threaded applications have multiple threads within a single process, each with its
own program counter, stack, and set of registers, but sharing common code, data, and certain
structures such as open files. Threads are very useful in modern programming whenever a
process has multiple tasks to perform independently of the others.
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the
loaded processes share the CPU using time multiplexing.
Scheduling queues
Scheduling queues refer to queues of processes or devices. When a process enters the
system, it is put into a job queue, which consists of all processes in the system. The
operating system also maintains other queues, such as device queues. A device queue holds
the processes waiting for a particular I/O device; each device has its own device queue.
This figure shows the queuing diagram of process scheduling.
- Each queue is represented by a rectangular box.
- The circles represent the resources that serve the queues.
- The arrows indicate the flow of processes in the system.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types:
1. Long Term Scheduler
It is also called the job scheduler. The long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the job queue and loads
them into memory for execution, where they become eligible for CPU scheduling. The primary
objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and
processor-bound ones. It also controls the degree of multiprogramming: if the degree of
multiprogramming is to remain stable, the average rate of process creation must equal the
average departure rate of processes leaving the system.
2. Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance
in accordance with a chosen set of criteria. It carries out the change of a process from the
ready state to the running state: the CPU scheduler selects one process from among the
processes that are ready to execute and allocates the CPU to it. The short-term scheduler,
also known as the dispatcher, executes most frequently and makes the fine-grained decision
of which process to execute next. It is faster than the long-term scheduler.
3. Medium Term Scheduler
Medium-term scheduling is part of swapping. It removes processes from memory, which
reduces the degree of multiprogramming, and it is in charge of handling the swapped-out
processes. A running process may become suspended if it makes an I/O request; a suspended
process cannot make any progress towards completion. In this condition, to remove the
process from memory and make space for another process, the suspended process is moved to
secondary storage. This is called swapping, and the process is said to be swapped out or
rolled out. Swapping may be necessary to improve the process mix.
Context Switch
When an interrupt occurs, the system needs to save the current context of the process running
on the CPU so that it can restore that context when its processing is done, essentially suspending
the process and then resuming it. The context is represented in the PCB of the process. It
includes the value of the CPU registers, the process state, and memory-management
information. Generically, we perform a state save of the current state of the CPU, be it in kernel
or user mode, and then a state restore to resume operations.
Switching the CPU to another process requires performing a state save of the current process
and a state restore of a different process. This task is known as a context switch. When a context
switch occurs, the kernel saves the context of the old process in its PCB and loads the saved
context of the new process scheduled to run. Context-switch time is pure overhead, because
the system does no useful work while switching. Switching speed varies from machine to
machine, depending on the memory speed, the number of registers that must be copied, and
the existence of special instructions (such as a single instruction to load or store all registers). A
typical speed is a few milliseconds.
Chapter 04 (Threads)
Thread
A thread is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a register
set, and a stack. It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and signals. A traditional
(or heavyweight) process has a single thread of control. If a process has multiple threads of
control, it can perform more than one task at a time. The figure illustrates the difference
between a traditional single-threaded process and a multithreaded process.
Example:
A web browser might have one thread display images or text while another thread retrieves
data from the network. A word processor may have a thread for displaying graphics, another
thread for responding to keystrokes from the user, and a third thread for performing spelling
and grammar checking in the background.
A busy web server may have several (perhaps thousands of) clients concurrently accessing it. If
the web server ran as a traditional single-threaded process, it would be able to service only one
client at a time, and a client might have to wait a very long time for its request to be serviced.
If the web-server process is multithreaded, the server will create a separate thread that
listens for client requests. When a request is made, rather than creating another process,
the server creates a new thread to service the request and resumes listening for additional
requests. This is illustrated in the figure.
Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness Multithreading an interactive application may allow continued execution if
part of the process is blocked or is performing a lengthy operation, which is especially
important for user interfaces.
2. Resource Sharing Threads share the memory and the resources of the process to which
they belong by default. The benefit of sharing code and data is that it allows an application
to have several different threads of activity within the same address space.
3. Economy Allocating memory and resources for process creation is costly. Creating a thread
is cheaper than creating a process, and switching between threads has lower overhead than
context switching between processes. In Solaris, for example, creating a process is about
thirty times slower than creating a thread, and context switching is about five times slower.
4. Scalability The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on different processing cores. A
single-threaded process can run on only one processor, regardless of how many are
available.
CPU Scheduler
Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to
be executed. The selection is carried out by the short-term scheduler, or CPU scheduler,
which selects one process from among those that are ready to execute and allocates the CPU
to it.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of a child process)
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
Scheduling under 1 and 4 is nonpreemptive.
- Once a process is in the running state, it will continue until it terminates or blocks itself.
Scheduling under 2 and 3 is preemptive.
- Currently running process may be interrupted and moved to the Ready state by OS.
- Allows for better service since any one process cannot monopolize the processor for very
long
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.
Scheduling Criteria
The scheduling criteria include the following:
CPU utilization
- We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range
from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily loaded system).
Throughput
- The number of processes that are completed per time unit is called throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be ten
processes per second.
Turnaround time
- The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time
- The amount of time that a process spends waiting in the ready queue. Waiting time is
the sum of the periods spent waiting in the ready queue.
Response time
- The response time is the time from the submission of a request until the first response is
produced; that is, the time it takes to start responding, not the time it takes to output
the full response.
Scheduling Algorithm
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU. There are many different CPU-scheduling algorithms.
First-Come, First-Served Scheduling (FCFS)
- Advantages
Easy to implement.
Minimum overhead.
- Disadvantages
Average waiting time is high.
The convoy effect occurs: even a very small process must wait for its turn to
use the CPU, and short processes stuck behind a long process result in lower
CPU utilization.
Shortest-Job-First Scheduling (SJFS)
- Advantages
Minimum average waiting time.
SJF algorithm is optimal.
- Disadvantages
Indefinite postponement (starvation) of some jobs is possible.
The difficulty is knowing the length of the next CPU request.
Priority Scheduling
- Advantage
Good response for the highest priority processes.
- Disadvantage
Starvation (indefinite blocking): low-priority processes may never execute.
The solution to starvation is aging: as time progresses, the priority of a
waiting process is increased.
Round Robin
- Advantages
Provides fair CPU allocation.
Good response time for short processes.
- Disadvantages
Requires selection of a good time slice.
Throughput is low if the time quantum is too small, because of the extra
context switches.
Load Balancing
On multiprocessor systems, load balancing attempts to keep the workload evenly distributed
across all processors. Two general approaches are used:
- Push migration - a specific task periodically checks the load on each processor and, if it
finds an imbalance, pushes processes from overloaded processors to idle or less-busy ones.
- Pull migration - an idle processor pulls a waiting task from a busy processor.
Algorithm Evaluation
1. Deterministic modeling
2. Queueing models
3. Simulations
4. Implementation