
OPERATING SYSTEM

Hardware, by itself, is just finely engineered machinery, and a machine
does nothing until something makes it work. In the case of a computer,
either we operate the hardware directly or something else does it on our
behalf. That something else is the operating system.

An operating system is a program which acts as an intermediary between
the user and the hardware (i.e. the computer's resources).

An operating system is an essential component of a computer system:
it controls and coordinates all the other components of the system.
Types of OS
Single Program
Multi Program
Time Sharing
Real Time
Multiprocessing
Interactive / GUI
Single Program
As the name suggests, this is a single-user
operating system, so only one user program
can be supported and executed by it at any
point of time.
Multi Program
Unlike a single-program OS, this is a multiuser
OS. It supports multiprogramming, i.e. more
than one user can be served by it; therefore,
more than one user program is loaded and
active in the main store at the same time.
These active programs are executed one by
one using scheduling techniques.

Time Sharing
Time sharing is a technique which enables many people, located at
various terminals, to use a particular computer system at the same
time. Time sharing, or multitasking, is a logical extension of
multiprogramming: the processor's time is shared among multiple
users, which is why the technique is termed time-sharing.

Multiple jobs are executed by the CPU by switching between them,
and the switches occur so frequently that each user receives an
apparently immediate response. For example, in transaction
processing, the processor executes each user program in a short
burst, or quantum, of computation: if n users are present, each user
gets a time quantum in turn. When a user submits a command, the
response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to
provide each user with a small portion of the processor's time. Many
computer systems that were designed primarily as batch systems have
been modified into time-sharing systems.
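The quantum-by-quantum switching described above can be sketched as a
short round-robin simulation. The job list, quantum value, and tuple
representation below are illustrative assumptions, not taken from any
real scheduler:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-sharing: each job is (name, remaining_time).
    The CPU runs each job for at most `quantum` units, then switches."""
    ready = deque(jobs)
    timeline = []                     # order in which bursts execute
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))  # job runs for one short burst
        if remaining > run:           # unfinished: back to the ready queue
            ready.append((name, remaining - run))
    return timeline

# Three users' jobs; each gets the CPU in short bursts in turn.
slices = round_robin([("A", 5), ("B", 3), ("C", 4)], quantum=2)
print(slices)
```

Because the switches interleave the jobs finely, each user sees progress
on their job long before any single job runs to completion.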

Real Time
A real-time system is defined as a data-processing system in
which the time interval required to process and respond to
inputs is so small that the system can control its environment.
Real-time processing is always online, whereas an online system
need not be real-time. The time taken by the system to respond
to an input and display the required updated information is
termed the response time; in real-time processing this response
time is much smaller than in ordinary online processing.

Real-time systems are used when there are rigid time
requirements on the operation of a processor or the flow of
data, and they are often used as control devices in dedicated
applications. A real-time operating system has well-defined,
fixed time constraints; if those constraints are not met, the
system fails. Examples include scientific experiments, medical
imaging systems, industrial control systems, weapon systems,
robots, home-appliance controllers, and air traffic control systems.

Multiprocessing
Multiprocessing refers to the ability of a
system to support more than one processor
at the same time. Applications in a multi-
processing system are broken into smaller
routines that run independently. The
operating system allocates these threads
across the processors, improving the
performance of the system.
Interactive / GUI
These operating systems are interactive in nature.
GUI operating systems are much easier for end
users to learn and use because commands do not
need to be known or memorized. Because of this
ease of use, GUI operating systems have become
the dominant operating systems among end users
today. A GUI uses windows, icons, and menus to
carry out commands such as opening files,
deleting files, and moving files.
Need of Operating System
An operating system is rather like a secretary. The
boss gives orders to his secretary, and the
secretary himself decides how, what, and when
to do things. In the same way, we pass our
requests to the operating system, and the
operating system carries them out for us. The
primary goal of an operating system is thus to
make the computer system convenient to use; its
secondary goal is to use the computer hardware
in an efficient manner.
Uses of Operating System
Easy interaction between the human & computer.
Starting computer operation automatically when
power is turned on.
Loading & scheduling user programs.
Controlling input & output.
Controlling program execution.
Managing use of main memory.
Providing security to user programs.

Operating Systems Functions
Operating system functions broadly fall into three
categories: essential functions, monitoring
functions, and service functions.

Essential functions ensure effective utilization of
computer system resources; monitoring functions
monitor and collect information related to system
performance; and service functions enhance the
facilities provided to users.
Processor Management
Processor management means managing the processor,
i.e. the CPU. This function is therefore also
termed CPU scheduling.

Multiprogramming undoubtedly improves the overall efficiency
of a computer. Whenever the CPU becomes idle, it is the
job of the CPU scheduler to select another process from the
ready queue to run next. The storage structure for the ready
queue and the algorithm used to select the next process are
not necessarily a FIFO queue. There are several alternatives
to choose from, as well as numerous adjustable parameters
for each algorithm, which is the basic subject of this entire
chapter.
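To illustrate the point that the ready queue need not be FIFO, here are
two hypothetical selection policies side by side; the process records
and field names are invented for this sketch:

```python
# Two ways a CPU scheduler might pick the next process from the
# ready queue: first-in-first-out versus shortest-job-first (SJF).

def pick_fifo(ready_queue):
    """FIFO: run the process that has waited longest."""
    return ready_queue[0]

def pick_sjf(ready_queue):
    """SJF: run the process with the smallest estimated CPU burst."""
    return min(ready_queue, key=lambda p: p["burst"])

ready = [{"pid": 1, "burst": 8}, {"pid": 2, "burst": 3}, {"pid": 3, "burst": 5}]
print(pick_fifo(ready)["pid"])  # the oldest arrival
print(pick_sjf(ready)["pid"])   # the shortest job
```

With the same ready queue, the two policies choose different processes,
which is exactly why the choice of algorithm is an adjustable design
decision rather than a fixed rule.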

The Benefits of Multi Programming
Increased CPU utilization.
Higher total job throughput.

Throughput is an important measure of system performance. It is
calculated as follows :

The number of jobs completed
Throughput = -----------------------------------------------
Total time taken to complete the jobs
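A minimal worked example of this formula (the numbers are hypothetical):

```python
def throughput(jobs_completed, total_time):
    """Throughput = number of jobs completed / total time taken."""
    return jobs_completed / total_time

# e.g. 12 jobs completed in 4 hours gives 3 jobs per hour
print(throughput(12, 4))
```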

Another important factor that influences throughput is the priority
assigned to different jobs, i.e. job scheduling.

Job Scheduling
Job scheduling not only assigns priority to jobs but also
admits new jobs for processing at appropriate times. Before
we start with job scheduling techniques, let us first understand
basic terminology.

A program is a set of instructions submitted to the computer.
A process is a program in execution. "Job" and "process" are
terms which are used almost interchangeably.
Process State
A process is a program in execution. During
execution, the process changes its state. The state
of a process is defined by its current activity. A
process can be in one of these states: new, active,
waiting, or halted.
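The states listed above can be sketched as a small state machine. The
transition table below is an illustrative assumption (real operating
systems define more states and more transitions than this):

```python
from enum import Enum

class State(Enum):
    NEW = "new"          # process is being created
    ACTIVE = "active"    # instructions are being executed
    WAITING = "waiting"  # waiting for some event (e.g. I/O completion)
    HALTED = "halted"    # process has finished execution

# Allowed transitions between states (simplified for illustration)
TRANSITIONS = {
    State.NEW: {State.ACTIVE},
    State.ACTIVE: {State.WAITING, State.HALTED},
    State.WAITING: {State.ACTIVE},
    State.HALTED: set(),
}

def move(current, target):
    """Advance a process to `target`, rejecting illegal transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = move(State.NEW, State.ACTIVE)  # a new process becomes active
print(state)
```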
Process Control Block
Process Control Block (PCB, also called Task Controlling
Block, Task Struct, or Switch frame) is a data structure in
the operating system kernel containing the information
needed to manage a particular process. The PCB is "the
manifestation of a process in an operating system".

The role of the PCBs is central in process management: they
are accessed and/or modified by most OS utilities, including
those involved with scheduling, memory and I/O resource
access and performance monitoring. It can be said that the
set of the PCBs defines the current state of the operating
system. Data structuring for processes is often done in terms
of PCBs. For example, pointers to other PCBs inside a PCB
allow the creation of those queues of processes in various
scheduling states ("ready", "blocked", etc.) that we previously
mentioned.
In modern sophisticated multitasking systems, the PCB stores many different
items of data, all needed for correct and efficient process management.
Though the details of these structures are obviously system-dependent, we can
identify some very common parts, and classify them in three main categories:

Process identification data
Processor state data
Process control data

Process identification data always include a unique identifier for the process
(almost invariably an integer) and, in a multiuser multitasking system, data
such as the identifier of the parent process, the user identifier, the user
group identifier, etc. The process id is particularly relevant, since it is often
used to cross-reference the OS tables, e.g. making it possible to identify which
process is using which I/O devices or memory areas. Processor state data are
those pieces of information that define the status of a process when it is
suspended, allowing the OS to restart it later and still execute correctly. This
always includes the contents of the CPU's general-purpose registers, the CPU
process status word, stack and frame pointers, etc. During a context switch, the
running process is stopped and another process is given a chance to run. The
kernel must stop the execution of the running process, copy out the values in
the hardware registers to its PCB, and update the hardware registers with the
values from the PCB of the new process.
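The context-switch sequence just described can be sketched with a
simplified PCB. The field names, and the dictionary standing in for the
CPU's hardware registers, are assumptions made purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Process identification data
    pid: int
    parent_pid: int
    # Processor state data (saved and restored on context switch)
    registers: dict = field(default_factory=dict)
    program_counter: int = 0
    # Process control data
    state: str = "ready"
    priority: int = 0

def context_switch(cpu, old, new):
    """Save the running process's CPU state into its PCB, then load
    the next process's saved state back into the (simulated) CPU."""
    old.registers = dict(cpu["registers"])  # copy out hardware registers
    old.program_counter = cpu["pc"]
    old.state = "ready"
    cpu["registers"] = dict(new.registers)  # load the new process's state
    cpu["pc"] = new.program_counter
    new.state = "running"

cpu = {"registers": {"r0": 7}, "pc": 100}
p1 = PCB(pid=1, parent_pid=0, registers={"r0": 7},
         program_counter=100, state="running")
p2 = PCB(pid=2, parent_pid=0, registers={"r0": 42}, program_counter=200)
context_switch(cpu, p1, p2)
print(cpu["pc"], p2.state)
```

The key point the sketch shows is that the hardware state travels
through the PCB: it is copied out of the CPU into the old process's PCB
and copied in from the new process's PCB, so each process resumes
exactly where it left off.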

Process control information is used by the OS to manage the process
itself. This includes:
The process scheduling state, e.g. "ready",
"suspended", etc., and other scheduling information as
well, such as a priority value and the amount of time elapsed
since the process gained control of the CPU or since it was
suspended. For a suspended process, the identity of the event
the process is waiting for must also be recorded.
Process structuring information: process's children id's,
or the id's of other processes related to the current one
in some functional way, which may be represented as a
queue, a ring or other data structures.
Interprocess communication information: various flags,
signals and messages associated with the communication
among independent processes may be stored in the PCB.
Process privileges, in terms of allowed/disallowed access
to system resources.
Accounting information, such as when the process was
last run, how much CPU time it has accumulated, etc.

Storage Management
The term storage management encompasses the technologies
and processes organizations use to maximize or improve the
performance of their data storage resources. It's a broad
category that includes virtualization, replication, mirroring,
security, compression, traffic analysis, process automation,
storage provisioning and related techniques.
By some estimates, the amount of digital information stored in
the world's computer systems is doubling every year. As a
result, organizations feel constant pressure to expand their
storage capacity. However, doubling a company's storage
capacity every year is an expensive proposition. In order to
reduce some of those costs and improve the capabilities and
security of their storage solutions, organizations turn to a
variety of storage management solutions.

Many storage management technologies, like storage virtualization,
deduplication and compression, allow companies to better utilize their
existing storage. The benefits of these approaches include lower costs --
both the one-time capital expenses associated with storage devices and
the ongoing operational costs for maintaining those devices.
Most storage management techniques also simplify the management of
storage networks and devices. That can allow companies to save time and
even reduce the number of IT workers needed to maintain their storage
systems, which in turn, also reduces overall storage operating costs.
Storage management can also help improve a data center's performance.
For example, compression and related technologies can enable faster I/O,
and automatic storage provisioning can speed the process of assigning
storage resources to various applications.
In addition, virtualization and automation technologies can help an
organization improve its agility. These storage management techniques
make it possible to reassign storage capacity quickly as business needs
change, reducing wasted space and improving a company's ability to
respond to evolving market conditions.
Finally, many storage management technologies, such as replication,
mirroring and security, can help a data center improve its reliability and
availability. These techniques are often particularly important for backup
and archive storage, although they also apply to primary storage. IT
departments often turn to these technologies for help in meeting SLAs or
achieving compliance goals.