
BT0036 – 01

Marks –30
OPERATING SYSTEMS

No. of Credits: 2 Book ID: BT0036

Q.1 Define:

(a) Batch Systems: Batch systems allowed automatic job sequencing by a resident operating system
and greatly improved the overall utilization of the computer. The computer no longer had to wait
for human operation. CPU utilization was still low, however, because of the slow speed of the I/O
devices relative to that of the CPU. Off-line operation of slow devices provides a means to use
multiple reader-to-tape and tape-to-printer systems for one CPU. Spooling allows the CPU to
overlap the input of one job with the computation and output of other jobs.
(b) Multiprogramming: To improve the overall performance of the system, developers introduced
the concept of multiprogramming. With multiprogramming, several jobs are kept in memory at one
time; the CPU is switched back and forth among them to increase CPU utilization and to decrease
the total time needed to execute the jobs.
(c) Time sharing systems: Time-shared operating systems allow many users to use a computer
system interactively at the same time. Time-sharing systems were developed to provide interactive
use of a computer system at a reasonable cost. A time-shared operating system uses CPU
scheduling and multiprogramming to provide each user with a small portion of a time-shared
computer.
(d) Parallel Processing: Parallel systems have more than one CPU in close communications; the
CPUs share the computer bus, and sometimes share memory and peripheral devices. Such systems
can provide increased throughput and enhanced reliability.
(e) Distributed Systems: A distributed system is a collection of processors that do not share memory
or a clock. Instead, each processor has its own local memory, and the processors communicate
with one another through various communication lines, such as high speed buses or telephone
lines. A distributed system provides the user with access to the various resources located at remote
sites. There is a variety of reasons for building distributed systems, the major ones being these:

• Resource sharing: If a number of different sites are connected to one another, then a user at
one site may be able to use the resources available at another.
• Computation speedup: If a particular computation can be partitioned into a number of sub
computations that can run concurrently, then a distributed system may allow us to distribute
the computation among the various sites- to run that computation concurrently.
• Reliability: If one site fails in a distributed system, the remaining sites can potentially
continue operating.
• Communication: There are many instances in which programs need to exchange data with
one another; in a distributed system, such programs can communicate even when they run at different sites.
(f) Real Time Systems: A hard real-time system is often used as a control device in a dedicated
application. A hard real-time operating system has well-defined, fixed time constraints.
Processing must be done within the defined constraints, or the system will fail. Soft real-time
systems have less stringent timing constraints, and do not support deadline scheduling.

Q.2 Define Process. With a Block diagram explain various states a process resides in.
Ans. A process is a program in execution. A process is more than the program code; it also includes the
current activity, as represented by the value of the program counter and the contents of the processor's
registers. The execution of a process progresses in a sequential fashion; that is, at any time, at
most one instruction is executed on behalf of the process.
Process State: As a process executes, it changes state. The state of a process is defined in part by
the current activity of that process. Each process may be in one of the following states:

• New: The process is being created.


• Running: Instructions are being executed on the CPU.
• Waiting: The process is waiting for some event to occur, such as the completion of an I/O
operation or the receipt of a signal.
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
The transitions between these states are: new to ready (the process is admitted), ready to running
(the scheduler dispatches the process), running to waiting (the process requests I/O or waits for an
event), waiting to ready (the I/O or event completes), running to ready (the process is preempted by
an interrupt), and running to terminated (the process exits).
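The five states above can be sketched as a small transition checker. This is a minimal illustration: the transition table follows the standard five-state model, and all names are illustrative rather than part of any real OS API.

```python
# Minimal sketch of the five-state process model described above.
# The allowed transitions are the standard ones; the names are illustrative.
ALLOWED = {
    "new": {"ready"},                               # admitted by the scheduler
    "ready": {"running"},                           # dispatched to the CPU
    "running": {"ready", "waiting", "terminated"},  # preempted / I/O wait / exit
    "waiting": {"ready"},                           # I/O or event completed
    "terminated": set(),                            # final state
}

def transition(state, new_state):
    """Return new_state if the move is legal, else raise ValueError."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical lifecycle, following the arrows in the state diagram:
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = transition(s, nxt)
print(s)  # terminated
```

Note that a process can never move directly from waiting to running; it must pass through the ready queue again, which is exactly what the transition table enforces.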

Q.3 Write Notes on:

• Context switching mechanism: A context switch is the computing process of storing and
restoring the state (context) of a CPU such that multiple processes can share a single CPU
resource. The context switch is an essential feature of a multitasking operating system. Context
switches are usually computationally intensive and much of the design of operating systems is to
optimize the use of context switches. A context switch can mean a register context switch, a task
context switch, a thread context switch, or a process context switch. What constitutes the context
is determined by the processor and the operating system. Switching the CPU to another process
requires saving the state of the old process and loading the saved state for the new process. This task
is known as a context switch. Context-switch time is pure overhead, because the system does no
useful work while switching. Its speed varies from machine to machine, depending on the memory
speed, the number of registers that must be copied, and the existence of special instructions.
Typically, the speed ranges from 1 to 1000 microseconds. Context-switch times are highly
dependent on hardware support.
• Inter Process Communication: (IPC) is a set of techniques for the exchange of data among two
or more threads in one or more processes. Processes may be running on one or more computers
connected by a network. IPC techniques are divided into methods for message passing,
synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used
may vary based on the bandwidth and latency of communication between the threads, and the
type of data being communicated.
• Co-operating Processes: The processes executing in the operating system may be either
independent processes or co-operating processes. Co-operating processes must have the means to
communicate with each other. Principally, there exist two complementary communication
schemes: shared memory and message systems. The shared-memory method requires
communicating processes to share some variables; the processes are expected to exchange
information through the use of those shared variables. The message-system method allows the
processes to exchange messages. The responsibility for providing communication then rests with
the operating system itself.
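The message-system scheme above can be sketched with two threads exchanging data through a queue. This is a minimal in-process illustration; between real processes one would use pipes, sockets, or shared memory instead.

```python
# Message-passing sketch: a producer sends messages, a consumer receives
# them, and the queue carries the data between the two threads.
import queue
import threading

def producer(q):
    for item in [1, 2, 3]:
        q.put(item)          # send a message
    q.put(None)              # sentinel: no more messages

def consumer(q, results):
    while True:
        item = q.get()       # receive a message (blocks until one arrives)
        if item is None:
            break
        results.append(item * 10)

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [10, 20, 30]
```

The queue hides all synchronization, which is the main attraction of message passing: neither side touches a shared variable directly, so no explicit locking is needed.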

Q.4 What do you mean by Deadlock? Explain the conditions that results in a deadlock situation.

Ans. Deadlock: A set of processes is in a deadlock state when every process in the set is waiting for
an event that can be caused only by another process in the set. The events with which we are mainly
concerned here are resource acquisition and release. The resources may be either physical
resources or logical resources.

Conditions for occurring deadlock situations:

• Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource.
• Hold and wait: There must exist a process that is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.
• No preemption: Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
• Circular wait: There must exist a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is
waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
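The circular-wait condition can be checked mechanically on a wait-for graph. The sketch below is a simplification: it assumes each process waits on at most one other process, so the graph is a set of chains that either end or loop.

```python
# Sketch: detect the circular-wait condition in a wait-for graph.
# waits_for[p] = the process that p is waiting on (names are illustrative).
def has_cycle(waits_for):
    for start in waits_for:
        seen = set()
        p = start
        while p in waits_for:   # follow the wait chain from this process
            if p in seen:
                return True     # the chain revisits a process: circular wait
            seen.add(p)
            p = waits_for[p]
    return False

# P0 -> P1 -> P2 -> P0: the circular wait described in the text
print(has_cycle({"P0": "P1", "P1": "P2", "P2": "P0"}))  # True
print(has_cycle({"P0": "P1", "P1": "P2"}))              # False
```

A real deadlock detector would walk a full resource-allocation graph, where a process may wait on several resources at once, but the core idea is the same cycle search.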

Q.5 Use First-Come First Serve Algorithm and Shortest Job First Algorithm to schedule the
following process:

Process Burst Time


P1 7
P2 5
P3 8
P4 2
P5 3

Ans. The average waiting time under the First Come First Serve policy, however, is often quite long.
Consider the given set of processes, which arrive at time 0, with the length of the CPU-burst time
given in milliseconds.

If the processes arrive in the order P1, P2, P3, P4, P5 and are served in FCFS order, the Gantt chart is:

P1 (0–7), P2 (7–12), P3 (12–20), P4 (20–22), P5 (22–25)

The waiting time is 0 milliseconds for process P1, 7 milliseconds for process P2, 12 milliseconds for P3,
20 milliseconds for P4 and 22 milliseconds for process P5. Therefore the average waiting time is
(0 + 7 + 12 + 20 + 22)/5 = 12.2 milliseconds.
If the same processes are served in Shortest Job First order, the Gantt chart is:

P4 (0–2), P5 (2–5), P2 (5–10), P1 (10–17), P3 (17–25)

The waiting time is 0 milliseconds for P4, 2 milliseconds for process P5, 5 milliseconds for P2, 10
milliseconds for P1 and 17 milliseconds for process P3. The average waiting time is
(0 + 2 + 5 + 10 + 17)/5 = 6.8 milliseconds. Thus the average waiting time under the FCFS policy is not
minimal and may vary greatly if the process CPU-burst times vary.
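The waiting-time arithmetic above can be checked with a short simulation; the process names and burst times are taken directly from the table in the question.

```python
# Compute per-process waiting times for FCFS and SJF with all arrivals at t=0.
def waiting_times(order, burst):
    """Waiting time of each process when run nonpreemptively in the given order."""
    waits, elapsed = {}, 0
    for p in order:
        waits[p] = elapsed      # the process waits until everything before it finishes
        elapsed += burst[p]
    return waits

burst = {"P1": 7, "P2": 5, "P3": 8, "P4": 2, "P5": 3}

fcfs = waiting_times(["P1", "P2", "P3", "P4", "P5"], burst)
sjf = waiting_times(sorted(burst, key=burst.get), burst)  # shortest burst first

print(sum(fcfs.values()) / 5)   # 12.2 ms average under FCFS
print(sum(sjf.values()) / 5)    # 6.8 ms average under SJF
```

SJF falls out of the same helper by simply sorting the processes by burst length before running them, which is why it minimizes the average waiting time for this workload.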
BT0036 – 02
Marks –30
OPERATING SYSTEMS

No. of Credits: 2 Book ID: BT0036

Q.1 Use the Priority Scheduling Algorithm to schedule the following processes:

Process Burst Time Priority


P1 7 3
P2 5 1
P3 8 3
P4 2 4
P5 3 2
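No worked answer is given in the original, but the schedule can be sketched as follows, assuming a lower priority number means higher priority and that ties are broken in FCFS order. Both are assumptions, since the question does not say.

```python
# Nonpreemptive priority scheduling of the table above.
# Assumption: smaller priority number = higher priority; ties broken FCFS.
procs = [("P1", 7, 3), ("P2", 5, 1), ("P3", 8, 3), ("P4", 2, 4), ("P5", 3, 2)]

elapsed, waits = 0, {}
for name, burst, _prio in sorted(procs, key=lambda p: p[2]):  # stable sort keeps FCFS ties
    waits[name] = elapsed
    elapsed += burst

print(waits)                    # {'P2': 0, 'P5': 5, 'P1': 8, 'P3': 15, 'P4': 23}
print(sum(waits.values()) / 5)  # 10.2 ms average waiting time
```

Under these assumptions the execution order is P2, P5, P1, P3, P4, because Python's sort is stable: P1 and P3 share priority 3 and keep their original arrival order.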

Q.2 Write Resource-Request Algorithm and substantiate it with a process allocation example.

Ans. Round robin (RR) scheduling is more appropriate for a time-shared system. RR scheduling
allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum.
After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at
the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too
large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead
in the form of context-switch time becomes excessive. The FCFS algorithm is nonpreemptive; the RR
algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.
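The RR behaviour described above can be sketched with a small simulation over the five processes from Q.1; the time quantum q = 2 is an illustrative assumption.

```python
# Round-robin sketch: each process runs for at most q time units, then is
# preempted and moved to the tail of the ready queue.
from collections import deque

def round_robin(burst, q):
    """Return the completion time of each process under RR with quantum q."""
    remaining = dict(burst)
    ready = deque(burst)            # ready queue, FCFS initial order
    clock, done = 0, {}
    while ready:
        p = ready.popleft()
        run = min(q, remaining[p])  # run one quantum, or less if nearly done
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            done[p] = clock         # process finished
        else:
            ready.append(p)         # preempted: back to the tail of the queue
    return done

done = round_robin({"P1": 7, "P2": 5, "P3": 8, "P4": 2, "P5": 3}, q=2)
print(done["P4"])  # 8: P4 needs only its first quantum
```

Raising q toward the longest burst makes the schedule collapse into FCFS, which is exactly the degenerate case noted in the text.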

Q.3 What are the two types of fragmentations? Illustrate them with block diagrams.

Ans. As processes are loaded and removed from memory, the free memory is broken into little pieces.
The two types of fragmentation are:

• External fragmentation: External fragmentation exists when enough total memory space exists to
satisfy a request, but it is not contiguous; it is fragmented into a large number of small holes, no
one of which is large enough, by itself, to satisfy the memory request of a process. This
fragmentation problem can be severe.
• Internal fragmentation: The general approach is to allocate memory in fixed-size blocks, so the
allocated memory may be slightly larger than the requested memory. The difference between
these two numbers is internal fragmentation: memory that is internal to a partition, but is not
being used.
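Internal fragmentation can be quantified with a one-line calculation; the 4096-byte block size below is an illustrative assumption, not a value from the text.

```python
# Internal-fragmentation sketch: with fixed-size allocation blocks, the slack
# between what was requested and what was allocated is wasted memory.
BLOCK = 4096  # illustrative block size in bytes

def internal_fragmentation(request):
    """Bytes wasted when `request` bytes are rounded up to whole blocks."""
    blocks = -(-request // BLOCK)     # ceiling division
    return blocks * BLOCK - request

print(internal_fragmentation(4096))   # 0: the request fits exactly
print(internal_fragmentation(4100))   # 4092: the second block is nearly empty
```

The 4100-byte case shows why the problem is called internal: the wasted 4092 bytes sit inside an allocated partition, so no other request can ever use them until the partition is freed.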

Q.4 How do we overcome Fragmentation problems? Write a note on Paging and Segmentation
methods.
Ans. One solution to the problem of external fragmentation is compaction. The goal is to shuffle the
memory contents to place all free memory together in one large block. For example, the memory map of
figure 1(a) can be compacted as shown in figure 1(b). The three holes of size 100k, 300k and 260k can
be compacted into one hole of size 660k. Compaction is not always possible. Notice that, in fig 1(b),
we moved processes P4 and P3. For these processes to be able to execute in their new locations, all
internal addresses must be relocated. If relocation is static and is done at assembly or load time,
compaction cannot be done; compaction is possible only if relocation is dynamic and is done at
execution time.

[Fig 1(a): memory map before compaction: operating system at 0–400k, P5 at 400k–900k, a 100k hole
at 900k–1000k, P4 at 1000k–1700k, a 300k hole at 1700k–2000k, P3 at 2000k–2300k, and a 260k hole
at 2300k–2560k. Fig 1(b): after compaction, the operating system, P5 (400k–900k), P4 (900k–1600k)
and P3 (1600k–1900k) are contiguous, leaving a single 660k hole from 1900k to 2560k.]

Q.5 Use FIFO Page-Replacement and LRU algorithms to allocate memory pages for the following
reference string:
6, 1, 2, 0, 3, 0, 4, 2, 3, 0, 2, 3, 1, 0, 6, 0, 1
(Note: Use three frames)

Ans.
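The original leaves this answer blank; the fault counts can be produced with a minimal simulation of both policies. The OrderedDict recency tracking below is an implementation convenience standing in for hardware reference information.

```python
# Count page faults for the Q.5 reference string with three frames.
from collections import OrderedDict, deque

REFS = [6, 1, 2, 0, 3, 0, 4, 2, 3, 0, 2, 3, 1, 0, 6, 0, 1]
FRAMES = 3

def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()            # evict the oldest-loaded page
            mem.append(page)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0       # keys ordered least- to most-recently used
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[page] = True
    return faults

print(fifo_faults(REFS, FRAMES))  # 12 faults under FIFO
print(lru_faults(REFS, FRAMES))   # 12 faults under LRU
```

For this particular reference string both policies happen to fault 12 times with three frames; the first five references are compulsory misses, and the two policies then diverge in which pages they evict even though the totals coincide.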
