PART B- (5 X 16 = 80 Marks)
11. (a) (i) Explain how hardware protection can be achieved and discuss in detail the dual mode of
operations. (8 Marks)
(ii) Explain in detail any two operating system structures. (8 Marks)
(Or)
(b) What is meant by a process? Explain states of process with neat sketch and discuss the process state
transition with a neat diagram. (16 Marks)
12. (a) What is a critical section? Give examples. What are the minimum requirements that should be
satisfied by a solution to critical section problem? Write Peterson Algorithm for 2-process
synchronization to critical section problem and discuss briefly. (16 Marks)
(Or)
(b) Consider the following system snapshot. Using the banker's algorithm, determine whether the current state is safe and whether a request by P1 for (0, 4, 2, 0) can be granted immediately. (16 Marks)

         Allocation      Maximum        Available
         a  b  c  d      a  b  c  d     a  b  c  d
    P0   0  0  1  2      0  0  1  2     1  5  2  0
    P1   1  0  0  0      1  7  5  0
    P2   1  3  5  4      2  3  5  6
    P3   0  6  3  2      0  6  5  2
    P4   0  0  1  4      0  6  5  6
13. (a) Given memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in order), show with a neat
sketch how each of the first-fit, best-fit and worst-fit algorithms would place processes of 212 KB, 417
KB, 112 KB and 426 KB (in order). Which algorithm is most efficient in memory allocation? (16 Marks)
(Or)
(b) Explain the concept of demand paging. How can demand paging be implemented with virtual
memory? (16 Marks)
14. (a) (i) Explain various file allocation methods in detail. (8 Marks)
(ii) What are the possible structures for directory? Discuss them in detail. (8 Marks)
(Or)
(b) Explain in detail the free space management with neat diagram. (16 Marks)
15. (a) Explain in detail various disk scheduling algorithms with suitable example. (16 Marks)
(Or)
(b) Write short notes on the following :
(i) I/O Hardware (8 Marks)
(ii) RAID structure. (8 Marks)
4. What is a deadlock?
A process requests resources; if the resources are not available at that time, the process
enters a wait state. Waiting processes may never again change state, because the resources they
have requested are held by other waiting processes. This situation is called a deadlock.
If a process tries to access a page that was not brought into memory, a page fault occurs.
File access methods: 1. Sequential file access 2. Random file access 3. Indexed sequential file access.
9. What is rotational latency?
Rotational latency: the amount of time for the desired sector of a disk to rotate under the read/write
heads of the disk drive.
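The definition above gives an easy back-of-the-envelope calculation: on average the desired sector is half a rotation away, so average rotational latency depends only on the spindle speed. A small sketch (the function name and the 7200 RPM figure are illustrative, not from the text):

```python
def avg_rotational_latency_ms(rpm):
    """On average the desired sector is half a rotation away,
    so average latency = half the time of one full rotation."""
    seconds_per_rotation = 60.0 / rpm
    return (seconds_per_rotation / 2.0) * 1000.0

# A common 7200 RPM drive: one rotation takes 60/7200 s = 8.33 ms,
# so the average rotational latency is about half that.
print(round(avg_rotational_latency_ms(7200), 2))  # 4.17
```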
11. (a) (i) Explain how hardware protection can be achieved and discuss in detail the dual mode of
operations. (8 Marks)
(ii) Explain in detail any two operating system structures. (8 Marks)
Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly. The hardware provides support to differentiate between at least two modes of operation:
1. User mode – execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) – execution done on behalf of the operating system.
I/O Protection
All I/O instructions are privileged instructions. We must ensure that a user program can never gain control of the computer in monitor mode (e.g., a user program that, as part of its execution, stores a new address in the interrupt vector).
Memory Protection
Must provide memory protection at least for the interrupt vector and the interrupt service routines. In order to have memory protection, add two registers that determine the range of legal addresses a program may access:
base register – holds the smallest legal physical memory address.
limit register – contains the size of the range.
Memory outside the defined range is protected.
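The base/limit check described above can be sketched in a few lines; the register values 300040 and 120900 below are illustrative examples, not from the text:

```python
def legal(address, base, limit):
    """An access is legal iff base <= address < base + limit;
    anything outside this range traps to the operating system."""
    return base <= address < base + limit

base, limit = 300040, 120900
print(legal(300040, base, limit))   # True  (first legal address)
print(legal(420940, base, limit))   # False (base + limit is already out of range)
```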
Operating System Components
Process Management
The operating system manages many kinds of activities, ranging from user programs to system programs such as the printer spooler, name servers, file servers, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use, etc.).
It is important to note that a process is not a program. A process is only one instance of a program in execution; many processes can be running the same program. The five major activities of an operating system in regard to process management are the creation and deletion of user and system processes, the suspension and resumption of processes, and the provision of mechanisms for process synchronization, process communication, and deadlock handling.
Main-Memory Management
Primary memory or main memory is a large array of words or bytes. Each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU; that is to say, for a program to be executed, it must be in main memory. The major activities of the operating system in regard to memory management are:
Keep track of which parts of memory are currently being used and by whom.
Decide which processes are loaded into memory when memory space becomes available.
Allocate and deallocate memory space as needed.
(b) What is meant by a process? Explain states of process with neat sketch and
discuss the process state transition with a neat diagram. (16 Marks)
Definition
The term "process" was first used by the designers of MULTICS in the 1960s. Since then, the term process has been used somewhat interchangeably with 'task' or 'job'. The process has been given many definitions, for instance:
A program in execution.
An asynchronous activity.
The 'animated spirit' of a procedure in execution.
The entity to which processors are assigned.
The 'dispatchable' unit.
A process may be in one of five states: new, ready, running, waiting (blocked), and terminated. Following are six (6) possible transitions among the above-mentioned five (5) states.
FIGURE: process state transition diagram
Transition 1 occurs when a process discovers that it cannot continue. If a running process initiates an I/O operation before its allotted time expires, it voluntarily relinquishes the CPU.
The process control block (PCB) records, among other things:
The current state of the process, i.e., whether it is ready, running, waiting, or whatever.
Unique identification of the process in order to track "which is which" information.
A pointer to the parent process.
Similarly, a pointer to the child process (if it exists).
The priority of the process (a part of CPU scheduling information).
Pointers to locate the memory of the process.
A register save area.
The processor it is running on.
The PCB is a central store that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.
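The PCB fields listed above can be collected into a simple record type; this is only an illustrative sketch (the field names are mine, not from any real kernel):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative process control block: one record per process."""
    pid: int                                        # unique identification
    state: str = "new"                              # new / ready / running / waiting / terminated
    parent_pid: Optional[int] = None                # pointer to parent process
    child_pids: list = field(default_factory=list)  # pointers to child processes
    priority: int = 0                               # CPU-scheduling information
    memory_base: int = 0                            # pointers to locate process memory
    memory_limit: int = 0
    registers: dict = field(default_factory=dict)   # register save area
    cpu: Optional[int] = None                       # processor it is running on

p = PCB(pid=42)
p.state = "ready"
print(p.pid, p.state)  # 42 ready
```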
12. (a) What is a critical section? Give examples. What are the minimum requirements that should be satisfied by a solution to the critical section problem? Write Peterson's algorithm for 2-process synchronization to the critical section problem and discuss briefly. (16 Marks)
Mutual exclusion with busy waiting
The operating system can choose not to preempt itself. That is, no preemption for system
processes (if the OS is client server) or for processes running in system mode (if the OS is self
service). Forbidding preemption for system processes would prevent the problem above where
x<--x+1 not being atomic crashed the printer spooler if the spooler is part of the OS.
Does not work for user-mode programs. So the Unix printer spooler would not be helped.
Does not prevent conflicts between the main line OS and interrupt handlers.
o This conflict could be prevented by disabling interrupts while the main line is in its
critical section.
o Indeed, disabling (a.k.a. blocking) interrupts is often done for exactly this reason.
o Do not want to block interrupts for too long or the system will seem unresponsive.
Initially P1wants = P2wants = false

Code for P1:                        Code for P2:

loop forever {                      loop forever {
    P1wants <-- true     ENTRY          P2wants <-- true
    while (P2wants) {}   ENTRY          while (P1wants) {}
    critical-section                    critical-section
    P1wants <-- false    EXIT           P2wants <-- false
    non-critical-section }              non-critical-section }

But this is wrong: both processes can set their want flag and then spin in the while loop forever.
Let's try again. The trouble was that setting want before the loop permitted us to get stuck. We
had them in the wrong order!
Initially P1wants = P2wants = false

Code for P1:                        Code for P2:

loop forever {                      loop forever {
    while (P2wants) {}   ENTRY          while (P1wants) {}
    P1wants <-- true     ENTRY          P2wants <-- true
    critical-section                    critical-section
    P1wants <-- false    EXIT           P2wants <-- false
    non-critical-section }              non-critical-section }

But this is also wrong: both processes can pass the while test before either sets its flag, so mutual exclusion is violated.
So let's be polite and really take turns. None of this wanting stuff.
Initially turn = 1

Code for P1:                        Code for P2:

loop forever {                      loop forever {
    while (turn = 2) {}  ENTRY          while (turn = 1) {}
    critical-section                    critical-section
    turn <-- 2           EXIT           turn <-- 1
    non-critical-section }              non-critical-section }
This one forces alternation, so is not general enough. Specifically, it does not satisfy condition
three, which requires that no process in its non-critical section can stop another process from
entering its critical section. With alternation, if one process is in its non-critical section (NCS)
then the other can enter the CS once but not again.
In fact, it took years (way back when) to find a correct solution. Many earlier ``solutions'' were
found and several were published, but all were wrong. The first true solution was found by
Dekker. It is very clever, but I am skipping it (I cover it when I teach G22.2251). Subsequently,
algorithms with better fairness properties were found (e.g., no task has to wait for another task to
enter the CS twice).
What follows is Peterson's solution. When it was published, it was a surprise to see such a simple
solution. In fact, Peterson gave a solution for any number of processes. A proof that the
algorithm for any number of processes satisfies our properties (including a strong fairness
condition) can be found in Operating Systems Review Jan 1990, pp. 18-22.
Initially P1wants = P2wants = false and turn = 1

Code for P1:                            Code for P2:

loop forever {                          loop forever {
    P1wants <-- true                        P2wants <-- true
    turn <-- 2                              turn <-- 1
    while (P2wants and turn = 2) {}         while (P1wants and turn = 1) {}
    critical-section                        critical-section
    P1wants <-- false                       P2wants <-- false
    non-critical-section }                  non-critical-section }
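Peterson's algorithm can be exercised directly with two Python threads. The sketch below is my translation of the pseudocode (the names flag, turn, worker, counter are mine), and it leans on CPython's global interpreter lock to approximate the sequentially consistent memory the algorithm assumes:

```python
import threading

flag = [False, False]   # flag[i] plays the role of Piwants
turn = 0
counter = 0             # shared variable the critical section protects

def worker(i, iters):
    global turn, counter
    other = 1 - i
    for _ in range(iters):
        flag[i] = True              # ENTRY: announce intent
        turn = other                # ENTRY: defer to the other process
        while flag[other] and turn == other:
            pass                    # busy-wait
        counter += 1                # critical section
        flag[i] = False             # EXIT
        # non-critical section

threads = [threading.Thread(target=worker, args=(i, 5000)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 10000: no increments were lost
```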
TAS(b), where b is a binary variable, ATOMICALLY sets b<--true and returns the OLD value of b.
Of course it would be silly to return the new value of b since we know the new value is true.
Initially s = false

loop forever {
    while (TAS(s)) {}    ENTRY
    CS
    s <-- false          EXIT
    NCS }
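A spin lock built on TAS can be sketched as follows; since Python has no test-and-set instruction, the atomicity of TAS is emulated here with a small helper lock (the SpinLock class and its names are illustrative):

```python
import threading

class SpinLock:
    def __init__(self):
        self._flag = False               # the variable s in the pseudocode
        self._atomic = threading.Lock()  # emulates the atomicity of the TAS instruction

    def _tas(self):
        """Atomically set the flag to True and return its OLD value."""
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._tas():               # ENTRY: spin until TAS returns False
            pass

    def release(self):
        self._flag = False               # EXIT

counter = 0
lock = SpinLock()

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1                     # critical section
        lock.release()

threads = [threading.Thread(target=bump, args=(5000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 10000
```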
Note: We will only do busy waiting, which is easier. Some authors use the term semaphore only
for blocking solutions and would call our solutions spin locks.
End of Note.
The entry code is often called P and the exit code V (Tanenbaum only uses P and V for blocking,
but we use it for busy waiting). So the critical section problem is to write P and V so that
loop forever
    P
    critical-section
    V
    non-critical-section
satisfies
1. Mutual exclusion.
2. No speed assumptions.
3. No blocking by processes in their non-critical section.
4. Forward progress (a process wishing to enter its critical section will eventually do so).
NEED = MAX − ALLOCATION

        a  b  c  d
   P0   0  0  0  0
   P1   0  7  5  0
   P2   1  0  0  2
   P3   0  0  2  0
   P4   0  6  4  2

The state of the system is the current allocation of resources to processes, given by the allocation matrix together with the available vector. A safe state is one in which there is at least one sequence that does not result in deadlock, i.e., all processes can run to completion. Let us analyze the various states:
Initial state:

        Allocation    Need          Available: 1 5 2 0
        a b c d       a b c d       Resource vector: 3 14 12 12
   P0   0 0 1 2       0 0 0 0
   P1   1 0 0 0       0 7 5 0
   P2   1 3 5 4       1 0 0 2
   P3   0 6 3 2       0 0 2 0
   P4   0 0 1 4       0 6 4 2

P0's need (0,0,0,0) can be met, so P0 runs to completion and the available vector becomes (1,5,2,0)+(0,0,1,2) = (1,5,3,2). P2's need (1,0,0,2) can now be met, so P2 runs to completion, and so on: every process can finish, so the initial state is safe.

Now P1 requests (0,4,2,0). If the resources are allocated immediately, the state will be as given below.

        Allocation    Need          Available: 1 1 0 0
   P0   0 0 1 2       0 0 0 0
   P1   1 4 2 0       0 3 3 0
   P2   1 3 5 4       1 0 0 2
   P3   0 6 3 2       0 0 2 0
   P4   0 0 1 4       0 6 4 2

P0 exits: the available vector becomes (1,1,0,0)+(0,0,1,2) = (1,1,1,2).
P2 exits: the available vector becomes (1,1,1,2)+(1,3,5,4) = (2,4,6,6).
P3 exits: the available vector becomes (2,4,6,6)+(0,6,3,2) = (2,10,9,8).
P4 and then P1 can also finish, so the resulting state is still safe and the request can be granted.
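The safety analysis above can be checked mechanically. The sketch below implements the standard banker's-algorithm safety test (the function names are mine) and applies it to the numbers of this problem, both before and after granting P1's request of (0, 4, 2, 0):

```python
def is_safe(avail, alloc, maximum):
    """Return (safe?, completion order) for the given state."""
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(maximum, alloc)]
    work, done, order = list(avail), [False] * len(alloc), []
    progress = True
    while progress:
        progress = False
        for i in range(len(alloc)):
            if not done[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]  # process i finishes
                done[i] = True
                order.append(i)
                progress = True
    return all(done), order

alloc = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
maxm  = [[0,0,1,2], [1,7,5,0], [2,3,5,6], [0,6,5,2], [0,6,5,6]]
avail = [1,5,2,0]

print(is_safe(avail, alloc, maxm))        # (True, [0, 2, 3, 4, 1])

# P1 requests (0,4,2,0): grant tentatively, then re-check safety.
req = [0,4,2,0]
avail2 = [a - r for a, r in zip(avail, req)]
alloc2 = [row[:] for row in alloc]
alloc2[1] = [a + r for a, r in zip(alloc[1], req)]
print(is_safe(avail2, alloc2, maxm)[0])   # True: the request can be granted
```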
13. (a) Given memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in order), show with a neat sketch how each of the first-fit, best-fit and worst-fit algorithms would place processes of 212 KB, 417 KB, 112 KB and 426 KB (in order). Which algorithm is most efficient in memory allocation? (16 Marks)
First fit:
- 212K is put in the 500K partition (leaving a 288K hole: 288K = 500K − 212K)
- 417K is put in the 600K partition
- 112K is put in the 288K hole
- 426K must wait.
Best fit:
- 212K is put in the 300K partition
- 417K is put in the 500K partition
- 112K is put in the 200K partition
- 426K is put in the 600K partition.
Worst fit:
- 212K is put in the 600K partition (leaving a 388K hole)
- 417K is put in the 500K partition
- 112K is put in the 388K hole
- 426K must wait.
In this example, best fit turns out to be the best, since it is the only algorithm that places all four processes.
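The three placements above can be verified with a small simulator. This sketch (the names allocate and strategy are mine) treats each partition as a hole that shrinks as processes are placed, and returns the chosen partition index for each process (None = must wait):

```python
def allocate(partitions, processes, strategy):
    """Place each process into a hole using first/best/worst fit."""
    holes = list(partitions)
    placements = []
    for size in processes:
        fits = [i for i, h in enumerate(holes) if h >= size]
        if not fits:
            placements.append(None)          # process must wait
            continue
        if strategy == "first":
            i = fits[0]                      # first hole big enough
        elif strategy == "best":
            i = min(fits, key=lambda i: holes[i])   # smallest adequate hole
        else:                                # "worst"
            i = max(fits, key=lambda i: holes[i])   # largest hole
        holes[i] -= size                     # hole shrinks in place
        placements.append(i)
    return placements

parts = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
print(allocate(parts, procs, "first"))  # [1, 4, 1, None]
print(allocate(parts, procs, "best"))   # [3, 1, 2, 4]
print(allocate(parts, procs, "worst"))  # [4, 1, 4, None]
```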
(b) Explain the concept of demand paging. How can demand paging be implemented with virtual memory? (16 Marks)
Virtual Memory
major advantage: programs larger than main memory can run (Fig. 9.1)
allows processes to share files easily and implement shared memory (Fig. 9.3)
motivation
o in cases where entire program is needed, it is generally not needed all at the same time
benefits:
o less I/O required to load or swap programs into memory → each program runs faster
o more pages required only if the heap or stack grow (Fig. 9.2)
Demand Paging
same idea as pure paging, but now you only load a page on demand
this gives a higher degree of multiprogramming with only a little more overhead
use a lazy swapper (now called a pager) (Fig. 9.4)
in demand paging, a page fault means an additional page must be brought into memory and the
instruction re-started
pure demand paging: bring no page(s) in initially; every process will page fault at least once
effective access time (Fig. 9.6): EAT = (1 − p) × ma + p × page-fault time, where p is the page-fault rate;
with memory-access time ma = 200 ns and page-fault service time pft = 8 ms,
EAT = (1 − p) × 200 + p × 8,000,000 ns
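With ma = 200 ns and a page-fault service time of 8 ms, the effective access time is a weighted average of the two; a quick check (the function name is mine):

```python
def effective_access_time_ns(p, ma_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * ma_ns + p * fault_ns

print(effective_access_time_ns(0.0))    # 200.0 ns: no faults
print(effective_access_time_ns(0.001))  # about 8200 ns: one fault per 1000 accesses slows memory ~40x
```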
Copy-on-Write
copy-on-write: both processes (parent and child) share all pages initially, and a shared page is
only copied if and when either process writes to a shared page
not all shared pages need to be set to copy-on-write (e.g., pages containing the executable code)
o vfork (virtual memory fork): the parent is suspended and the child shares the parent's address space; vfork is intended to be used when the child calls exec immediately, because no copying of pages takes place
INDEXED ALLOCATION
In indexed allocation, each file has an index block that lists the disk blocks of the file. Indirection blocks are introduced each time the total number of blocks "overflows" the previous index allocation. Typically, the indices are stored neither with the file-allocation table nor with the file, and are retained in memory when the file is opened.
Directory structure:
Single-Level Directory: a single directory for all users. Drawbacks:
- Naming problem
- Grouping problem
Two-Level Directory: a separate directory for each user (used in early systems)
- Path names
- Different users can have the same file name
- Efficient searching
- No grouping capability
Tree-Structured Directories:
- Efficient searching
- Grouping capability
- Current directory (working directory), e.g.
    cd /spell/mail/prog
    type list
- Absolute or relative path names
- Creating a new file is done in the current directory
- Delete a file: rm <file-name>
- Creating a new subdirectory is done in the current directory: mkdir <dir-name>
  Example: if the current directory is /mail, then mkdir count gives:
      mail
      prog  copy  prt  exp  count
- Deleting "mail" ⇒ deleting the entire subtree rooted by "mail"
Acyclic-Graph Directories
- Have shared subdirectories and files
- Hard links: two different names for the same file (aliasing)
- If dict deletes list ⇒ dangling pointer. Solutions:
  - Backpointers, so we can delete all pointers (variable-size records are a problem)
  - Backpointers using a daisy-chain organization
  - Entry-hold-count solution
- New directory entry type:
  - Link – another name (pointer) to an existing file
  - Resolve the link – follow the pointer to locate the file
(b) Explain in detail the free space management with neat diagram. (16 Marks)
PAGE CACHE:
A page cache caches pages rather than disk blocks, using virtual-memory techniques.
Memory-mapped I/O uses a page cache.
Routine I/O through the file system uses the buffer (disk) cache.
15. (a) Explain in detail various disk scheduling algorithms with suitable example. (16 Marks)
FCFS
Illustration shows total head movement of 640 cylinders
SSTF
- Selects the request with the minimum seek time from the current head position
- SSTF scheduling is a form of SJF scheduling; may cause starvation of some requests
- Illustration shows total head movement of 236 cylinders
SCAN
- The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.
- A direction bit specifies the moving direction
- The SCAN algorithm is sometimes called the elevator algorithm
- Illustration shows total head movement of 208 cylinders
- A variant of SCAN is LOOK: return at the final request, not at the disk boundary
C-SCAN
- Provides a more uniform wait time than SCAN
- The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip
- Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
C-LOOK
- Version of C-SCAN
- Arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk
Selecting a Disk-Scheduling Algorithm:
- SSTF is common and has a natural appeal
- SCAN and C-SCAN perform better for systems that place a heavy load on the disk. Used on file servers.
- Performance depends on the number and types of requests
- Requests for disk service can be influenced by the file-allocation method
- The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary
- Either SSTF or LOOK is a reasonable choice for the default algorithm
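The head-movement totals quoted above (640 for FCFS, 236 for SSTF) correspond to the classic textbook request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53; a short script can confirm them (the function names are mine):

```python
def fcfs(head, requests):
    """Service requests strictly in arrival order."""
    total, pos = 0, head
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf(head, requests):
    """Always service the pending request closest to the head."""
    pending, total, pos = list(requests), 0, head
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))  # 640
print(sstf(53, queue))  # 236
```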