
OPERATING SYSTEM

• What are the basic functions of an operating system? - The operating system controls and coordinates
the use of the hardware among the various application programs for the various users. It acts as a
resource allocator and manager: since there are many, possibly conflicting, requests for resources, the
operating system must decide which requests are granted resources so that the computer system operates
efficiently and fairly. The operating system is also a control program that controls user programs to
prevent errors and improper use of the computer. It is especially concerned with the operation and
control of I/O devices.
• Why is paging used? - Paging is a solution to the external fragmentation problem: it permits the
logical address space of a process to be noncontiguous, thus allowing a process to be allocated
physical memory wherever the latter is available.
• While running DOS on a PC, which command would be used to duplicate the entire diskette? -
diskcopy.
• What resources are used when a thread is created? How do they differ from those used when a process
is created? - When a thread is created it does not require many new resources; it shares the resources,
such as memory, of the process to which it belongs. The benefit of this sharing is that it allows an
application to have several different threads of activity all within the same address space. Process
creation, by contrast, is heavyweight because it always requires a new address space to be created, and
even if processes share memory, inter-process communication is expensive compared to communication
between threads (a sketch contrasting the two follows).
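A minimal sketch of this difference, assuming a POSIX system (names are illustrative; compile with gcc -pthread): a thread created with pthread_create runs in the creator's address space, so its write to a global variable is visible to the creator, whereas fork() gives the child its own copy of that address space.

/* Thread vs. process creation: shared vs. separate address space. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;            /* lives in the process's data segment */

static void *thread_body(void *arg)
{
    shared = 42;                  /* visible to the creating thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread_body, NULL);
    pthread_join(tid, NULL);
    printf("after thread: shared = %d\n", shared);   /* prints 42 */

    pid_t pid = fork();           /* child gets its own address space */
    if (pid == 0) {
        shared = 7;               /* modifies only the child's copy */
        exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   shared = %d\n", shared);   /* still 42 in the parent */
    return 0;
}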
• What is virtual memory? - Virtual memory is a technique by which the system appears to have more
memory than it actually does. This is done by time-sharing the physical memory and storing parts of
memory on disk when they are not actively being used.
• What is Throughput, Turnaround time, waiting time and Response time? - Throughput - number
of processes that complete their execution per time unit. Turnaround time - amount of time to execute
a particular process. Waiting time - amount of time a process has been waiting in the ready queue.
Response time - amount of time it takes from when a request was submitted until the first response is
produced, not output (for time-sharing environment).
• What is the state of the processor, when a process is waiting for some event to occur? - Waiting
state
• What is the important aspect of a real-time system or mission-critical system? - A real-time
operating system has well-defined, fixed time constraints. Processing must be done within the defined
constraints or the system will fail. An example is the operating system for the flight control computer
of an advanced jet airplane. Such systems are often used as control devices in dedicated applications
such as controlling scientific experiments, medical imaging systems, industrial control systems, and
some display systems. Real-time systems may be either hard or soft real-time. Hard real-time: secondary
storage is limited or absent and data is stored in short-term memory or read-only memory (ROM); this
conflicts with time-sharing and is not supported by general-purpose operating systems. Soft real-time:
of limited utility in industrial control and robotics, but useful in applications (multimedia, virtual
reality) requiring advanced operating-system features.
• What is the difference between Hard and Soft real-time systems? - A hard real-time system
guarantees that critical tasks complete on time. This goal requires that all delays in the system be
bounded, from the retrieval of stored data to the time it takes the operating system to finish any
request made of it. A soft real-time system is one where a critical real-time task gets priority over other
tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to
be bounded.
• What is the cause of thrashing? How does the system detect thrashing? Once it detects
thrashing, what can the system do to eliminate this problem? - Thrashing is caused by under
allocation of the minimum number of pages required by a process, forcing it to continuously page
fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the
level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.
• What is multitasking, multiprogramming, and multithreading? - Multiprogramming: Multiprogramming is
the technique of running several programs at a time by keeping several jobs in memory simultaneously; it
allows a computer to do several things at the same time and creates logical parallelism. The operating
system selects a job from the job pool and starts executing it; when that job needs to wait for an I/O
operation, the CPU is switched to another job, so the main idea is that the CPU is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept is quite similar,
but the difference is that switching between jobs occurs so frequently that users can interact with each
program while it is running. This is also known as time-sharing. A time-shared operating system uses CPU
scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
Multithreading: An application is typically implemented as a separate process with several threads of
control. In some situations a single application may be required to perform several similar tasks; for
example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web
server may have several clients concurrently accessing it. If the web server ran as a traditional
single-threaded process, it would be able to service only one client at a time, and the time a client
might have to wait for its request to be serviced could be enormous. So it is more efficient to have one
process that contains multiple threads: rather than creating another process whenever a request is made,
the multithreaded server creates another thread to service the request (see the sketch below).
Multithreading is used to gain responsiveness, resource sharing, economy, and utilization of
multiprocessor architectures.
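A hypothetical thread-per-request sketch using POSIX threads: a real web server would accept() network connections, but here requests are simulated with integers so the example stays self-contained (compile with gcc -pthread).

/* Thread-per-request: create a thread instead of a process for each request. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *service_request(void *arg)
{
    int request_id = *(int *)arg;
    free(arg);
    printf("servicing request %d in thread %lu\n",
           request_id, (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t workers[5];
    for (int i = 0; i < 5; i++) {
        int *id = malloc(sizeof *id);      /* each thread gets its own copy */
        *id = i;
        pthread_create(&workers[i], NULL, service_request, id);
    }
    for (int i = 0; i < 5; i++)
        pthread_join(workers[i], NULL);
    return 0;
}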
• What is a hard disk and what is its purpose? - A hard disk is a secondary storage device that holds
data in bulk on the magnetic medium of the disk. Hard disks have hard platters that hold the magnetic
medium; the magnetic medium can be easily erased and rewritten, and a typical desktop machine will have
a hard disk with a capacity of between 10 and 40 gigabytes. Data is stored on the disk in the form of
files.
• What is fragmentation? Different types of fragmentation? - Fragmentation occurs in a dynamic
memory allocation system when many of the free blocks are too small to satisfy any request.
External Fragmentation: External Fragmentation happens when a dynamic memory allocation
algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too
much external fragmentation occurs, the amount of usable memory is drastically reduced. Total
memory space exists to satisfy a request, but it is not contiguous. Internal Fragmentation: Internal
fragmentation is the space wasted inside of allocated memory blocks because of restriction on the
allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory;
this size difference is memory internal to a partition, but it is not being used.
• What is DRAM? In which form does it store data? - DRAM is not the best, but it’s cheap, does
the job, and is available almost everywhere you look. DRAM data resides in a cell made of a
capacitor and a transistor. The capacitor tends to lose data unless it’s recharged every couple of
milliseconds, and this recharging tends to slow down the performance of DRAM compared to
speedier RAM types.
• What is Dispatcher? - Dispatcher module gives control of the CPU to the process selected by the
short-term scheduler; this involves: Switching context, Switching to user mode, Jumping to the
proper location in the user program to restart that program, dispatch latency – time it takes
for the dispatcher to stop one process and start another running.
• What is CPU Scheduler? - Selects from among the processes in memory that are ready to execute,
and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
1.Switches from running to waiting state. 2.Switches from running to ready state. 3.Switches from
waiting to ready. 4.Terminates. Scheduling under 1 and 4 is non-preemptive. All other scheduling is
preemptive.
• What is Context Switch? - Switching the CPU to another process requires saving the state of the old
process and loading the saved state for the new process. This task is known as a context switch.
Context-switch time is pure overhead, because the system does no useful work while switching. Its
speed varies from machine to machine, depending on the memory speed, the number of registers
that must be copied, and the existence of special instructions (such as a single instruction to load or
store all registers).
• What is cache memory? - Cache memory is random access memory (RAM) that a computer
microprocessor can access more quickly than it can access regular RAM. As the microprocessor
processes data, it looks first in the cache memory and if it finds the data there (from a previous
reading of data), it does not have to do the more time-consuming reading of data from larger
memory.
• What is a Safe State and what is its use in deadlock avoidance? - When a process requests an
available resource, system must decide if immediate allocation leaves the system in a safe state.
System is in safe state if there exists a safe sequence of all processes. Deadlock Avoidance: ensure
that a system will never enter an unsafe state.
• What is a Real-Time System? - A real-time process is a process that must respond to events within a
certain time period. A real-time operating system is an operating system that can run real-time
processes successfully.
***********************************************************************************
• What is a MUTEX? - A mutex is a program object that allows multiple program threads to share the
same resource, such as file access, but not simultaneously. When a program is started, a mutex is
created with a unique name. After this stage, any thread that needs the resource must lock the mutex
from other threads while it is using the resource; the mutex is set to unlock when the data is no longer
needed or the routine is finished (a sketch with POSIX threads follows).
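A minimal sketch of this lock/unlock pattern with POSIX threads (the shared counter and names are illustrative): the mutex serializes access to the shared data so that no increments are lost.

/* Two threads increment a shared counter; the mutex makes the updates safe. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* lock before using the resource */
        counter++;
        pthread_mutex_unlock(&counter_lock); /* unlock when done */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* always 200000 with the mutex */
    return 0;
}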
• What is the difference between a ‘thread’ and a ‘process’? - A process is a collection of virtual
memory space, code, data, and system resources. A thread is code that is to be serially executed
within a process. A processor executes threads, not processes, so each application has at least one
process, and a process always has at least one thread of execution, known as the primary thread. A
process can have multiple threads in addition to the primary thread. Prior to the introduction of
multiple threads of execution, applications were all designed to run on a single thread of execution.
When a thread begins to execute, it continues until it is killed or until it is interrupted by a
thread with higher priority (by a user action or the kernel’s thread scheduler). Each
thread can run separate sections of code, or multiple threads can execute the same section of
code. Threads executing the same block of code maintain separate stacks. Each thread in a
process shares that process’s global variables and resources.
• What is an INODE? - Inodes are data structures that contain information about the files that are
created when Unix file systems are created. Each file has an i-node and is identified by an inode number
(i-number) in the file system where it resides. The inode provides important information on files such
as group ownership and access mode (read, write, execute permissions).
• Marshalling - The process of gathering data and transforming it into a standard format before it is
transmitted over a network so that the data can transcend network boundaries. In order for an object
to be moved around a network, it must be converted into a data stream that corresponds with the
packet structure of the network transfer protocol. This conversion is known as data marshalling. Data
pieces are collected in a message buffer before they are marshaled. When the data is transmitted, the
receiving computer converts the marshaled data back into an object.
• What is Mutex Object? - A mutex object is a synchronization object whose state is set to signaled
when it is not owned by any thread, and non-signaled when it is owned. For example, to prevent two
threads from writing to shared memory at the same time, each thread waits for ownership of a mutex
object before executing the code that accesses the memory. After writing to the shared memory, the
thread releases the mutex object.
• What is semaphore? - A semaphore object is a synchronization object that maintains a count
between zero and a specified maximum value. The count is decremented each time a thread
completes a wait for the semaphore object and incremented each time a thread releases the
semaphore. When the count reaches zero, no more threads can successfully wait for the semaphore
object state to become signaled. The state of a semaphore is set to signaled when its count is greater
than zero, and non-signaled when its count is zero. The semaphore object is useful in controlling a
shared resource that can support a limited number of users. It acts as a gate that limits the number of
threads sharing the resource to a specified maximum number.
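An illustrative sketch using POSIX unnamed semaphores (the names and the limit of three are assumptions, not from the text): the count acts as the gate described above, letting at most three threads use the resource at a time (compile with gcc -pthread).

/* A counting semaphore initialized to 3 limits concurrent users of a resource. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t slots;                     /* counts free slots of the resource */

static void *user(void *arg)
{
    long id = (long)arg;
    sem_wait(&slots);                   /* decrement; blocks while count is 0 */
    printf("thread %ld using the resource\n", id);
    sleep(1);                           /* pretend to do some work */
    sem_post(&slots);                   /* increment; wakes a waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t t[10];
    sem_init(&slots, 0, 3);             /* at most 3 concurrent users */
    for (long i = 0; i < 10; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 10; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}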
• What is the difference between a MUTEX and a binary semaphore? - A binary semaphore and a mutex are
essentially the same mechanism. Semaphores come in different types, such as counting semaphores and
binary semaphores; put simply, a mutex is a semaphore with a count of 1, which means synchronization
grants processes/threads access to the resource one at a time.
The difference between an ordinary binary semaphore and a mutex semaphore is in the way the semaphore is
initialized.
For an ordinary binary semaphore, a task attempting to synchronize to an external event creates an empty
semaphore. When it reaches the point to be synchronized, it attempts to take the semaphore; if the
semaphore is unavailable, it waits at this point. A second task, which controls the synchronization
event, gives the semaphore when it is no longer needed. The first task receives notification that the
semaphore is available and proceeds.
For a mutex semaphore, a task wishing to block other tasks from a resource first creates a full
semaphore and then takes the semaphore, making it unavailable to other tasks. When the resource is no
longer needed, the task gives the semaphore, making the resource available. A mutual exclusion semaphore
must have matching “takes” and “gives”.
• A process may be in any one of the following states -
1.NEW
2.READY
3.WAIT
4.RUNNING
5.TERMINATE

Difference between a process and a program -
− A program is a set of instructions that are to perform a designated task, whereas a process is an
operation that takes the given instructions and performs the manipulations as per the code; this is
called ‘execution of instructions’. A process is entirely dependent on a ‘program’.
− A process is a module that executes concurrently with other modules and is a separately loadable
module, whereas a program performs tasks that relate directly to a user's work, such as word processing
or running presentation software.
What is the meaning of physical memory and virtual memory?
Physical memory is the only memory that is directly accessible to the CPU. The CPU reads the
instructions stored in physical memory and executes them continuously; the data being operated on is
also stored in physical memory in a uniform manner.
Virtual memory is a classification of memory created by using the hard disk to simulate additional RAM,
extending the addressable space available to the user. Virtual addresses are mapped into real
addresses.
What is the difference between socket and pipe?
Sockets:
Sockets are part of the OSI layer model; communication among different layers is performed through
sockets. The application layer communicates with the presentation layer and the layers above it through
sockets.
Sockets are used in Secure Socket Layer (SSL) networks.
Communication is bi-directional in sockets.
Pipes:
Pipes are an inter-process communication mechanism.
A pipe connects the output of one process to the input of another, so cooperating processes can pass
data along in sequence.
Communication is uni-directional in pipes (a minimal example follows).
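A minimal sketch of a pipe, assuming a POSIX system: the parent writes into one end and the child reads from the other, illustrating the one-way flow of data.

/* One-way communication through a pipe between a parent and its child. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {                  /* child: reader */
        close(fd[1]);                   /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n < 0)
            n = 0;
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        return 0;
    }
    close(fd[0]);                       /* parent: writer; close unused read end */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);
    return 0;
}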
What is the difference between THREAD, PROCESS and TASK?
A program in execution is known as a ‘process’. A program can have any number of processes, and every
process has its own address space.
Threads use the address space of the process they belong to. The difference between a thread and a
process is that when the CPU switches from one process to another, the current information needs to be
saved in the process descriptor and the information of the new process loaded, whereas switching from
one thread to another is simple.
A task is simply a set of instructions loaded into memory. Threads can split themselves into two or more
simultaneously running tasks.
Difference between NTFS and FAT32
The differences are as follows:
NTFS:
- Allows local access to Windows 2000, Windows 2003, and Windows NT with Service Pack 4; later versions
may get access to some files.
- Maximum partition size is 2TB and more.
- Maximum file size is up to 16TB.
- File and folder encryption is possible.

FAT 32:
- Allows local access to Windows 95, Windows 98, Windows ME, Windows 2000, and Windows XP on a local
partition.
- Maximum partition size is 2TB.
- Maximum file size is up to 4GB.
- File and folder encryption is not possible.
Differentiate between RAM and ROM
RAM:
- Volatile memory
- Needs continuous power to retain its contents
- Program information is stored in RAM
- RAM is read/write memory
- Cost is high
ROM:
- Permanent (non-volatile) memory
- Instructions are stored in ROM permanently
- The BIOS in ROM has the information needed to boot the system
- ROM is read-only memory
- Access speed is lower
What is DRAM? In which form does it store data?
DRAM – Dynamic Random Access Memory, a type of read/write memory. DRAM is cheap and does the given
task. DRAM has cells made up of a capacitor and a transistor, where the data resides. The capacitors
need to be recharged every couple of milliseconds, and this recharging tends to slow down the
performance of DRAM compared with speedier RAM types.
What is cache memory? Explain its functions.
Cache memory is RAM that the CPU can access more quickly than it can access regular RAM; the most
recently processed data is stored in cache memory. When the microprocessor starts processing data, it
first checks the cache memory.
The size of each cache block ranges from 1 to 16 bytes. Every location has an index that corresponds to
the location of the data to be accessed; this index is known as the address. The locations also have
tags, each containing the index of the datum in main memory that has been cached.
Differentiate between Compiler and Interpreter -
- The program syntax is checked by the compiler, whereas the keywords of the program are checked by the
interpreter.
- The complete program is checked by the compiler, whereas the interpreter checks it simultaneously in
the editor.
- Color coding is provided to the program by the interpreter, enabling self-debugging while authoring a
program.
- An interpreter converts each source code line into machine code and executes it on the fly.
- A compiler takes more time for analyzing and processing the program, whereas an interpreter takes much
less time for analyzing and processing it.
Describe different job scheduling in operating systems.
Job scheduling is the activity of deciding when processes receive the resources they request.
First Come First Served: In this scheduling, the job that has been waiting the longest is served next.
Round Robin Scheduling: A scheduling method in which every process gets a time slice to run and is then
preempted so that the next process can run. This is known as time sharing and gives the effect of all
the processes running at the same time.
Shortest Job First: A non-preemptive scheduling method in which the job expected to execute in the
shortest time is chosen next (a small worked comparison with FCFS follows the list).
Priority Scheduling: A scheduling method in which the highest-priority process is assigned the
resource.
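A small worked sketch (the burst times are made up) comparing average waiting time under First Come First Served and Shortest Job First, assuming all jobs arrive at time 0 and run to completion without preemption.

/* Average waiting time: FCFS in arrival order vs. SJF (shortest burst first). */
#include <stdio.h>
#include <stdlib.h>

static double avg_wait(const int *burst, int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;          /* job i waits for all jobs before it */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

static int by_burst(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int fcfs[] = {24, 3, 3};            /* jobs in arrival order */
    int sjf[]  = {24, 3, 3};

    printf("FCFS average wait: %.2f\n", avg_wait(fcfs, 3));    /* 17.00 */

    qsort(sjf, 3, sizeof sjf[0], by_burst);   /* SJF runs the shortest job first */
    printf("SJF  average wait: %.2f\n", avg_wait(sjf, 3));     /* 3.00 */
    return 0;
}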
What is a Real-Time System?
A real-time system is a hardware and software system that is subject to operational deadlines from an
event to the system's response. A real-time system's application can be considered mission critical. The
deadlines are relative to their events, and regardless of system load, the deadlines of a real-time
system must be met.
What do you mean by deadlock?
Deadlock is a situation in which two or more processes are waiting for each other to finish their
tasks. In this situation no progress is made; it is a standstill resulting from two evenly matched
processes waiting in a cyclic configuration.
Difference between Primary storage and secondary storage
- Primary memory storage is temporary, whereas secondary storage is permanent.
- Primary memory is expensive and smaller, whereas secondary memory is cheaper and larger.
- Primary memory storage is faster, whereas secondary storage is slower.
- Primary memory storage is connected to the CPU through data buses, whereas secondary storage is
connected through data cables.
Give an example of page replacement in the area of computer design.
Most computers have one or more memory caches consisting of 32-byte or 64-byte blocks of memory.
Whenever a cache is full, some block must be chosen for removal. This problem is precisely the same as
page replacement, except on a shorter time scale.
Describe a preemptive scheduling algorithm.
A preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time.
If it is still running at the end of the time interval, it is suspended and the scheduler picks another process to
run. Doing preemptive scheduling requires having a clock interrupt occur at the end of the time interval to
give control of the CPU back to the scheduler.

Give a situation where scheduling is needed.
When an interrupt occurs, a scheduling decision may be made. If the interrupt came from an I/O
device that has now completed its work, some process that was blocked waiting for the I/O may now be
ready to run. It is up to the scheduler to decide if the newly ready process should run or not.
Describe checkpointing.
Checkpointing a process means that its state is written to a file so that it can be restarted later. The
checkpoint contains not only the memory image, but also the resource state, that is, which resources are
currently assigned to the process. To be most effective, new checkpoints should not overwrite old ones but
should be written to new files. A whole sequence of checkpoint files thus accumulates as the process
executes.
Describe the two phases of two-phase locking?
In the first phase, the process tries to lock all the records it needs, one at a time. If it succeeds, it
begins the second phase, performing its updates and releasing the locks. No real work is done in the first
phase.
Explain deadlocks in operating systems.
Deadlock is a potential problem in any operating system. It occurs when each process in a group of
processes has been granted exclusive access to some resources, and each one wants yet another resource
that belongs to another process in the group. All of them are blocked and none will ever run again.
Explain paging systems.
Paging systems can be modeled by abstracting the page reference string from the program and using
the same reference string with different algorithms. These models can be used to make some predictions
about paging behavior. Attention to issues such as determining the working set, memory allocation, and
page size is required.
***********************************************************************************
Explain the concept of Reentrancy.
It is a useful, memory-saving technique for multiprogrammed timesharing systems. A Reentrant
Procedure is one in which multiple users can share a single copy of a program during the same period.
Reentrancy has 2 key aspects: The program code cannot modify itself, and the local data for each user
process must be stored separately. Thus, the permanent part is the code, and the temporary part is the pointer
back to the calling program and the local variables used by that program. Each execution instance is
called an activation. It executes the code in the permanent part, but has its own copy of local
variables/parameters. The
temporary part associated with each activation is the activation record. Generally, the activation record is
kept on the stack. Note: A reentrant procedure can be interrupted and called by an interrupting program, and
still execute correctly on returning to the procedure.
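An illustrative sketch of the two key aspects above: the first function keeps its state in a static variable shared by every caller and is therefore not reentrant, while the second keeps all state in parameters and local variables, so each activation has its own copy.

/* Non-reentrant vs. reentrant: shared static state vs. caller-owned state. */
#include <stdio.h>

/* NOT reentrant: concurrent or interrupting callers all share 'total'. */
static int running_total(int x)
{
    static int total = 0;       /* permanent data shared across all callers */
    total += x;
    return total;
}

/* Reentrant: only parameters and locals are used; nothing global is modified. */
static int add_to(int total, int x)
{
    return total + x;
}

int main(void)
{
    printf("%d\n", running_total(5));   /* result depends on earlier calls */
    int mine = 0;
    mine = add_to(mine, 5);             /* result depends only on the inputs */
    printf("%d\n", mine);
    return 0;
}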
Explain Belady's Anomaly.
Also called FIFO anomaly. Usually, on increasing the number of frames allocated to a process'
virtual memory, the process execution is faster, because fewer page faults occur. Sometimes, the reverse
happens, i.e., the execution time increases even when more frames are allocated to the process. This is
Belady's Anomaly. This is true for certain page reference patterns.
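A small sketch demonstrating the anomaly with the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 under FIFO replacement: 3 frames give 9 page faults, but 4 frames give 10.

/* Count FIFO page faults for the same reference string with 3 and 4 frames. */
#include <stdio.h>
#include <string.h>

static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], next = 0, faults = 0;
    memset(frames, -1, sizeof frames);       /* -1 marks an empty frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];          /* replace the oldest resident page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}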
What is a binary semaphore? What is its use?
A binary semaphore is one, which takes only 0 and 1 as values. They are used to implement mutual
exclusion and synchronize concurrent processes.

What is thrashing?
It is a phenomenon in virtual memory schemes when the processor spends most of its time swapping
pages, rather than executing instructions. This is due to an inordinate number of page faults.
List the Coffman's conditions that lead to a deadlock.
• Mutual Exclusion: Only one process may use a critical resource at a time.
• Hold & Wait: A process may be allocated some resources while waiting for others.
• No Pre-emption: No resource can be forcibly removed from a process holding it.
• Circular Wait: A closed chain of processes exists such that each process holds at least one resource
needed by another process in the chain.
What are short-, long- and medium-term scheduling?
Long term scheduler determines which programs are admitted to the system for processing. It
controls the degree of multiprogramming. Once admitted, a job becomes a process. Medium term scheduling
is part of the swapping function. This relates to processes that are in a blocked or suspended state. They are
swapped out of real-memory until they are ready to execute. The swapping-in decision is based on memory-
management criteria.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the
finest-grained decision of which process should execute next. This scheduler is invoked whenever an
event occurs. It may lead to interruption of one process by preemption.
What are turnaround time and response time?
Turnaround time is the interval between the submission of a job and its completion. Response time is
the interval between submission of a request, and the first response to that request.
What are the typical elements of a process image?
• User data: Modifiable part of user space. May include program data, user stack area, and programs
that may be modified.
• User program: The instructions to be executed.
• System Stack: Each process has one or more LIFO stacks associated with it. Used to store parameters
and calling addresses for procedure and system calls.
• Process Control Block (PCB): Info needed by the OS to control processes.
What is the Translation Lookaside Buffer (TLB)?
In a cached system, the base addresses of the last few referenced pages are maintained in registers
called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been most
recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch
the appropriate page-table entry, and one to fetch the desired data. Using a TLB in between, this is
reduced to just one physical memory access in the case of a TLB hit (a back-of-the-envelope example
follows).
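A back-of-the-envelope sketch of the effect (the timings and hit ratio are assumptions, not from the text): a TLB hit costs the TLB lookup plus one memory access, while a miss costs the lookup plus two memory accesses.

/* Effective memory access time with a TLB, for assumed timings. */
#include <stdio.h>

int main(void)
{
    double mem_ns = 100.0;   /* one physical memory access */
    double tlb_ns = 10.0;    /* one TLB lookup */
    double hit_ratio = 0.9;  /* fraction of references that hit the TLB */

    double eat = hit_ratio * (tlb_ns + mem_ns)
               + (1.0 - hit_ratio) * (tlb_ns + 2.0 * mem_ns);
    printf("effective access time: %.1f ns\n", eat);   /* 120.0 ns */
    return 0;
}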
What is the resident set and working set of a process?
Resident set is that portion of the process image that is actually in real-memory at a particular instant.
Working set is that subset of the resident set that is actually needed for execution. (Relate this to
the variable-window-size method for swapping techniques.)
When is a system in safe state?
The set of dispatchable processes is in a safe state if there exists at least one temporal order in which
all processes can be run to completion without resulting in a deadlock.
What is cycle stealing?
We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA
controller can use the data bus when the CPU does not need it, or it may force the CPU to temporarily
suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at
specific break points in an instruction cycle.
What is meant by arm-stickiness?
If one or a few processes have a high access rate to data on one track of a storage disk, then they may
monopolize the device by repeated requests to that track. This generally happens with most common device
scheduling algorithms (LIFO, SSTF, C-SCAN, etc). High-density multisurface disks are more likely to be
affected by this than low density ones.
What are the stipulations of C2 level security?
C2 level security provides for:
• Discretionary Access Control
• Identification and Authentication
• Auditing
• Resource reuse
What is busy waiting?
The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting.
The CPU is not engaged in any real productive activity during this period, and the process does not progress
toward completion.
Explain the popular multiprocessor thread-scheduling strategies.
• Load Sharing: Processes are not assigned to a particular processor. A global queue of threads is
maintained. Each processor, when idle, selects a thread from this queue. Note that load balancing
refers to a scheme where work is allocated to processors on a more permanent basis.
• Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time,
on a 1-to-1 basis. Closely related threads / processes may be scheduled this way to reduce
synchronization blocking, and minimize process switching. Group scheduling predated this strategy.
• Dedicated processor assignment: Provides implicit scheduling defined by assignment of threads to
processors. For the duration of program execution, each program is allocated a set of processors equal
in number to the number of threads in the program. Processors are chosen from the available pool.
• Dynamic scheduling: The number of threads in a program can be altered during the course of
execution.
When does the condition 'rendezvous' arise?
In message passing, it is the condition in which both the sender and the receiver are blocked until the
message is delivered.
What is a trap and trapdoor?
Trapdoor is a secret undocumented entry point into a program used to grant access without normal
methods of access authentication. A trap is a software interrupt, usually the result of an error condition.
What are local and global page replacements?
Local replacement means that an incoming page is brought in only to the relevant process' address
space. Global replacement policy allows any page frame from any process to be replaced. The latter is
applicable to variable partitions model only.
Define latency, transfer and seek time with respect to disk I/O.
Seek time is the time required to move the disk arm to the required track. Rotational delay or latency
is the time it takes for the beginning of the required sector to reach the head. Sum of seek time (if any) and
latency is the access time. Time taken to actually transfer a span of data is transfer time.
Describe the Buddy system of memory allocation.
Free memory is maintained in linked lists, each holding blocks of equal size; any such block is of size
2^k. When some memory is required by a process, the block size of the next higher order is chosen and
broken into two. Note that the two pieces differ in address only in their kth bit; such pieces are
called buddies. When a used block is freed, the OS checks to see whether its buddy is also free; if so,
the two are rejoined and put back into the original free-block linked list (a small sketch of the buddy
calculation follows).
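A small sketch of the buddy relationship just described: because the two halves of a split block differ only in bit k of their offsets, a block's buddy can be found by flipping that bit with XOR.

/* For a block of size 2^k at a given offset, the buddy differs only in bit k. */
#include <stdio.h>

static unsigned long buddy_of(unsigned long addr, unsigned k)
{
    return addr ^ (1UL << k);       /* flip the kth bit of the offset */
}

int main(void)
{
    /* splitting a 256-byte block at offset 0 gives two 128-byte (2^7) buddies */
    unsigned k = 7;
    printf("buddy of 0 is %lu\n", buddy_of(0, k));       /* 128 */
    printf("buddy of 128 is %lu\n", buddy_of(128, k));   /* 0   */
    return 0;
}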
What is time-stamping?
It is a technique proposed by Lamport, used to order events in a distributed system without the use of
clocks. This scheme is intended to order events consisting of the transmission of messages. Each system 'i' in
the network maintains a counter Ci. Every time a system transmits a message, it increments its counter by 1
and attaches the time-stamp Ti to the message. When a message is received, the receiving system 'j' sets its
counter Cj to 1 more than the maximum of its current value and the incoming time-stamp Ti. At each site,
the ordering of messages is determined by the following rules: for messages x from site i and y from
site j, x precedes y if (a) Ti < Tj, or (b) Ti = Tj and i < j (a small sketch of the counter rules
follows).
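A small sketch of the counter rules just described (the site and message structures are illustrative): the sender increments its counter and attaches it as the time-stamp; the receiver sets its counter to one more than the maximum of its current value and the incoming stamp.

/* Lamport-style time-stamping: counter updates on send and on receive. */
#include <stdio.h>

struct site { int counter; };
struct message { int timestamp; };

static struct message send_message(struct site *s)
{
    s->counter += 1;                        /* increment before transmitting */
    struct message m = { s->counter };      /* attach time-stamp Ti */
    return m;
}

static void receive_message(struct site *s, struct message m)
{
    int max = s->counter > m.timestamp ? s->counter : m.timestamp;
    s->counter = max + 1;                   /* Cj = max(Cj, Ti) + 1 */
}

int main(void)
{
    struct site i = { 4 }, j = { 1 };
    struct message m = send_message(&i);    /* Ti becomes 5 */
    receive_message(&j, m);                 /* Cj becomes max(1, 5) + 1 = 6 */
    printf("Ci = %d, Cj = %d\n", i.counter, j.counter);
    return 0;
}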
How are the wait/signal operations for monitor different from those for semaphores?
If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost;
this allows easier program design. With semaphores, on the other hand, every operation affects the value
of the semaphore, so the wait and signal operations must be perfectly balanced in the program.
In the context of memory management, what are placement and replacement algorithms?
Placement algorithms determine where in available real-memory to load a program. Common
methods are first-fit, next-fit, best-fit. Replacement algorithms are used when memory is full, and one
process (or part of a process) needs to be swapped out to accommodate a new program. The replacement
algorithm determines which are the partitions to be swapped out.
In loading programs into memory, what is the difference between load-time dynamic linking and run-
time dynamic linking?
For load-time dynamic linking: Load module to be loaded is read into memory. Any reference to a
target external module causes that module to be loaded and the references are updated to a relative address
from the start base address of the application module. With run-time dynamic linking, some of the
linking is postponed until an actual reference during execution; then the correct module is loaded and
linked.
What are demand- and pre-paging?
With demand paging, a page is brought into memory only when a location on that page is actually
referenced during execution. With pre-paging, pages other than the one demanded by a page fault are
brought in. The selection of such pages is done based on common access patterns, especially for secondary
memory devices.
Paging is a memory-management function, while multiprogramming is a processor-management function; are
the two interdependent?
Yes.
What is page cannibalizing?
Page swapping or page replacements are called page cannibalizing.
What has triggered the need for multitasking in PCs?
Increased speed and memory capacity of microprocessors, together with support for virtual memory, and
the growth of client-server computing.
What are the four layers that Windows NT has in order to achieve independence?
• Hardware abstraction layer
• Kernel
• Subsystems
• System Services.
What is SMP?
To achieve maximum efficiency and reliability a mode of operation known as symmetric
multiprocessing is used. In essence, with SMP any process or thread can be assigned to any processor.
What are the key object oriented concepts used by Windows NT?
Encapsulation
Object class and instance
Is Windows NT a full-blown object-oriented operating system? Give reasons.
No, Windows NT is not, because it is not implemented in an object-oriented language, its data structures
reside within one executive component and are not represented as objects, and it does not support
object-oriented capabilities.
What is a drawback of MVT?
It lacks features such as
• ability to support multiple processors
• virtual storage
• source level debugging
What is process spawning?
When the OS creates a process at the explicit request of another process, this action is called process
spawning.
How many jobs can be run concurrently on MVT?
15 jobs

List some reasons for process termination.
• Normal completion
• Time limit exceeded
• Memory unavailable
• Bounds violation
• Protection error
• Arithmetic error
• Time overrun
• I/O failure
• Invalid instruction
• Privileged instruction
• Data misuse
• Operator or OS intervention
• Parent termination.
What are the reasons for process suspension?
• Swapping
• Interactive user request
• Timing
• Parent process request
What is process migration?
It is the transfer of a sufficient amount of the state of a process from one machine to a target
machine.
What is mutant?
In Windows NT a mutant provides kernel mode or user mode mutual exclusion with the notion of
ownership.
What is an idle thread?
The special thread a dispatcher will execute when no ready thread is found.
What is FtDisk?
It is a fault-tolerant disk driver for Windows NT.
What are the possible states a thread can have?
• Ready
• Standby
• Running
• Waiting
• Transition
• Terminated.
What are rings in Windows NT?
Windows NT uses a protection mechanism called rings, provided by the processor, to implement separation
between user mode and kernel mode.
What is Executive in Windows NT?
In Windows NT, executive refers to the operating system code that runs in kernel mode.
What are the sub-components of I/O manager in Windows NT?
• Network redirector/ Server
• Cache manager.
• File systems
• Network driver
• Device driver
What are DDKs? Name an operating system that includes this feature.
DDKs are Device Driver Kits, which are the equivalent of SDKs for writing device drivers. Windows NT
includes DDKs.
What level of security does Windows NT meet?
C2 level security.