
Operating System

1.) What is system call?
Ans:- A system call provides an interface between a process and the operating system. It is a request made by a program to the operating system to perform a task. It is used whenever a program needs to access a restricted resource, e.g. a file on the hard disk or a hardware device.

2.) What is time sharing operating system?
Ans:- In a time sharing operating system, equal time slots are allotted to processes for execution, which also leads to context switching, in which control shifts from one process to another. If a process does not complete its work within its time slot, no extra time is given to it. For example, if you are running a process with a time slot of 5 seconds and the process does not complete in 5 seconds but requires 1 extra second, it will be executed in the next execution cycle; the time slot will not be extended. In this environment a computer provides computing services to several or many users concurrently on-line. Here, the various users share the central processor, the memory, and other resources of the computer system in a manner facilitated, controlled, and monitored by the operating system.

3.) What is real time operating system?
Ans:- In a real-time operating system, different processes are executed for some time slot, but for certain processes the time slot can be extended, while context switching can also take place in the same manner. For example, if you are executing a process which is given 7 seconds but requires 8 seconds for its completion, the time period will be extended by 1 second, and after that another context switch takes place.
The systems, in this case, are designed to be interrupted by external signals that require the immediate attention of the computer system. These real-time operating systems are used to control machinery, scientific instruments and industrial systems. An RTOS typically has very little user-interface capability, and no end-user utilities. A very important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time every time it occurs.

4.) What is response time?
Ans:- Response time is the amount of time from when a request is submitted until the first response is produced. It is most frequently considered in time sharing and real-time operating systems. Response time = t(first response) - t(submission of request)

5.) What is turnaround time?
Ans:- Turnaround time is the total time between submission of a process and its completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, CPU time and I/O operations. Turnaround time = t(process completed) - t(process submitted).

6.) What is throughput?
Ans:- Throughput is the number of processes that complete their execution per time unit. One way to measure throughput is by means of the number of processes that are completed in a unit of time. The higher the number of processes, the more work apparently is being done by the system. But this approach is not very useful for comparison because it is dependent on the characteristics and resource requirements of the processes being executed. Throughput = (No. of processes completed) / (Time unit)

7.) What is waiting time?
Ans:- This is the time spent in the ready queue. In a multiprogramming operating system several jobs reside in memory at a time. The CPU executes only one job at a time; the rest of the jobs wait for the CPU. The waiting time may be expressed as the turnaround time less the actual processing time. Waiting time = Turnaround time - Processing time.

8.) What is CPU utilization?
Ans:- The key idea is that if the CPU is busy all the time, the utilization factor of all the components of the system will also be high. CPU utilization is the ratio of the busy time of the processor to the total time it takes for processes to finish. Processor utilization = (Processor busy time) / (Processor busy time + Processor idle time).

9.) What is swapping?
Ans:- When we load a file or program, the file is stored in random access memory (RAM). Since RAM is finite, some files cannot fit in it. These files are stored in a special section of the hard drive called the "swap file".
"Swapping" is the act of using this swap file. Swapping is a mechanism in which a process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The system selects the least busy process and moves it in its entirety (meaning the program's in-RAM text, stack, and data segments) to disk. As more RAM becomes available, it swaps the process back in from disk into RAM. This use of the virtual memory system makes it possible for you to continue to use the machine.
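The timing formulas in questions 4 through 8 can be checked with a small calculation. A minimal sketch, assuming three hypothetical processes run back-to-back in first-come-first-served order (all arrival and burst values are invented for illustration):

```python
# Scheduling metrics for three processes run back-to-back (FCFS).
# Times are in seconds; arrival and burst values are made up.
processes = [
    {"name": "P1", "arrival": 0, "burst": 5},
    {"name": "P2", "arrival": 1, "burst": 3},
    {"name": "P3", "arrival": 2, "burst": 2},
]

clock = 0
for p in processes:
    start = max(clock, p["arrival"])
    clock = start + p["burst"]
    p["completion"] = clock
    # Turnaround time = t(process completed) - t(process submitted)
    p["turnaround"] = p["completion"] - p["arrival"]
    # Waiting time = turnaround time - processing (burst) time
    p["waiting"] = p["turnaround"] - p["burst"]

busy = sum(p["burst"] for p in processes)
throughput = len(processes) / clock   # processes completed per time unit
utilization = busy / clock            # CPU is never idle here, so 1.0

for p in processes:
    print(p["name"], "turnaround:", p["turnaround"], "waiting:", p["waiting"])
print("throughput:", throughput, "CPU utilization:", utilization)
```

With these numbers, P1 finishes at t=5 (turnaround 5, waiting 0), P2 at t=8 (turnaround 7, waiting 4), P3 at t=10 (turnaround 8, waiting 6), giving a throughput of 0.3 processes per second.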

10.) What is virtual memory?
Ans:- Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory (an address space), while in fact it may be physically fragmented and may even overflow on to disk storage. Developed for multitasking kernels, virtual memory provides two primary functions: 1. Each process has its own address space, so it does not need to be relocated, nor does it need to use relative addressing. 2. Each process sees one contiguous block of free memory upon launch; fragmentation is hidden.

11.) What is physical & logical memory?
Ans:- Physical memory:: Physical memory is a block of memory stored within the computer's random access memory (RAM). The way physical memory is "stored" within the RAM depends on the type of RAM the system uses. For example, dynamic random access memory (DRAM) stores each bit of data in its own capacitor that must be refreshed periodically. A capacitor is an electronic device that stores an electrical charge for a limited time. This allows it to either store a charge (a binary 1) or no charge (a binary 0). This is how DRAM chips store individual bits of data in a computer.

Logical memory:: Logical memory is the address space, assigned to a logical partition, that the operating system perceives as its main storage. For a logical partition that uses shared memory (hereafter referred to as a shared memory partition), a subset of the logical memory is backed by physical main storage and the remaining logical memory is kept in auxiliary storage.

12.) What is trap?
Ans:- A trap, also known as an exception or a fault, is a type of synchronous interrupt typically caused by an exceptional condition (e.g., breakpoint, division by zero, invalid memory access). A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process.
A trap in a system process is more serious than a trap in a user process, and in some systems is fatal.

13.) What is kernel?
Ans:- The kernel is the central part of an operating system that directly controls the computer hardware. The kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components).

14.) Define process management.
Ans:- Process management is an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes and enable synchronisation among processes. To meet these requirements, the OS must maintain a data structure for each process, which describes the state and resource ownership of that process, and which enables the OS to exert control over each process.

15.) What is memory management?
Ans:- Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to the computer system. Memory is a large array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads or writes to specific memory addresses. The CPU fetches from and stores in memory.

16.) Define storage management.
Ans:- The computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the primary on-line storage of information, of both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing. Hence the proper management of disk storage is of central importance to a computer system. There are few alternatives. Magnetic tape systems are generally too slow. In addition, they are limited to sequential access. Thus tapes are more suited for storing infrequently used files, where speed is not a primary concern.

17.) What is protection?
Ans:- Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide a means of specifying the controls to be imposed, together with some means of enforcement.

18.) What is file management?
Ans:- The system that an operating system or program uses to organize and keep track of files. The operating system is responsible for the following activities in connection with file management: The creation and deletion of files. The creation and deletion of directories. The support of primitives for manipulating files and directories.

The mapping of files onto disk storage. The backup of files on stable (non-volatile) storage. The protection and security of the files.

19.) Define device management.
Ans:- Device management controls peripheral devices by sending them commands in their own proprietary language. The software routine that knows how to deal with each device is called a "driver," and the OS requires drivers for the peripherals attached to the computer. When a new peripheral is added, that device's driver is installed into the operating system.

20.) What is microkernel?
Ans:- A microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC). The microkernel was designed to address the increasing growth of kernels and the difficulties that came with them.

21.) What is JVM?
Ans:- The heart of the Java platform is the concept of a "virtual machine" that executes Java bytecode programs. This bytecode is the same no matter what hardware or operating system the program is running under. There is a JIT compiler within the Java Virtual Machine, or JVM. The JIT compiler translates the Java bytecode into native processor instructions at run-time and caches the native code in memory during execution.

22.) What is bootstrap loader?
Ans:- A bootstrap loader is a program that resides in the computer's EPROM, ROM, or other non-volatile memory and that is automatically executed by the processor when the computer is turned on. The bootstrap loader reads the hard disk drive's boot sector to continue the process of loading the computer's operating system. The boot loader is now part of the EFI BIOS.

23.) What is PCB?
Ans:- A Process Control Block (PCB, also called Task Controlling Block or Task Struct) is a data structure in the operating system kernel containing the information needed to manage a particular process.
The PCB is "the manifestation of a process in an operating system".

24.) What do you mean by scheduling?
Ans:- Scheduling refers to a set of policies and mechanisms supported by the operating system that controls the order in which the work to be done is completed. All computer resources are scheduled before use. Since the CPU is one of the primary computer resources, its scheduling is central to operating system design.
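A PCB can be pictured as a plain record. The exact fields vary by kernel, so the ones below (pid, state, program counter, registers, open files, priority) are only the commonly cited examples, not taken from any real operating system:

```python
from dataclasses import dataclass, field

# A toy Process Control Block: the kernel keeps one of these per process.
# Field names are illustrative, not copied from any real kernel.
@dataclass
class PCB:
    pid: int
    state: str = "new"            # new, ready, running, waiting, terminated
    program_counter: int = 0      # where to resume execution
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    priority: int = 0

pcb = PCB(pid=42)
pcb.state = "ready"               # the scheduler moves processes between states
print(pcb.pid, pcb.state)
```

On a context switch, the kernel saves the running process's CPU state into its PCB and restores the next process's state from its PCB.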

25.) What is scheduler?
Ans:- A scheduler is an operating system program (module) that selects the next job to be admitted for execution. The main objective of scheduling is to increase CPU utilization and achieve higher throughput. There are three types of schedulers: the short-term scheduler, the long-term scheduler and the medium-term scheduler.

26.) What is cascading termination?
Ans:- If a parent process terminates, then all its child processes must also be terminated. This phenomenon is referred to as cascading termination.

27.) What is interprocess communication?
Ans:- Interprocess communication (IPC) is a capability supported by the operating system that allows one process to communicate with another process. The processes can be running on the same computer or on different computers connected through a network. IPC enables one application to control another application, and several applications to share the same data without interfering with one another. IPC is required in all multiprogramming systems, but it is not generally supported by single-process operating systems such as DOS. OS/2 and MS-Windows support an IPC mechanism called Dynamic Data Exchange.

28.) What do you mean by shared memory system?
Ans:- Shared-memory systems require communicating processes to share some variables. The processes are expected to exchange information through the use of these shared variables. In a shared-memory scheme, the responsibility for providing communication rests with the application programmers; the operating system only needs to provide the shared memory.

29.) What is message passing?
Ans:- Message passing systems allow communicating processes to exchange messages. In this scheme, the responsibility rests with the operating system itself. The function of a message-passing system is to allow processes to communicate with each other without the need to resort to shared variables. An interprocess communication facility basically provides two operations: send(message) and receive(message).
In order to send and receive messages, a communication link must exist between the two involved processes.

30.) What is CPU burst?
Ans:- Burst time is an estimate of how long a process requires the CPU between I/O waits. It cannot be predicted exactly before a process starts.

It means the amount of time a process uses the CPU at a single stretch. (A process can use the CPU several times before completing its job.)

31.) What is non pre-emptive scheduling?
Ans:- A scheduling discipline is non-preemptive if, once the CPU has been allotted to a process, it cannot be taken away from that process. A non-preemptive discipline always processes a scheduled request to its completion.

32.) What is pre-emptive scheduling?
Ans:- Preemption means the operating system moves a process from running to ready without the process requesting it. Preemptive scheduling is more useful for high-priority processes which require an immediate response.

33.) Define Spooling and Buffering.
Ans:- Spooling::: Spooling refers to the process of placing data in a temporary working area for another program to process. The most common use is in writing files to a magnetic tape or disk and entering them in the work queue (possibly just linking them to a designated folder in the file system) for another process. Spooling is useful because devices access data at different rates. Spooling allows one program to assign work to another without directly communicating with it.

Buffering::: Buffering is a method of overlapping the computation of a job with its execution. It temporarily stores input or output data in an attempt to better match the speeds of two devices, such as a fast CPU and a slow disk drive. If, for example, the CPU writes information to the buffer, it can continue its computation while the disk drive stores the information.

Difference::: Buffering overlaps the input, output and processing of a single job, whereas spooling allows the CPU to overlap the input of one job with the computation and output of other jobs. Therefore this approach is better than buffering. Even in a simple system, the spooler may be reading the input of one job while printing the output of a different job.

34.) What is processor?
Ans:- A processor is the primary chip inside a computer and contains the digital circuitry. A processor executes all the programs and the instructions inside the computer. The processor is also embedded in small devices and in personal computers, where it is known as a microprocessor. Its speed is measured in gigahertz. The higher the processor's speed, the more instructions it can process in less time. It is also known as the central processing unit (CPU).

35.) What is page fault?
Ans:- A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but not loaded in physical memory. In the typical case the operating system tries to handle the page fault by making the required page accessible at a location in physical memory, or kills the program in the case of an illegal access. The hardware that detects a page fault is the memory management unit in a processor. The exception handling software that handles the page fault is generally part of the operating system.

36.) What is thrashing?
Ans:- This is when you're trying to manipulate large amounts of data and the computer cannot fit them all into memory: it has to keep dropping some of the data out to the paging file (a virtual memory area which is actually just a file on disk) and then loading other bits back into memory. If this movement is required a great deal, because you've got a lot of data, then the computer ends up spending more time moving data than actually processing it; that is thrashing.

Causes of thrashing:: It results in severe performance problems. 1) If CPU utilization is too low, we increase the degree of multiprogramming by introducing a new process to the system, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming. 2) CPU utilization is plotted against the degree of multiprogramming. 3) As the degree of multiprogramming increases, CPU utilization also increases. 4) If the degree of multiprogramming is increased further, thrashing sets in and CPU utilization drops sharply. 5) So, at this point, to increase CPU utilization and to stop thrashing, we must decrease the degree of multiprogramming.
How to prevent thrashing:: We must provide a process with as many frames as it needs. Several techniques are used.

The Working-Set Model (Strategy): It starts by looking at how many frames a process is actually using. This defines the locality model.

Locality Model: It states that as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which overlap.

37.) What is page replacement?
Ans:- Page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.

38.) What is demand paging?
Ans:- Demand paging (as opposed to anticipatory paging) is an application of virtual memory. In a system that uses demand paging, the operating system copies a disk page into physical memory only if an attempt is made to access it (i.e., if a page fault occurs). It follows that a process begins execution with none of its pages in physical memory, and many page faults will occur until most of the process's working set of pages is located in physical memory. Demand paging means that pages should only be brought into memory if the executing process demands them.

Advantages: Demand paging, as opposed to loading all pages immediately:

- Only loads pages that are demanded by the executing process.
- As there is more space in main memory, more processes can be loaded, reducing context-switching time, which uses large amounts of resources.
- Less loading latency occurs at program startup, as less information is accessed from secondary storage and less information is brought into main memory.

Disadvantages

- Individual programs face extra latency when they access a page for the first time, so demand paging may perform worse than anticipatory paging algorithms.
- Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
- Memory management with page replacement algorithms becomes slightly more complex.
- Possible security risks, including vulnerability to timing attacks; see Percival 2005, "Cache Missing for Fun and Profit" (specifically the virtual memory attack in section 2).
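Demand paging and page replacement can be simulated in a few lines. The sketch below loads a page only when it is first referenced (a page fault) and evicts the oldest resident page when the frames are full; the FIFO policy and the 3-frame memory are arbitrary choices for illustration:

```python
from collections import deque

def fifo_demand_paging(reference_string, num_frames):
    """Load pages only on access (demand paging); evict FIFO when frames are full."""
    frames = deque()              # pages currently resident in physical memory
    faults = 0
    for page in reference_string:
        if page not in frames:    # page fault: page not yet in memory
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest resident page (FIFO)
            frames.append(page)
    return faults

# A classic-style reference string; with 3 frames, FIFO incurs 9 faults here.
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
print(fifo_demand_paging(refs, 3))
```

Note that the first few references always fault: a demand-paged process starts with no pages in memory at all, exactly as the answer above describes.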

39.) What is segmentation?
Ans:- Memory segmentation is the division of computer memory into segments or sections. Segments or sections are also used in object files of compiled programs when they are linked together into a program image, or when the image is loaded into memory. In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset within that segment. Different segments may be created for different program modules, or for different classes of memory usage such as code and data segments. Certain segments may even be shared between programs.

40.) What is paging?
Ans:- Paging is a memory management technique in which memory is divided into fixed-size pages. Paging is used for faster access to data. When a program needs a page, it is available in main memory, as the OS copies a certain number of pages from the storage device into main memory. Paging allows the physical address space of a process to be noncontiguous.

41.) What is page number?
Ans:- Paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging over memory segmentation is that it allows the physical address space of a process to be noncontiguous. Before paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.

42.) What are frames?
Ans:- In a paged system, logical memory is divided into a number of fixed-size chunks called pages. Physical memory is likewise pre-divided into blocks of the same fixed size (the size of a page) called page frames.

43.) What is page table?
Ans:- A page table is the data structure used by a virtual memory system in an operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process. Physical addresses are those unique to the hardware, i.e., RAM.
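The page-number/offset split and the page-table lookup can be sketched as follows, with a small dictionary standing in for a TLB-style cache of recent translations. The 256-byte page size and the mappings are made up for illustration:

```python
PAGE_SIZE = 256                      # assumed page size (2**8 bytes)

page_table = {0: 5, 1: 9, 2: 3}      # virtual page -> physical frame (hypothetical)
tlb = {}                             # tiny cache of recent translations

def translate(vaddr):
    # Split the virtual address into page number and offset within the page.
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:                  # TLB hit: skip the page-table walk
        frame = tlb[page]
    elif page in page_table:         # TLB miss: consult the page table
        frame = tlb[page] = page_table[page]
    else:                            # no mapping at all: page fault
        raise MemoryError("page fault: page %d not mapped" % page)
    return frame * PAGE_SIZE + offset

print(translate(260))   # page 1, offset 4 -> frame 9 -> 9*256 + 4 = 2308
```

Real hardware does the same split with bit masks, and the page-table entry also carries protection and validity bits, which are omitted here.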

44.) What is TLB?
Ans:- A translation lookaside buffer (TLB) is a cache that memory management hardware uses to improve virtual address translation speed. All current desktop, notebook, and server processors use a TLB to map virtual and physical address spaces, and it is nearly always present in any hardware that utilizes virtual memory. A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses. The virtual memory is the space seen from a process. This space is segmented in pages of a prefixed size. The page table (generally loaded in memory) keeps track of where the virtual pages are loaded in physical memory. The TLB is a cache of the page table; that is, only a subset of its contents is stored.

45.) What is fragmentation? Types of it.
Ans:- As processes are loaded into and removed from memory, the free memory space is broken into little pieces; such pieces may or may not be of any use when allocated individually to any process. This gives rise to memory waste, or fragmentation.

External fragmentation: External fragmentation refers to the division of free storage into small pieces over a period of time, due to an inefficient memory allocation algorithm, resulting in a lack of sufficient storage for another program because these small pieces are not contiguous.

Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.

46.) What do you mean by address binding?
Ans:- A program, to be executed, must be brought into main memory. The instructions that use addresses in a program must be bound to the proper address space in main memory. Address binding is a scheme that performs this job.
It can be thought of as a mapping from one address space to another. The following bindings are available: compile-time binding, load-time binding, and execution-time binding.
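Execution-time binding is typically done in hardware with a relocation (base) register and a limit register. A sketch of the idea, with made-up base and limit values:

```python
# Execution-time address binding: physical = base + logical.
# The limit register guards against addressing outside the process's space.
BASE = 14000      # relocation register (hypothetical load address)
LIMIT = 3000      # size of the process's logical address space

def bind(logical_addr):
    if not 0 <= logical_addr < LIMIT:
        # Out-of-range access traps to the OS instead of binding.
        raise MemoryError("trap: address out of bounds")
    return BASE + logical_addr

print(bind(346))   # logical 346 maps to physical 14346
```

Because the binding happens on every access at run time, the OS can move the process in memory and only has to update the base register.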

47.) What is logical address?
Ans:- A logical address is the address at which an item (memory cell, storage element, network host) appears to reside from the perspective of an executing application program.

48.) What is banker's algorithm?
Ans:- The Banker's algorithm is a resource allocation and deadlock avoidance algorithm that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether allocation should be allowed to continue.

49.) What is deadlock?
Ans:- A deadlock is a situation which occurs when a process enters a waiting state because a resource requested by it is being held by another waiting process, which in turn is waiting for another resource. If a process is unable to change its state indefinitely because the resources requested by it are being used by other waiting processes, then the system is said to be in a deadlock. Deadlock is a common problem in multiprocessing systems, parallel computing and distributed systems, where software and hardware locks are used to handle shared resources and implement process synchronization.

The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm avoids deadlock by denying or postponing the request if it determines that accepting the request could put the system in an unsafe state (one where deadlock could occur). When a new process enters the system, it must declare the maximum number of instances of each resource type it may need, which may not exceed the total number of resources in the system. Also, when a process gets all its requested resources, it must return them in a finite amount of time. For the Banker's algorithm to work, it needs to know three things:

- How much of each resource each process could possibly request
- How much of each resource each process is currently holding
- How much of each resource the system currently has available
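The safety check at the heart of the Banker's algorithm can be sketched directly from those three tables. The need matrix is max minus allocation; the system is safe if some order lets every process finish. The five-process, three-resource data below is a textbook-style example with invented values:

```python
# Banker's algorithm safety check: is there an order in which every process
# can finish with the currently available resources?
def is_safe(available, max_need, allocation):
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)        # resources free right now
    finished = [False] * n
    order = []                    # one safe sequence, if it exists
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

safe, order = is_safe(
    available=[3, 3, 2],
    max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
)
print(safe, order)
```

For these numbers the state is safe; scanning processes in index order yields the safe sequence P1, P3, P4, P0, P2. A request is granted only if pretending to grant it still leaves the system in a safe state.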

50.) Difference Between Deadlock Prevention and Deadlock Avoidance.
Ans:-
Deadlock Prevention:
- Prevents deadlocks by constraining how requests for resources can be made in the system and how they are handled (system design).
- The goal is to ensure that at least one of the necessary conditions for deadlock can never hold.

Deadlock Avoidance:
- The system dynamically considers every request and decides whether it is safe to grant it at this point.
- The system requires additional information regarding the overall potential use of each resource for each process.
- Allows more concurrency.

51.) What is the dining philosophers problem?
Ans:- Problem statement ::

[Illustration of the dining philosophers problem]

Five silent philosophers sit at a table around a bowl of spaghetti. A fork is placed between each pair of adjacent philosophers. Each philosopher must alternately think and eat. Eating is not limited by the amount of spaghetti left: assume an infinite supply. However, a philosopher can only eat while holding both the fork to the left and the fork to the right (an alternative problem formulation uses rice and chopsticks instead of spaghetti and forks). Each philosopher can pick up an adjacent fork, when available, and put it down, when holding it. These are separate actions: forks must be picked up and put down one by one. The problem is how to design a discipline of behavior (a concurrent algorithm) such that no philosopher will starve, i.e. each can forever continue to alternate between eating and thinking.

Issues
The problem was designed to illustrate the challenge of avoiding deadlock, a system state in which no progress is possible. One idea is to instruct each philosopher to behave as follows:

- think until the left fork is available; when it is, pick it up
- think until the right fork is available; when it is, pick it up
- eat
- put the left fork down
- put the right fork down
- repeat from the start
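The protocol can be written with locks standing in for forks. Note that the sketch below deliberately adds the well-known asymmetry fix (the last philosopher picks the forks up in the opposite order), so it actually terminates; with the naive left-then-right order for everyone, it can deadlock, as discussed next:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one fork between each pair
meals = [0] * N

def philosopher(i):
    # Naive order: left fork (i) then right fork (i+1). The asymmetry fix:
    # the last philosopher reverses the order, breaking the circular wait.
    if i != N - 1:
        first, second = i, (i + 1) % N
    else:
        first, second = (i + 1) % N, i
    for _ in range(3):            # each philosopher eats three times
        with forks[first]:
            with forks[second]:
                meals[i] += 1     # "eating" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)                      # every philosopher managed to eat
```

Reversing one philosopher's pickup order is a form of resource ordering: it makes a cycle of waiting philosophers impossible, which is exactly the circular-wait condition that the naive protocol fails to rule out.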

This solution is incorrect: it allows the system to reach deadlock, namely, the state in which each philosopher has picked up the fork to the left and is waiting for the fork to the right to be put down. Resource starvation might also occur independently of deadlock if a particular philosopher is unable to acquire both forks because of a timing problem. For example, there might be a rule that the philosophers put down a fork after waiting five minutes for the other fork to become available and wait a further five minutes before making their next attempt. This scheme eliminates the possibility of deadlock (the system can always advance to a different state) but still suffers from the problem of livelock: if all five philosophers appear in the dining room at exactly the same time and each picks up the left fork at the same time, the philosophers will wait five minutes until they all put their forks down and then wait a further five minutes before they all pick them up again.

Mutual exclusion is the core idea of the problem, and the dining philosophers create a generic and abstract scenario useful for explaining issues of this type. The failures these philosophers may experience are analogous to the difficulties that arise in real computer programming when multiple programs need exclusive access to shared resources. These issues are studied in the branch of concurrent programming. The original problems of Dijkstra were related to external devices like tape drives. However, the difficulties studied in the dining philosophers problem arise far more often when multiple processes access sets of data that are being updated. Systems that must deal with a large number of parallel processes, such as operating system kernels, use thousands of locks and synchronizations that require strict adherence to methods and protocols if such problems as deadlock, starvation, or data corruption are to be avoided.

52.) What is monitor?
Ans:- A monitor is an object or module intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion. That is, at each point in time, at most one thread may be executing any of its methods. This mutual exclusion greatly simplifies reasoning about the implementation of monitors compared to reasoning about parallel code that updates a data structure. Monitors also provide a mechanism for threads to temporarily give up exclusive access, in order to wait for some condition to be met, before regaining exclusive access and resuming their task. Monitors also have a mechanism for signaling other threads that such conditions have been met.

53.) What is the producer-consumer problem (bounded-buffer problem)?

Ans:- The producer-consumer problem (also known as the bounded-buffer problem) is a classical example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue. The producer's job is to generate a piece of data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer when it is full and that the consumer won't try to remove data from an empty buffer. The solution for the producer is to go to sleep or discard data if the buffer is full; the next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer empty; the next time the producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be reached by means of inter-process communication, typically using semaphores. An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers and consumers.

54.)What do you mean by semaphore? Ans:- Semaphores are devices used to help with synchronization. If multiple processes share a common resource, they need a way to use that resource without disrupting each other: each process should be able to read from and write to that resource uninterrupted. A semaphore will either allow or disallow access to the resource, depending on how it is set up. One example setup would be a semaphore which allowed any number of processes to read from the resource, but only one to be writing to it at a time. Binary semaphores can assume only the values 0 and 1; counting semaphores (also called general semaphores) can assume any nonnegative value. The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:

P(S): IF S > 0
          THEN S := S - 1
          ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as follows:

V(S): IF (one or more processes are waiting on S)
          THEN (let one of these processes proceed)
          ELSE S := S + 1

Operations P and V are done as single, indivisible, atomic actions: it is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is thus enforced within P(S) and V(S). If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed; the other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement. Semaphores solve the lost-wakeup problem.

55.)What is starvation? Ans:- Starvation occurs when the scheduler (i.e. the operating system) refuses to give a particular thread any quantity of a particular resource (generally CPU time). If there are too many high-priority threads, a lower-priority thread may be starved. This can have negative impacts, particularly when the lower-priority thread holds a lock on some resource.

56.)What do you mean by critical section problem? Ans:- A section of code, or a collection of operations, in which only one process may be executing at a given time is called a critical section. Consider a system containing n processes {P0, P1, P2, ..., Pn-1}. Each process has a segment of code called a critical section in which the process may be changing common variables, updating a table, writing into files, etc. When such a system runs, only one process may be allowed to execute within its critical section at a time; the execution of critical sections by the processes is mutually exclusive in time.

57.)What do you mean by mutual exclusion? Ans:- If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we could avoid race conditions. Four conditions must hold for a good solution to the critical section problem (mutual exclusion):

1. No two processes may be inside their critical sections at the same moment.
2. No assumptions are made about the relative speeds of processes or the number of CPUs.
3. No process running outside its critical section should block other processes.
4. No process should wait arbitrarily long to enter its critical section.
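The entry/exit protocol around a critical section can be sketched with an ordinary lock. The following is a minimal illustration using Python's threading module; the worker function and iteration counts are hypothetical, not part of the original text:

```python
import threading

counter = 0                 # shared variable updated inside the critical section
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # entry section: acquire exclusive access
            counter += 1    # critical section: update the shared variable
        # exit section: the lock is released when the with-block ends

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increment is lost, because access was exclusive
```

Without the lock, the read-modify-write of `counter += 1` can interleave between threads and increments can be lost, which is exactly the race condition the four conditions above are designed to rule out.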

Mutual exclusion refers to the problem of ensuring that no two processes or threads can be in their critical sections at the same time. Here, a critical section refers to a period of time during which the process accesses a shared resource, such as shared memory.

58.)What is multilevel feedback queue scheduling? Ans:- The multilevel feedback queue scheduling algorithm allows a process to move between queues. It uses many ready queues and associates a different priority with each queue. The algorithm chooses the process with the highest priority from the occupied queues and runs it either preemptively or non-preemptively. If the process uses too much CPU time, it is moved to a lower-priority queue; similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.

59.)What is multilevel queue scheduling? Ans:- This is used in situations where processes are easily divided into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs.

60.)What is FCFS? Ans:- The simplest scheduling algorithm is First Come First Serve (FCFS). Jobs are scheduled in the order they are received. FCFS is non-preemptive. Implementation is easily accomplished by keeping a queue of the processes to be scheduled, or by storing the time each process was received and selecting the process with the earliest time.
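FCFS waiting times follow directly from the arrival order: each job waits for the total burst time of everything queued ahead of it. A small sketch (the burst times below are hypothetical):

```python
def fcfs_waiting_times(burst_times):
    # FCFS: jobs run in arrival order; each job's waiting time is the
    # sum of the burst times of every job that arrived before it.
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return waits

bursts = [24, 3, 3]  # hypothetical CPU bursts, in arrival order
print(fcfs_waiting_times(bursts))                     # [0, 24, 27]
print(sum(fcfs_waiting_times(bursts)) / len(bursts))  # average wait: 17.0
```

Note how arrival order dominates: had the two short jobs arrived first, the average wait would drop sharply. Long jobs arriving early delaying everything behind them is the well-known convoy effect of FCFS.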

61.)What is SJF? Ans:- With Shortest Job First (SJF), the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge of, or estimates of, the time required for each process to complete.

If a shorter process arrives during another process's execution, the currently running process may be interrupted (known as preemption), dividing that process into two separate computing blocks. This creates extra overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead. This algorithm is designed for maximum throughput in most scenarios. Waiting time and response time increase as a process's computational requirements increase; since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than under FIFO, however, since no process has to wait for the termination of the longest process. No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible. Starvation is possible, especially in a busy system with many small processes being run.
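The waiting-time benefit of non-preemptive SJF can be sketched by simply serving the shortest bursts first; the burst times below are hypothetical:

```python
def sjf_waiting_times(burst_times):
    # Non-preemptive SJF: dispatch the shortest job first; each job waits
    # for the total burst time of the shorter jobs that run before it.
    waits, elapsed = [], 0
    for burst in sorted(burst_times):
        waits.append(elapsed)
        elapsed += burst
    return waits

bursts = [24, 3, 3]  # hypothetical CPU bursts, all available at time 0
print(sjf_waiting_times(bursts))                     # [0, 3, 6]
print(sum(sjf_waiting_times(bursts)) / len(bursts))  # average wait: 3.0
```

Serving the same three jobs in the given order first-come-first-served would average (0 + 24 + 27) / 3 = 17.0, illustrating why running short jobs first minimizes average waiting time, while the 24-unit job now waits longest and, in a busy system, could be starved.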

62.)What is RR? Ans:- Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority (also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.

63.)What is mutex? Ans:- Short for mutual exclusion object. In computer programming, a mutex is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex, keeping it from other threads while it is using the resource. The mutex is unlocked when the data is no longer needed or the routine is finished.

64.)What is readers-writers problem? Ans:- In the readers-writers problem, many threads may read a shared resource concurrently, but a writer needs exclusive access. Approach 1: no reader should be kept waiting unless a writer has already obtained permission to write (this may starve writers). Approach 2: if a writer is waiting to write, no new reader may start reading (this may starve readers). A solution to either variant can therefore lead to starvation.

65.)What is long term scheduler? Ans:- The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus this scheduler dictates what processes are to run on a system and the degree of concurrency to be supported at any one time, i.e., whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks; without proper real-time scheduling, modern GUI interfaces would seem sluggish. The long-term queue exists on the hard disk, in virtual memory.

66.)What is mid term scheduler? Ans:- The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The medium-term scheduler may decide to swap out a process which has not been active for some time, has a low priority, is page-faulting frequently, or is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available or when the process has been unblocked and is no longer waiting for a resource.

67.)What is short term scheduler?
Ans:- The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call, or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or medium-term schedulers: a scheduling decision must be made at least after every time slice, and these are very short. This scheduler can be preemptive, meaning it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to force processes off the CPU.
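The preemptive, time-sliced dispatching a short-term scheduler performs under round-robin can be sketched as a queue simulation. The job names, burst times, and quantum below are hypothetical:

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    # Simulate preemptive round-robin dispatching by a short-term scheduler.
    # bursts: {name: required CPU time}; returns {name: completion time}.
    queue = deque(bursts.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one time slice
        clock += run
        if remaining > run:
            # quantum expired: preempt and requeue at the back
            queue.append((name, remaining - run))
        else:
            finish[name] = clock        # job completed within this slice
    return finish

# Three hypothetical jobs, quantum of 4 time units:
print(round_robin_completion({"A": 10, "B": 5, "C": 2}, 4))
# {'C': 10, 'B': 15, 'A': 17}
```

The short job C finishes first even though it arrived last in the queue order, which is why round-robin keeps interactive response times low and is starvation-free: every job gets a slice each cycle.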
