July 2011 Master of Computer Science (MSCCS) Semester 1
MC0070 Operating Systems with Unix, 4 Credits (Book IDs: B0682 & B0683)
Assignment Set 1 (60 Marks)
Roll No. 521126647


Answer all questions. Each question carries TEN marks.

Book ID: B0682

Q.1) Describe the concept of process control in operating systems.

Answer: 1) Process management is an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes, and enable synchronization among processes. To meet these requirements, the OS must maintain a data structure for each process which describes the state and resource ownership of that process and which enables the OS to exert control over each process.

The process manager is one of the four major parts of the operating system. It implements the process abstraction by creating a model for the way the process uses the CPU and any system resources. Much of the complexity of the operating system stems from the need for multiple processes to share the hardware at the same time. As a consequence of this goal, the process manager implements CPU sharing (called scheduling), process synchronization mechanisms, and a deadlock strategy. In addition, the process manager implements part of the operating system's protection and security.

Each process in the system is represented by a data structure called a Process Control Block (PCB), or Process Descriptor in Linux/Unix, which performs much the same function as a traveler's passport. The PCB contains the basic information about the job, including:
- What it is
- Where it is going
- How much of its processing has been completed
- Where it is stored
- How much it has spent in using resources

Process Identification: Each process is uniquely identified by the user's identification and a pointer connecting it to its descriptor.

Process Status: This indicates the current status of the process: READY, RUNNING, BLOCKED, READY SUSPEND, or BLOCKED SUSPEND.

Process State: This contains all of the information needed to indicate the current state of the job. During its lifespan, a process may be in one of the following states (associated with each state is usually a queue on which the process resides):
- Executing: the process is currently running and has control of a CPU
- Waiting: the process is currently able to run, but must wait until a CPU becomes available
- Blocked: the process is currently waiting on I/O, either for input to arrive or output to be sent
- Suspended: the process is currently able to run, but for some reason the OS has not placed the process on the ready queue
- Ready: the process is in memory and will execute given CPU time

Accounting: This contains information used mainly for billing purposes and for performance measurement. It indicates what kind of resources the process has used and for how long.
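To make the descriptor concrete, here is a minimal sketch in C of what a simplified PCB might contain. It is illustrative only: every field name is hypothetical, and real kernels (for example, Linux's struct task_struct) carry far more fields.

/* A simplified, hypothetical Process Control Block. This sketch only
 * mirrors the fields described in the text above. */
enum proc_status {
    READY, RUNNING, BLOCKED, READY_SUSPEND, BLOCKED_SUSPEND
};

struct pcb {
    int pid;                   /* unique process identification */
    enum proc_status status;   /* current status of the process */
    unsigned long pc;          /* saved program counter */
    unsigned long sp;          /* saved stack pointer */
    unsigned long regs[16];    /* other saved register values */
    void *address_space;       /* memory-management information */
    int priority;              /* e.g. the nice value on Unix */
    long cpu_time_used;        /* accounting: what was used, for how long */
    int open_files[16];        /* I/O information: open file descriptors */
    struct pcb *next;          /* next PCB on the queue for this status */
};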

In many modern operating systems, there can be more than one instance of a program loaded in memory at the same time. A number of processes being executed over a period of time, instead of all at exactly the same instant, is called concurrent execution. There are two possible ways for an OS to regain control of the processor during a program's execution so that the OS can perform de-allocation or allocation:

1. The process issues a system call (sometimes called a software interrupt); for example, an I/O request occurs, requesting access to a file on the hard disk.
2. A hardware interrupt occurs; for example, a key was pressed on the keyboard, or a timer runs out (used in pre-emptive multitasking).

The stopping of one process and the starting (or restarting) of another is called a context switch or context change. In many modern operating systems, processes can consist of many sub-processes. This introduces the concept of a thread. A thread may be viewed as a sub-process; that is, a separate, independent sequence of execution within the code of one process. Threads are becoming increasingly important in the design of distributed and client-server systems and in software run on multi-processor systems.

The Process Control Block (PCB, also called Task Controlling Block or Task Struct) is a data structure in the operating system kernel containing the information needed to manage a particular process. Since the PCB contains critical information for the process, it must be kept in an area of memory protected from normal user access. In some operating systems the PCB is placed at the beginning of the kernel stack of the process, since that is a convenient protected location. The PCB is "the manifestation of a process in an operating system"; implementations differ from OS to OS, but in general a PCB will include, directly or indirectly:
- The identifier of the process (a process identifier, or PID)
- Register values for the process, notably the program counter and stack pointer values
- The address space for the process
- Priority (a higher-priority process gets first preference; e.g., the nice value on Unix operating systems)
- Process accounting information, such as when the process was last run, how much CPU time it has accumulated, etc.
- A pointer to the next PCB, i.e., the PCB of the next process to run
- I/O information (I/O devices allocated to this process, list of opened files, etc.)

During a context switch, the running process is stopped and another process is given a chance to run. The kernel must stop the execution of the running process, copy out the values in hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process. A minimal sketch of this register swap follows.
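The sketch below reuses the hypothetical struct pcb from earlier. The two helper functions stand in for the architecture-specific code that copies hardware registers; they are assumptions for illustration, not real kernel APIs.

/* Sketch of a context switch between two processes.
 * save_registers/load_registers are hypothetical helpers for the
 * architecture-specific register copy described in the text. */
extern void save_registers(struct pcb *p);   /* hardware registers -> PCB */
extern void load_registers(struct pcb *p);   /* PCB -> hardware registers */

static struct pcb *current;                  /* PCB of the running process */

void context_switch(struct pcb *next_proc)
{
    save_registers(current);      /* copy out the running process's registers */
    current->status = READY;      /* it may be scheduled again later */
    next_proc->status = RUNNING;
    current = next_proc;
    load_registers(current);      /* resume the new process where it stopped */
}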

Process creation: Operating systems need some way to create processes. In a very simple system designed for running only a single application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be needed present when the system comes up. In general-purpose systems, however, some way is needed to create and terminate processes as needed during operation.
There are four principal events that cause a process to be created:
- System initialization
- Execution of a process-creation system call by a running process
- A user request to create a new process
- Initiation of a batch job

When an operating system is booted, typically several processes are created. Some of these are foreground processes, which interact with a (human) user and perform work for them. Others are background processes, which are not associated with particular users but instead have some specific function. For example, one background process may be designed to accept incoming e-mail, sleeping most of the day but suddenly springing to life when a message arrives. Another background process may be designed to accept incoming requests for web pages hosted on the machine, waking up when a request arrives to service it.

Process creation in UNIX and Linux is done through the fork() or clone() system calls. There are several steps involved in process creation. The first step is validation of whether the parent process has sufficient authorization to create a process. Upon successful validation, the parent process is copied almost entirely, with changes only to the unique process id, parent process, and user space. Each new process gets its own user space. A short example using fork() follows.
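The description of fork() above can be made concrete with a small C program. fork(), wait(), getpid(), and getppid() are standard POSIX calls, so this sketch should compile and run on any Unix or Linux system:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a near-copy of this process */

    if (pid < 0) {                   /* fork failed, e.g. no resources */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: gets its own unique PID */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        exit(0);                     /* normal completion terminates it */
    } else {                         /* parent: fork returned the child's PID */
        wait(NULL);                  /* reap the child so it does not linger */
        printf("parent: child %d has terminated\n", pid);
    }
    return 0;
}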

Process termination: There are many reasons for process termination:
- Batch job issues a halt instruction
- User logs off
- Process executes a service request to terminate
- Error and fault conditions
- Normal completion
- Time limit exceeded
- Memory unavailable
- Bounds violation; for example, attempted access of the (non-existent) 11th element of a 10-element array
- Protection error; for example, attempted write to a read-only file
- Arithmetic error; for example, attempted division by zero
- Time overrun; for example, the process waited longer than a specified maximum for an event
- I/O failure
- Invalid instruction; for example, when a process tries to execute data (text)
- Privileged instruction
- Data misuse
- Operating system intervention; for example, to resolve a deadlock
- Parent request
- Parent terminates, so child processes terminate (cascading termination)
Q.2) Describe the following:

a. Layered Approach

Answer: 2(a) Layered approach: An operating system layer is an implementation of an abstract object made up of data and the operations that manipulate that data. In the layered approach, the operating system is organized as a hierarchy of layers, each one constructed upon the one below it. The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the user interface. The classic example (Dijkstra's THE system) had six layers:

Layer 0 dealt with allocation of the processor, switching between processes when interrupts occurred or timers expired. It also provided the basic multiprogramming of the CPU.

Layer 1 did the memory management. It allocated space for processes in main memory and on a 512K-word drum used for holding parts of processes (pages) for which there was no room in main memory. Above layer 1, processes did not have to worry about whether they were in memory or on the drum; the layer 1 software took care of making sure pages were brought into memory whenever they were needed.

Layer 2 handled communication between each process and the operator console. Above this layer, each process effectively has its own operator console.

Layer 3 took care of managing the I/O devices and buffering the information streams to and from them. Above layer 3, each process could deal with abstract I/O devices with nice properties, instead of real devices with many peculiarities.

Layer 4 was where the user programs were found. They did not have to worry about process, memory, console, or I/O management.

Layer 5 was where the system operator process was located.

The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses only the operations and services of lower-level layers, thereby simplifying debugging and system verification. A layer does not need to know how lower-level operations are implemented; it needs to know only what these operations do. Hence each layer hides the existence of certain data structures, operations, and hardware from higher layers.

The major difficulty with the layered approach involves appropriately defining the various layers: because a layer can use only lower-level layers, careful planning is required.

b. Micro Kernels

Answer: 2(b) A microkernel refers to a structure of an operating system in which all non-essential components are removed from the kernel and implemented as system-level and user-level programs. A microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC). Communication is provided by message passing. If the hardware provides multiple CPU modes, the microkernel is the only software executing at the most privileged level (generally referred to as supervisor or kernel mode). Traditional operating system functions, such as device drivers, protocol stacks, and file systems, are removed from the microkernel to run in user space. In source-code size, microkernels tend to be under 10,000 lines of code as a general rule; MINIX, for example, has around 4,000 lines of code. Kernels larger than 20,000 lines are generally not considered microkernels. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability: since most services run as user-level rather than kernel-level processes, if any service fails, the rest of the operating system remains untouched.

c. Virtual Machines

Answer: 2(c) A virtual machine (VM) is a "completely isolated guest operating system installation within a normal host operating system". Modern virtual machines are implemented with software emulation, hardware virtualization, or (in most cases) both together. A virtual machine is a software implementation of a machine (i.e., a computer) that executes programs like a physical machine. Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine; it cannot break out of its virtual world.

Hardware virtualization is the virtualization of computers or operating systems. It hides the physical characteristics of a computing platform from users, instead showing another, abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but nowadays the terms "hypervisor" or "virtual machine monitor" are preferred.

Full virtualization is a virtualization technique used to provide a certain kind of virtual machine environment, namely one that is a complete simulation of the underlying hardware. Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines, including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine and that is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine and, in particular, any operating system. The obvious test of virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.

Paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar but not identical to that of the underlying hardware. The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations which are substantially more difficult to run in a virtual environment compared to a non-virtualized environment. Paravirtualization provides specially defined 'hooks' that allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). A successful paravirtualized platform may allow the virtual machine monitor (VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine execution inside the virtual guest. Paravirtualization requires the guest operating system to be explicitly ported for the para-API; a conventional OS distribution which is not paravirtualization-aware cannot be run on top of a paravirtualizing VMM. However, even in cases where the operating system cannot be modified, components may be available that enable many of the significant performance advantages of paravirtualization.

In partial virtualization, including address-space virtualization, the virtual machine simulates multiple instances of much of an underlying hardware environment, particularly address spaces. Usually, this means that entire operating systems cannot run in the virtual machine (which would be the sign of full virtualization), but many applications can run. A key form of partial virtualization is address-space virtualization, in which each virtual machine consists of an independent address space. This capability requires address-relocation hardware and has been present in most practical examples of partial virtualization. Partial virtualization is significantly easier to implement than full virtualization. It has often provided useful, robust virtual machines capable of supporting important applications, and it has proven highly successful for sharing computer resources among multiple users.

Application virtualization is an umbrella term that describes software technologies that improve the portability, manageability, and compatibility of applications by encapsulating them from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not. In this context, the term "virtualization" refers to the artifact being encapsulated (the application), which is quite different from its meaning in hardware virtualization, where it refers to the artifact being abstracted (the physical hardware).
Q.3) Memory management is important in operating systems. Discuss the main problems that can occur if memory is managed poorly.

Answer: 3) The part of the operating system which handles the memory-management responsibility is called the memory manager. Since every process must have some amount of primary memory in order to execute, the performance of the memory manager is crucial to the performance of the entire system. Virtual memory refers to the technology in which some space on the hard disk is used as an extension of main memory, so that a user program need not worry if its size exceeds the size of main memory.

For paging memory management, each process is associated with a page table. Each entry in the table contains the frame number of the corresponding page in the virtual address space of the process.

This same page table is also the central data structure for the virtual memory mechanism based on paging, although more facilities are needed (control bits, multi-level page tables, etc.). Segmentation is another popular method for both memory management and virtual memory.

Basic cache structure: The idea of cache memories is similar to virtual memory, in that some active portion of a low-speed memory is stored in duplicate in a higher-speed cache memory. When a memory request is generated, the request is first presented to the cache memory; if the cache cannot respond, the request is then presented to main memory.

Content-Addressable Memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications. It is also known as associative memory, associative storage, or associative array, although the last term is more often used for a programming data structure.

In addition to the responsibility of managing processes, the operating system must efficiently manage the primary memory of the computer. The part of the operating system which handles this responsibility is called the memory manager. Since every process must have some amount of primary memory in order to execute, the performance of the memory manager is crucial to the performance of the entire system. Nutt explains: the memory manager is responsible for allocating primary memory to processes and for assisting the programmer in loading and storing the contents of primary memory. Managing the sharing of primary memory and minimizing memory-access time are the basic goals of the memory manager. A small program showing how a page table translates a virtual address appears below.
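As a rough illustration of the paging mechanism just described, this standalone C program translates a virtual address into a physical one through a toy page table. The table contents and the 4KB page size are made-up example values:

#include <stdio.h>

#define PAGE_SIZE 4096   /* a typical page size, in bytes */

int main(void)
{
    /* Toy page table: the physical frame number for each virtual page. */
    unsigned page_table[] = { 5, 9, 7, 2 };

    unsigned vaddr  = 0x2345;              /* virtual address to translate */
    unsigned page   = vaddr / PAGE_SIZE;   /* which virtual page it falls in */
    unsigned offset = vaddr % PAGE_SIZE;   /* position within that page */
    unsigned paddr  = page_table[page] * PAGE_SIZE + offset;

    printf("virtual 0x%x -> page %u, offset 0x%x -> physical 0x%x\n",
           vaddr, page, offset, paddr);   /* prints physical 0x7345 */
    return 0;
}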

The real challenge of efficiently managing memory is seen in the case of a system which has multiple processes running at the same time. Since primary memory can be space-multiplexed, the memory manager can allocate a portion of primary memory to each process for its own use. However, the memory manager must keep track of which processes are running in which memory locations, and it must also determine how to allocate and de-allocate available memory when new processes are created and when old processes complete execution. While various strategies are used to allocate space to processes competing for memory, three of the most popular are best fit, worst fit, and first fit. Each of these strategies is described below:

Best fit: The allocator places a process in the smallest block of unallocated memory in which it will fit. For example, suppose a process requests 12KB of memory and the memory manager currently has a list of unallocated blocks of 6KB, 14KB, 19KB, 11KB, and 13KB. The best-fit strategy will allocate 12KB of the 13KB block to the process.

Worst fit: The memory manager places a process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the remaining space. Using the same example as above, worst fit will allocate 12KB of the 19KB block to the process, leaving a 7KB block for future use.

First fit: There may be many holes in the memory, so the operating system, to reduce the amount of time it spends analyzing the available spaces, begins at the start of primary memory and allocates memory from the first hole it encounters that is large enough to satisfy the request. Using the same example as above, first fit will allocate 12KB of the 14KB block to the process. The best-fit search is sketched in code below.
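The selection logic for these strategies is simple to express. Here is a hedged C sketch of the best-fit search, using the free-list sizes from the example above (first fit would simply return the first hole that is large enough):

#include <stdio.h>

/* Unallocated block sizes (KB) from the example in the text. */
static int holes[] = { 6, 14, 19, 11, 13 };
static const int nholes = 5;

/* Return the index of the smallest hole that can hold the request,
 * or -1 if no hole is large enough. */
static int best_fit(int request_kb)
{
    int best = -1;
    for (int i = 0; i < nholes; i++)
        if (holes[i] >= request_kb &&
            (best == -1 || holes[i] < holes[best]))
            best = i;
    return best;
}

int main(void)
{
    int i = best_fit(12);   /* the 12KB request from the text */
    if (i != -1)
        printf("best fit: %dKB block, %dKB left over\n",
               holes[i], holes[i] - 12);   /* 13KB block, 1KB left over */
    return 0;
}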

Notice that the best-fit and first-fit strategies both leave a tiny segment of memory unallocated just beyond the new process. Since the amount of memory is small, it is not likely that any new processes can be loaded there. This condition of splitting primary memory into segments as the memory is allocated and de-allocated is known as fragmentation. The worst-fit strategy attempts to reduce the problem of fragmentation by allocating the largest fragments to new processes, so that a larger usable block is left behind.

Another way in which the memory manager enhances the ability of the operating system to support multiple processes running simultaneously is by the use of virtual memory. According to Nutt, virtual memory strategies allow a process to use the CPU when only part of its address space is loaded in primary memory. In this approach, each process's address space is partitioned into parts that can be loaded into primary memory when they are needed and written back to secondary memory otherwise. Another consequence of this approach is that the system can run programs which are actually larger than the primary memory of the system; hence the idea of virtual memory. Brookshear explains how this is accomplished: suppose, for example, that a main memory of 64 megabytes is required but only 32 megabytes is actually available. To create the illusion of the larger memory space, the memory manager would divide the required space into units called pages and store the contents of these pages in mass storage. A typical page size is no more than four kilobytes. As different pages are actually required in main memory, the memory manager would exchange them for pages that are no longer required, and thus the other software units could execute as though there were actually 64 megabytes of main memory in the machine.

In order for this system to work, the memory manager must keep track of all the pages that are currently loaded into primary memory. This information is stored in a page table maintained by the memory manager. A page fault occurs whenever a process requests a page that is not currently loaded into primary memory. To handle page faults, the memory manager takes the following steps (sketched in code after the list):
1. The memory manager locates the missing page in secondary memory.
2. The page is loaded into primary memory, usually causing another page to be unloaded.
3. The page table in the memory manager is adjusted to reflect the new state of the memory.
4. The processor re-executes the instructions which caused the page fault.
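The four steps can be sketched as a fault handler in C. Everything here is schematic: the helper functions and the table layout are hypothetical stand-ins for what a real memory manager would do, not any actual kernel interface.

#include <stdbool.h>

#define NUM_PAGES 1024

struct page_entry {
    unsigned frame;    /* physical frame holding this page, if present */
    bool     present;  /* is the page loaded in primary memory? */
};

static struct page_entry page_table[NUM_PAGES];

/* Hypothetical helpers supplied by the rest of the memory manager. */
extern unsigned free_a_frame(void);   /* step 2: may unload another page */
extern void load_from_disk(unsigned page, unsigned frame);  /* steps 1-2 */

/* Invoked when a process touches a page whose present bit is clear. */
void handle_page_fault(unsigned page)
{
    unsigned frame = free_a_frame();    /* make room, possibly evicting */
    load_from_disk(page, frame);        /* 1-2: locate and load the page */
    page_table[page].frame   = frame;   /* 3: update the page table */
    page_table[page].present = true;
    /* 4: the processor then re-executes the faulting instruction */
}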

Book ID: B0683

Q.4) Discuss the following:

a. File Substitution

Answer: 4(a) It is important to understand how file substitution actually works: the ls command doesn't do the work of file substitution; the shell does. Even though all the previous examples employ the ls command, any command that accepts filenames on the command line can use file substitution. In fact, using the simple echo command is a good way to experiment with file substitution without having to worry about unexpected results. For example:

$ echo p*
p10 p101 p11

When a metacharacter is encountered in a UNIX command, the shell looks for patterns in filenames that match the metacharacter. When a match is found, the shell substitutes the actual filename in place of the string containing the metacharacter, so that the command sees only a list of valid filenames. If the shell finds no filenames that match the pattern, it passes an empty string to the command. The shell can expand more than one pattern on a single line. Therefore, the shell interprets the command

$ ls LINES.* PAGES.*

as

$ ls LINES.dat LINES.idx PAGES.dat PAGES.idx

There are file substitution situations that you should be wary of. You should be careful about the use of whitespace (extra blanks) in a command line. If you enter a command such as ls LINES. * (note the stray space before the asterisk), the results might surprise you: the shell interprets the first parameter as the filename LINES. with no metacharacters and passes it directly on to ls. Next, the shell sees the single asterisk (*) and matches it to any character string, which matches every file in the directory. This is not a big problem if you are simply listing the files, but it could mean disaster if you were using the command to delete data files!

Unusual results can also occur if you use the period (.) in a shell command. Suppose that you are using the $ ls .* command to view the hidden files. What the shell would see after it finishes interpreting the metacharacter is $ ls . .. .profile, which gives you a complete directory listing of both the current and parent directories. When you think about how filename substitution works, you might assume that the default form of the ls command is actually $ ls *. However, in this case the shell passes to ls the names of directories, which causes ls to list all the files in the subdirectories. The actual form of the default ls command is $ ls .

b. I/O Control

Answer: 4(b) Several I/O strategies are used between the computer system and I/O devices, depending on the relative speeds of the computer system and the I/O devices. The simplest strategy is to use the processor itself as the I/O controller, and to require that the device follow a strict order of events under direct program control, with the processor waiting for the I/O device at each step. Another strategy is to allow the processor to be "interrupted" by the I/O devices, and to have a (possibly different) "interrupt handling routine" for each device. This allows for more flexible scheduling of I/O events, as well as more efficient use of the processor. (Interrupt handling is an important component of the operating system.)

A third general I/O strategy is to allow the I/O device, or the controller for the device, access to the main memory. The device would write a block of information in main memory, without intervention from the CPU, and then inform the CPU in some way that that block of memory had been overwritten or read. This might be done by leaving a message in memory, or by interrupting the processor. (This is generally the I/O strategy used by the highest-speed devices: hard disks and the video controller.)

Q.5) Discuss the concept of File substitution with respect to managing data files in UNIX.

Answer: 5) It is important to understand how file substitution actually works. In the earlier examples, the ls command doesn't do the work of file substitution; the shell does. Even though all the previous examples employ the ls command, any command that accepts filenames on the command line can use file substitution. In fact, using the simple echo command is a good way to experiment with file substitution without having to worry about unexpected results. For example:

$ echo p*
p10 p101 p11

When a metacharacter is encountered in a UNIX command, the shell looks for patterns in filenames that match the metacharacter. When a match is found, the shell substitutes the actual filename in place of the string containing the metacharacter, so that the command sees only a list of valid filenames. If the shell finds no filenames that match the pattern, it passes an empty string to the command. The shell can expand more than one pattern on a single line. Therefore, the shell interprets the command

$ ls LINES.* PAGES.*

as

$ ls LINES.dat LINES.idx PAGES.dat PAGES.idx

There are file substitution situations that you should be wary of. You should be careful about the use of whitespace (extra blanks) in a command line. If you enter the following command (note the stray space before the asterisk), the results might surprise you:

$ ls LINES. *
LINES.: not found
21x         LINES.dat   PAGES.dat   p10         p11
Acct.pds    LINES.idx   PAGES.idx   p101        t11
marsha.pds  z11

What has happened is that the shell interpreted the first parameter as the filename LINES. with no metacharacters and passed it directly on to ls. Next, the shell saw the single asterisk (*) and matched it to any character string, which matches every file in the directory. This is not a big problem if you are simply listing the files, but it could mean disaster if you were using the command to delete data files!

Unusual results can also occur if you use the period (.) in a shell command. Suppose that you are using the $ ls .* command to view the hidden files. What the shell would see after it finishes interpreting the metacharacter is $ ls . .. .profile, which gives you a complete directory listing of both the current and parent directories. When you think about how filename substitution works, you might assume that the default form of the ls command is actually $ ls *

However, in this case the shell passes to ls the names of directories, which causes ls to list all the files in the subdirectories. The actual form of the default ls command is $ ls .
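A quick way to convince yourself that the shell, not the command, performs the substitution is a trivial C program that prints whatever arguments it receives (the file and program names here are made up for the illustration). Compile it and run it as ./showargs p*; it prints the already-expanded filenames, which is exactly what ls would be given.

/* showargs.c: print each command-line argument on its own line.
 * The shell expands any metacharacters before main() ever runs. */
#include <stdio.h>

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}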

Q.6) How do we make calculations using dc and bc utilities? Describe with at least two examples in each case.

Answer: 6) UNIX has two calculator programs that you can use from the command line: dc and bc. The dc (desk calculator) program uses Reverse Polish Notation (RPN), familiar to everyone who has used Hewlett-Packard pocket calculators, and the bc (basic calculator) program uses the more familiar algebraic notation. Both programs perform essentially the same calculations.

Calculating with bc: The basic calculator, bc, can do calculations to any precision that you specify. Therefore, if you know how to calculate pi and want to know its value to 20, 50, or 200 places, for example, use bc. This tool can add, subtract, multiply, divide, and raise a number to a power. It can take square roots, compute sines and cosines of angles, calculate exponentials and logarithms, and handle arctangents and Bessel functions. In addition, it contains a programming language whose syntax looks much like that of the C programming language. This means that you can use the following:
- Simple and array variables
- Expressions
- Tests and loops
- Functions that you define

Also, bc can take input from the keyboard, from a file, or from both, and you exit from bc by typing Ctrl+D. Here are some examples of bc receiving input from the keyboard:

$ bc
2*3
Result: 6

To do multiplication, all you have to do is enter the two values with an asterisk between them. You can then continue giving bc more calculations to do. Here's a simple square root calculation (as a continuation of the original bc command):

sqrt(11)
Result: 3

(With the default scale of zero decimal places, bc truncates the square root of 11 down to the integer 3.)

Calculating with dc: As mentioned earlier, the desk calculator, dc, uses RPN, so unless you're comfortable with that notation, you should stick with bc. Also, dc does not provide a built-in programming language, built-in math functions, or the capability to define functions. It can, however, take its input from a file. If you are familiar with stack-oriented calculators, you'll find that dc is an excellent tool. It can do all the calculations that bc can, and it also lets you manipulate the stack directly. To display values, you must enter the p command.

For example, to add and print the sum of 5 and 9, enter:

$ dc
5
9
+p
Result: 14

(dc pushes 5 and 9 onto the stack; + replaces them with their sum, and p prints the value on top of the stack.)
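Since the question asks for at least two examples of each, here is one more for each calculator; the output values are standard, though display details can vary slightly between implementations. With bc, the scale variable sets the number of decimal places:

$ bc
scale=4
10/3
Result: 3.3333

With dc, multiplication works the same stack-oriented way as addition:

$ dc
2
3
*p
Result: 6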
