INTRODUCTION TO COMPUTERS

TRENDS IN COMPUTER SYSTEMS
Today's computer systems come in a variety of sizes, shapes, and computing capabilities. Rapid hardware and software developments and changing end user needs continue to drive the emergence of new models of computers, from the smallest hand-held personal digital assistant for end users to the largest multiple-CPU mainframe for the enterprise. Categories such as mainframes, midrange computers, and microcomputers are still used to help us express the relative processing power and number of end users that can be supported by different types of computers. In addition, experts continue to predict the merging or disappearance of several computer categories. They feel, for example, that many midrange and mainframe systems have been made obsolete by the power and versatility of client/server networks of end user microcomputers and servers.

COMPUTER GENERATIONS
It is important to realize that major changes and trends in computer systems have occurred during the major stages, or generations, of computing, and will continue into the future. The first generation of computers developed in the early 1950s, the second generation blossomed during the late 1950s, the third generation took computing into the 1970s, and the fourth generation has been the computer technology of the 1980s and 1990s. A fifth generation of computers that accelerates the trends of the previous generations is expected to evolve as we enter the 21st century. Notice that computers continue to become smaller, faster, more reliable, less costly to purchase and maintain, and more interconnected within computer networks.
First-generation computing involved massive computers using hundreds or thousands of vacuum tubes for their processing and memory circuitry. These large computers generated enormous amounts of heat, and their vacuum tubes had to be replaced frequently. Thus, they had large electrical power, air conditioning, and maintenance requirements. First-generation computers had main memories of only a few thousand characters and millisecond processing speeds. They used magnetic drums or tape for secondary storage and punched cards or paper tape as input and output media.
Second-generation computing used transistors and other solid-state, semiconductor devices that were wired to circuit boards in the computers. Transistorized circuits were much smaller and much more reliable, generated little heat, were less expensive, and required less power than vacuum tubes. Tiny magnetic cores were used for the computer's memory, or internal storage. Many second-generation computers had main memory capacities of less than 100 kilobytes and microsecond processing speeds. Removable magnetic disk packs were introduced, and magnetic tape emerged as the major input, output, and secondary storage medium for large computer installations.
Third-generation computing saw the development of computers that used integrated circuits, in which thousands of transistors and other circuit elements are etched on tiny chips of silicon. Main memory capacities increased to several megabytes and processing speeds jumped to millions of instructions per second (MIPS) as telecommunications capabilities became common. This made it possible for operating system programs to come into widespread use that automated and supervised the activities of many types of peripheral devices and the processing of several programs at the same time by mainframe computers, frequently involving networks of users at remote terminals.
Integrated circuit technology also made possible the development and widespread use of small computers called minicomputers in the third computer generation.
Fourth-generation computing relies on the use of LSI (large-scale integration) and VLSI (very-large-scale integration) technologies that cram hundreds of thousands or millions of transistors and other circuit elements on each chip. This enabled the development of microprocessors, in which all of the circuits of a CPU are contained on a single chip with processing speeds of millions of instructions per second. Main memory capacities ranging from a few megabytes to several gigabytes can also be achieved by memory chips that replaced magnetic core memories. Microcomputers, which use microprocessor CPUs and a variety of peripheral devices and easy-to-use software packages to form small personal computer (PC) systems or client/server networks of linked PCs and servers, are a hallmark of the fourth generation of computing, which accelerated the downsizing of computing systems.
Whether we are moving into a fifth generation of computing is a subject of debate, since the concept of generations may no longer fit the continual, rapid changes occurring in computer hardware, software, data, and networking technologies. But in any case, we can be sure that progress in computing will continue to accelerate, and that the development of Internet-based technologies and applications will be one of the major forces driving computing into the 21st century.

TYPES OF COMPUTER SYSTEMS

Microcomputer Systems
Microcomputers are the most important category of computer systems for end users. Though usually called a personal computer, or PC, a microcomputer is much more than a small computer for use by an individual. The computing power of microcomputers now exceeds that of the mainframes of previous computer generations at a fraction of their cost. Thus, they have become powerful networked professional workstations for end users in business.
Microcomputers come in a variety of sizes and shapes for a variety of purposes. For example, PCs are available as handheld, notebook, laptop, portable, desktop, and floor-standing models. Or, based on their use, they include home, personal, professional, workstation, and multi-user systems. Most microcomputers are desktops designed to fit on an office desk, or notebooks for those who want a small, portable PC for their work activities. Some microcomputers are powerful workstation computers (technical workstations) that support applications with heavy mathematical computing and graphics display demands such as computer-aided design (CAD) in engineering, or investment and portfolio analysis in the securities industry. Other microcomputers are used as network servers. They are usually more powerful microcomputers that coordinate telecommunications and resource sharing in small local area networks (LANs), and Internet and intranet Web sites. Another important microcomputer category includes handheld microcomputer devices known as personal digital assistants (PDAs), designed for convenient mobile communications and computing. PDAs use touch screens, pen-based handwriting recognition, or keyboards to help mobile workers send and receive E-mail and exchange information such as appointments, to-do lists, and sales contacts with their desktop PCs or Web servers.

Multimedia Systems
Multimedia PCs are designed to present you with information in a variety of media, including text and graphics displays, voice and other digitized audio, photographs, animation, and video clips. Mention multimedia, and many people think of computer video games, multimedia encyclopedias, educational videos, and multimedia home pages on the World Wide Web. However, multimedia systems are widely used in business for training employees, educating customers, making sales presentations, and adding impact to other business presentations. The basic hardware and software requirements of a multimedia computer system depend on whether you wish to create as well as enjoy multimedia presentations. Owners of low-cost multimedia PCs marketed for home use do not need authoring software or high-powered hardware capacities in order to enjoy multimedia games and other entertainment and educational multimedia products.
These computers come equipped with a CD-ROM drive, stereo speakers, additional memory, a high-performance processor, and other multimedia processing capabilities. People who want to create their own multimedia productions may have to spend several thousand dollars to put together a high-performance multimedia authoring system. This includes a high-resolution color graphics monitor, sound and video capture boards, a high-performance microprocessor with multimedia capabilities, additional megabytes of memory, and several gigabytes of hard disk capacity. Sound cards and video capture boards are circuit boards that contain digital signal processors (DSPs) and additional megabytes of memory for digital processing of sound and video. A digital camera, digital video camcorder, optical scanner, and software such as authoring tools and programs for image editing and graphics creation can add several thousand dollars to the start-up costs of a multimedia authoring system.

Midrange Computer Systems
Midrange computers, including minicomputers and high-end network servers, are multi-user systems that can manage networks of PCs and terminals. Though not as powerful as mainframe computers, they are less costly to buy, operate, and maintain than mainframe systems, and thus meet the computing needs of many organizations. Midrange computers first became popular as minicomputers for scientific research, instrumentation systems, and industrial process monitoring and control. Minicomputers could easily handle such uses because these applications are narrow in scope and do not demand the processing versatility of mainframe systems. Thus, midrange computers serve as industrial process-control and manufacturing plant computers, and they still play a major role in computer-aided manufacturing (CAM). They can also take the form of powerful technical workstations for computer-aided design (CAD) and other computation- and graphics-intensive applications. Midrange computers are also used as front-end computers to assist mainframe computers in telecommunications processing and network management. Midrange computers have become popular as powerful network servers to help manage large Internet Web sites, corporate intranets and extranets, and client/server networks. Electronic commerce and other business uses of the Internet are popular high-end server applications, as are integrated enterprise-wide manufacturing, distribution, and financial applications. Other applications, like data warehouse management, data mining, and online analytical processing, are also contributing to the demand for high-end server systems.

Mainframe Computer Systems
Mainframe computers are large, fast, and powerful computer systems. For example, mainframes can process hundreds of millions of instructions per second (MIPS). Mainframes also have large primary storage capacities. Their main memory capacity can range from hundreds of megabytes to many gigabytes of primary storage. And mainframes have slimmed down drastically in the last few years, dramatically reducing their air-conditioning needs, electrical power consumption, and floor space requirements, and thus their acquisition and operating costs. Most of these improvements are the result of a move from water-cooled mainframes to a new CMOS air-cooled technology for mainframe systems. Thus, mainframe computers continue to handle the information processing needs of major corporations and government agencies with many employees and customers or with complex computational problems. For example, major international banks, airlines, oil companies, and other large corporations process millions of sales transactions and customer inquiries each day with the help of large mainframe systems. Mainframes are still used for computation-intensive applications such as analyzing seismic data from oil field explorations or simulating flight conditions in designing aircraft. Mainframes are also widely used as superservers for the large client/server networks and high-volume Internet Web sites of large companies.

Supercomputer Systems
The term supercomputer describes a category of extremely powerful computer systems specifically designed for scientific, engineering, and business applications requiring extremely high speeds for massive numeric computations. The market for supercomputers includes government research agencies, large universities, and major corporations.
They use supercomputers for applications such as global weather forecasting, military defense systems, computational cosmology and astronomy, microprocessor research and design, large-scale data mining, and so on. Supercomputers use parallel processing architectures of interconnected microprocessors (which can execute many instructions at the same time in parallel). They can perform arithmetic calculations at speeds of billions of floating-point operations per second (gigaflops).
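To make the gigaflops figure concrete, the back-of-the-envelope sketch below estimates how long machines with different sustained ratings would take to finish a fixed amount of floating-point work. The operation count and the machine ratings are hypothetical examples, not figures from the text.

```python
# Rough illustration of what a "gigaflops" rating means for run time.
# The workload size and machine ratings below are invented for the example.

def seconds_to_finish(total_flops, gigaflops_rating):
    """Time in seconds to execute total_flops floating-point operations
    on a machine that sustains gigaflops_rating billion operations per second."""
    return total_flops / (gigaflops_rating * 1e9)

# A hypothetical simulation step needing 5 trillion floating-point operations:
work = 5e12

for rating in (1, 10, 100):   # machines rated at 1, 10, and 100 gigaflops
    print(f"{rating:>4} gigaflops -> {seconds_to_finish(work, rating):8.1f} seconds")
```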

COMPUTER SYSTEM CONCEPTS AND COMPONENTS

The Computer System Concept
A computer is more than a high-powered collection of electronic devices performing a variety of information processing chores. A computer is a system, an interrelated combination of components that performs the basic system functions of input, processing, output, storage, and control, thus providing end users with a powerful information processing tool. Understanding the computer as a computer system is vital to the effective use and management of computers. A computer is a system of hardware devices organized according to the following system functions.
Input: The input devices of a computer system include keyboards, touch screens, pens, electronic mice, optical scanners, and so on.
Processing: The central processing unit (CPU) is the main processing component of a computer system. (In microcomputers, it is the main microprocessor.) In particular, the electronic circuits of the arithmetic-logic unit, one of the CPU's major components, perform the arithmetic and logic functions required in computer processing.
Output: The output devices of a computer system include video display units, printers, audio response units, and so on. They convert electronic information produced by the computer system into human-intelligible form for presentation to end users.
Storage: The storage function of a computer system takes place in the storage circuits of the computer's primary storage unit, or memory, and in secondary storage devices such as magnetic disk and tape units. These devices store data and program instructions needed for processing.
Control: The control unit of the CPU is the control component of a computer system. Its circuits interpret computer program instructions and transmit directions to the other components of the computer system.

The Central Processing Unit
The central processing unit is the most important hardware component of a computer system. It is also known as the CPU, the central processor or instruction processor, and the main microprocessor in a microcomputer. Conceptually, the circuitry of a CPU can be subdivided into two major subunits: the arithmetic-logic unit and the control unit. The CPU also includes circuitry for devices such as registers and cache memory for high-speed, temporary storage of instructions and data, and for input/output and telecommunications support. The control unit obtains instructions from software segments stored in the primary storage unit and interprets them. Then it transmits electronic signals to the other components of the computer system to perform required operations. The arithmetic-logic unit performs required arithmetic and comparison operations. A computer can make logical changes from one set of program instructions to another (e.g., overtime pay versus regular pay calculations) based on the results of comparisons made in the ALU during processing.

Main Memory and Primary Storage Unit
A computer's primary storage unit is commonly called main memory. It holds data and program instructions between processing steps and supplies them to the control unit and arithmetic-logic unit during processing. Most of a computer's memory consists of microelectronic semiconductor memory chips known as RAM (random access memory). The contents of these memory chips can be instantly changed to store new data. Other, more permanent memory chips called ROM (read only memory) may also be used.
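As a rough illustration of how the control unit, arithmetic-logic unit, and main memory cooperate, here is a minimal fetch-decode-execute sketch. The three-instruction "machine language" is invented purely for the example and is not any real CPU's instruction set.

```python
# A toy processor: the control unit fetches an instruction from "memory",
# decodes it, and asks the "ALU" to perform the arithmetic.
# Instruction format (invented for illustration): (operation, operand).

memory_program = [           # primary storage holding the program
    ("LOAD", 8),             # put 8 into the accumulator register
    ("ADD", 5),              # accumulator = accumulator + 5
    ("SUB", 3),              # accumulator = accumulator - 3
    ("HALT", None),
]

def alu(operation, a, b):
    """Arithmetic-logic unit: performs arithmetic on two values."""
    return {"ADD": a + b, "SUB": a - b}[operation]

accumulator = 0
program_counter = 0          # the control unit tracks the next instruction

while True:
    operation, operand = memory_program[program_counter]    # fetch
    program_counter += 1
    if operation == "HALT":                                 # decode / control
        break
    elif operation == "LOAD":
        accumulator = operand
    else:
        accumulator = alu(operation, accumulator, operand)  # execute in the ALU

print("Result left in the accumulator:", accumulator)       # prints 10
```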

Secondary Storage Devices
Secondary storage devices like magnetic disks and optical disks are used to store data and programs and thus greatly enlarge the storage capacities of a computer system. Also, since memory circuits typically lose their contents when electric power is turned off, most secondary storage media provide a more permanent type of storage. However, the contents of hard disk drives, floppy disks, CD-ROM disks, and other secondary storage media cannot be processed without first being brought into memory. Thus, secondary storage devices play a supporting role to the primary storage of a computer system.

INPUT TECHNOLOGIES AND TRENDS
You can now enter data and commands directly and easily into a computer system through pointing devices like electronic mice and touch pads, and technologies like optical scanning, handwriting recognition, and voice recognition. These developments have made it unnecessary to always record data on paper source documents (such as sales order forms, for example) and then keyboard the data into a computer in an additional data entry step. Further improvements in voice recognition and other technologies should enable an even more natural user interface in the future.
Keyboards: Keyboards are still the most widely used devices for entering data and text into computer systems. However, pointing devices are a better alternative for issuing commands, making choices, and responding to prompts displayed on your video screen. They work with your operating system's graphical user interface (GUI), which presents you with icons, menus, windows, buttons, bars, and so on, for your selection. For example, pointing devices such as electronic mice and touch pads allow you to easily choose from menu selections and icon displays using point-and-click or point-and-drag methods.
Mouse: The electronic mouse is the most popular pointing device used to move the cursor on the screen, as well as to issue commands and make icon and menu selections. By moving the mouse on a desktop or pad, you can move the cursor onto an icon displayed on the screen. Pressing buttons on the mouse activates the various activities represented by the icon selected.
Trackball: The trackball, pointing stick, and touch pad are other pointing devices most often used in place of the mouse. A trackball is a stationary device related to the mouse. You turn a roller ball with only its top exposed outside its case to move the cursor on the screen. A pointing stick (also called a trackpoint) is a small button-like device, sometimes likened to the eraser head of a pencil. It is usually centered one row above the space bar of a keyboard. The cursor moves in the direction of the pressure you place on the stick. The touch pad is a small rectangular touch-sensitive surface usually placed below the keyboard. The cursor moves in the direction your finger moves on the pad. Trackballs, pointing sticks, and touch pads are easier to use than a mouse for portable computer users and are thus built into most notebook computer keyboards.
Touch Screen: Touch screens are devices that allow you to use a computer by touching the surface of its video display screen. Some touch screens emit a grid of infrared beams, sound waves, or a slight electric current that is broken when the screen is touched. The computer senses the point in the grid where the break occurs and responds with an appropriate action. For example, you can indicate your selection on a menu display by just touching the screen next to that menu item.
Pen-Based Computing: Pen-based computing technologies are being used in many hand-held computers and personal digital assistants. These small PCs and PDAs contain fast processors and software that recognizes and digitizes handwriting, hand printing, and hand drawing. They have a pressure-sensitive layer like a graphics pad under their slate-like liquid crystal display (LCD) screen.
So instead of writing on a paper form fastened to a clipboard or using a keyboard device, you can use a pen to make selections, send E-mail, and enter handwritten data directly into a computer. A variety of other pen-like devices is available. One example is the digitizer pen and graphics tablet. You can use the digitizer pen as a pointing device, or use it to draw or write on the pressure-sensitive surface of the graphics tablet. Your handwriting or drawing is digitized by the computer, accepted as input, displayed on its video screen, and entered into your application.
Voice Recognition and Response: Voice recognition promises to be the easiest method for data entry, word processing, and conversational computing, since speech is the easiest, most natural means of human communication. Voice input has now become technologically and economically feasible for a variety of applications. Early voice recognition products used discrete speech recognition, where you have to pause after each spoken word. New continuous speech recognition (CSR) software recognizes continuous, conversationally paced speech.
Optical Scanning: Optical scanning devices read text or graphics and convert them into digital input for your computer. Thus, optical scanning enables the direct entry of data from source documents into a computer system. For example, you can use a compact desktop scanner to scan pages of text and graphics into your computer for desktop publishing and Web publishing applications. Or you can scan documents of all kinds into your system and organize them into folders as part of a document management library system for fast reference or retrieval.
Other Input Technologies: Magnetic stripe technology is a familiar form of data entry that helps computers read credit cards. The dark magnetic stripe on the back of such cards is the same iron oxide coating as on magnetic tape. Customer account numbers can be recorded on the magnetic stripe so it can be read by bank ATMs, credit card authorization terminals, and many other types of magnetic stripe readers. Smart cards that embed a microprocessor chip and several kilobytes of memory into debit, credit, and other cards are popular in Europe, and are becoming available in the United States. Smart cards are widely used to make payments in parking meters, vending machines, newsstands, pay telephones, and retail stores. Digital cameras represent another fast-growing set of input technologies. Digital still cameras and digital video cameras (digital camcorders) enable you to shoot, store, and download still photos or full-motion video with audio into your PC. Then you can use image-editing software to edit and enhance the digitized images and include them in newsletters, reports, multimedia presentations, and Web pages.

OUTPUT TECHNOLOGIES AND TRENDS
Computers provide information to you in a variety of forms. Output media and methods have developed over the generations of computing. Video displays and printed documents have been, and still are, the most common forms of output from computer systems. But other natural and attractive output technologies such as voice response systems and multimedia output are increasingly found along with video displays in business applications.
Video Output: Video displays are the most common type of computer output. Most desktop computers rely on video monitors that use a cathode ray tube (CRT) technology similar to the picture tubes used in home TV sets. Usually, the clarity of the video display depends on the type of video monitor you use and the graphics circuit board installed in your computer. These can provide a variety of graphics modes of increasing capability. A high-resolution, flicker-free monitor is especially important if you spend a lot of time viewing multimedia on CDs or the Web, or the complex graphical displays of many software packages. The biggest use of liquid crystal displays (LCDs) is to provide a visual display capability for portable microcomputers and PDAs. LCD displays need significantly less electric current and provide a thin, flat display. Advances in technology such as active matrix and dual scan capabilities have improved the clarity of LCD displays.
Printed Output: Printing information on paper is still the most common form of output after video displays. Thus, most personal computer systems rely on an inkjet or laser printer to produce permanent (hard copy) output in high-quality printed form. Printed output is still a common form of business communications, and is frequently required for legal documentation.
Thus, computers can produce printed reports and correspondence, documents such as sales invoices, payroll checks, and bank statements, and printed versions of graphics displays. Inkjet printers, which spray ink onto a page one line at a time, have become the most popular, low-cost printers for microcomputer systems. They are quiet, produce several pages per minute of high-quality output, and can print both black-and-white and high-quality color graphics. Laser printers use an electrostatic process similar to a photocopying machine to produce many pages per minute of high-quality black-and-white output. More expensive color laser printers and multifunction inkjet and laser models that print, fax, scan, and copy are other popular choices for business offices.

STORAGE TRENDS
Data and information must be stored until needed using a variety of storage methods. There are many types of storage media and devices.
Direct and Sequential Access: Primary storage media such as semiconductor memory chips are called direct access or random access memories (RAM). Magnetic disk devices are frequently called direct access storage devices (DASDs). On the other hand, media such as magnetic tape are known as sequential access devices. The terms direct access and random access describe the same concept. They mean that an element of data or instructions (such as a byte or word) can be directly stored and retrieved by selecting and using any of the locations on the storage media. Sequential access storage media such as magnetic tape do not have unique storage addresses that can be directly addressed. Instead, data must be stored and retrieved using a sequential or serial process. Data are recorded one after another in a predetermined sequence (such as in numeric order) on a storage medium.
Semiconductor Memory: The primary storage (main memory) of your computer consists of microelectronic semiconductor memory chips, supplemented by external cache memory of 256 or 512 kilobytes to help your microprocessor work faster. Some of the major attractions of semiconductor memory are its small size, great speed, and shock and temperature resistance. One major disadvantage of most semiconductor memory is its volatility. Uninterrupted electric power must be supplied or the contents of memory will be lost. Therefore, emergency transfer to other devices or standby electrical power (through battery packs or emergency generators) is required if data are to be saved. Another alternative is to permanently burn in the contents of semiconductor devices so that they cannot be erased by a loss of power. Thus, there are two basic types of semiconductor memory: random access memory (RAM) and read only memory (ROM).
RAM: random access memory. These memory chips are the most widely used primary storage medium. Each memory position can be both sensed (read) and changed (written), so it is also called read/write memory. This is a volatile memory.
ROM: read only memory. Nonvolatile random access memory chips are used for permanent storage. ROM can be read but not erased or overwritten. Frequently used control instructions in the control unit and programs in primary storage (such as parts of the operating system) can be permanently burned in to the storage cells during manufacture. This is sometimes called firmware. Variations include PROM (programmable read only memory) and EPROM (erasable programmable read only memory), which can be permanently or temporarily programmed after manufacture.
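The contrast between direct (random) access and sequential access can be sketched in a few lines of code. The tiny "disk" and "tape" below are invented stand-ins for the idea, not real device interfaces.

```python
# Direct access: any storage location can be reached immediately by its address.
# Sequential access: records must be read one after another until the wanted
# record turns up. The data structures here are illustrative only.

disk = {0: "payroll", 1: "invoices", 2: "ledger", 3: "backups"}   # addressed locations
tape = ["payroll", "invoices", "ledger", "backups"]               # records in order

# Direct access: jump straight to address 2.
print("direct access     ->", disk[2])

# Sequential access: read past every earlier record to reach the third one.
reads = 0
for record in tape:
    reads += 1
    if record == "ledger":
        break
print(f"sequential access -> {record} (after reading {reads} records)")
```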

Magnetic Disk Storage
Magnetic disks are the most common form of secondary storage for your computer system. That's because they provide fast access and high storage capacities at a reasonable cost. Magnetic disk drives contain metal disks that are coated on both sides with an iron oxide recording material. Several disks are mounted together on a vertical shaft, which typically rotates the disks at speeds of 3,600 to 7,600 revolutions per minute (rpm). Electromagnetic read/write heads are positioned by access arms between the slightly separated disks to read and write data on concentric, circular tracks. Data are recorded on tracks in the form of tiny magnetized spots to form the binary digits of common computer codes. Thousands of bytes can be recorded on each track, and there are several hundred data tracks on each disk surface, thus providing you with billions of storage positions for your software and data.
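Multiplying out those figures shows where the large capacities come from. The specific numbers below are one plausible combination consistent with the text's "thousands of bytes per track" and "several hundred tracks per surface", not the geometry of any particular drive.

```python
# Rough disk-capacity arithmetic: bytes per track x tracks per surface x surfaces.
# All figures are illustrative round numbers, not a real drive specification.

bytes_per_track    = 50_000   # "thousands of bytes can be recorded on each track"
tracks_per_surface = 800      # "several hundred data tracks on each disk surface"
surfaces           = 10       # e.g., five platters recorded on both sides

capacity_bytes = bytes_per_track * tracks_per_surface * surfaces
print(f"{capacity_bytes:,} bytes  (~{capacity_bytes / 1e6:.0f} megabytes)")
# That is hundreds of megabytes for this modest geometry; denser recording
# (thousands of tracks per surface) pushes the count of individually
# addressable byte positions into the billions.
```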

Types of Magnetic Disks
There are several types of magnetic disk arrangements, including removable disk cartridges as well as fixed disk units. Removable disk devices are popular because they are transportable and can be used to store backup copies of your data offline for convenience and security.
Floppy disks, or magnetic diskettes, consist of polyester film disks covered with an iron oxide compound. A single disk is mounted and rotates freely inside a protective flexible or hard plastic jacket, which has access openings to accommodate the read/write head of a disk drive unit. The 3.5-inch floppy disk, with a capacity of 1.44 megabytes, is the most widely used version, with the newer LS-120 technology offering 120 megabytes of storage.
Hard disk drives combine magnetic disks, access arms, and read/write heads into a sealed module. This allows higher speeds, greater data-recording densities, and closer tolerances within a sealed, more stable environment. Fixed or removable disk cartridge versions are available. Capacities of hard drives range from several hundred megabytes to many gigabytes of storage.

Magnetic Tape Storage
Magnetic tape is still being used as a secondary storage medium in business applications. The read/write heads of magnetic tape drives record data in the form of magnetized spots on the iron oxide coating of the plastic tape. Magnetic tape devices include tape reels and cartridges in mainframes and midrange systems, and small cassettes or cartridges for PCs. Magnetic tape cartridges have replaced tape reels in many applications, and can hold over 200 megabytes. One growing business application of magnetic tape involves the use of 36-track magnetic tape cartridges in robotic automated drive assemblies that can hold hundreds of cartridges. These devices serve as slower, but lower-cost, storage to supplement magnetic disks to meet massive data warehouse and other business storage requirements. Other major applications for magnetic tape include long-term archival storage and backup storage for PCs and other systems.

Optical Disk Storage
Optical disks are a fast-growing storage medium. The version for use with microcomputers is called CD-ROM (compact disk read only memory). CD-ROM technology uses 12-centimeter (4.7-inch) compact disks (CDs) similar to those used in stereo music systems. Each disk can store more than 600 megabytes. That's the equivalent of over 400 1.44-megabyte floppy disks or more than 300,000 double-spaced pages of text. A laser records data by burning permanent microscopic pits in a spiral track on a master disk from which compact disks can be mass-produced. Then CD-ROM disk drives use a laser device to read the binary codes formed by those pits. CD-R (compact disk recordable) is another optical disk technology. It enables computers with CD-R disk drive units to record their own data once on a CD, and then be able to read the data indefinitely. The major limitation of CD-ROM and CD-R disks is that recorded data cannot be erased. However, CD-RW (CD-rewritable) optical disk systems have now become available which record and erase data by using a laser to heat a microscopic point on the disk's surface. In CD-RW versions using magneto-optical technology, a magnetic coil changes the spot's reflective properties from one direction to another, thus recording a binary one or zero. A laser device can then read the binary codes on the disk by sensing the direction of reflected light. Optical disk capacities and capabilities have increased dramatically with the emergence of an optical disk technology called DVD (digital video disk or digital versatile disk), which can hold from 3.0 to 8.5 gigabytes of multimedia data on each side of a compact disk. The large capacities and high-quality images and sound of DVD technology are expected to eventually replace CD-ROM and CD-RW technologies for data storage, and promise to accelerate the use of DVD drives for multimedia products that can be used in both computers and home entertainment systems.
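The equivalences quoted for CD-ROM capacity are easy to check. The page-size assumption below (about 2,000 characters on a double-spaced page) is ours, not the text's.

```python
# Checking the CD-ROM capacity comparisons quoted above.
cd_rom_mb      = 600       # "each disk can store more than 600 megabytes"
floppy_mb      = 1.44      # 3.5-inch floppy disk capacity
bytes_per_page = 2_000     # assumed size of one double-spaced page of text

floppies = cd_rom_mb / floppy_mb
pages    = cd_rom_mb * 1_000_000 / bytes_per_page

print(f"one CD-ROM ~ {floppies:.0f} floppy disks")        # about 417
print(f"one CD-ROM ~ {pages:,.0f} double-spaced pages")   # about 300,000
```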

SOFTWARE TRENDS
First, there has been a major trend away from custom-designed programs developed by the professional programmers of an organization. Instead, the trend is toward the use of off-the-shelf software packages acquired by end users from software vendors. This trend increased dramatically with the development of relatively inexpensive and easy-to-use application software packages and multipurpose software suites for microcomputers. The trend has accelerated recently, as software packages are designed with networking capabilities and collaboration features that optimize their usefulness for end users and workgroups on the Internet and corporate intranets and extranets.
Second, there has been a steady trend away from (1) technical, machine-specific programming languages using binary-based or symbolic codes, and (2) procedural languages, which use brief statements and mathematical expressions to specify the sequence of instructions a computer must perform. Instead, the trend is toward the use of a visual graphic interface for object-oriented programming, or toward nonprocedural natural languages for programming that are closer to human conversation. This trend accelerated with the creation of easy-to-use, nonprocedural fourth-generation languages (4GLs). It continues to grow as developments in object technology, graphics, and artificial intelligence produce natural language and graphical user interfaces that make both programming tools and software packages easier to use.
In addition, artificial intelligence features are now being built into a new generation of expert-assisted software packages. For example, many software suites provide intelligent help features called wizards that help you perform common software functions like graphing parts of a spreadsheet or generating reports from a database. Other software packages use capabilities called intelligent agents to perform activities based on instructions from a user. For example, some electronic mail packages can use an intelligent agent capability to organize, send, and screen E-mail messages for you. These major trends seem to be converging to produce a fifth generation of powerful, multipurpose, expert-assisted, and network-enabled software packages with natural language and graphical interfaces to support the productivity and collaboration of both end users and IS professionals.

Application Software for End Users
Application software includes a variety of programs that can be subdivided into general-purpose and application-specific categories. Thousands of application-specific software packages are available to support specific applications of end users in business and other fields. For example, application-specific packages in business support managerial, professional, and business uses such as transaction processing, decision support, accounting, sales management, investment analysis, and electronic commerce. Application-specific software for science and engineering plays a major role in the research and development programs of industry and the design of efficient production processes for high-quality products. Other software packages help end users with personal finance and home management, or provide a wide variety of entertainment and educational products.
General-purpose application programs are programs that perform common information processing jobs for end users. For example, word processing programs, spreadsheet programs, database management programs, and graphics programs are popular with microcomputer users for home, education, business, scientific, and many other purposes. Because they significantly increase the productivity of end users, they are sometimes known as productivity packages. Other examples include Web browsers, electronic mail, and groupware, which help support communication and collaboration among workgroups and teams.

Software Suites and Integrated Packages
Let's begin our discussion of popular general-purpose application software by looking at software suites.
That's because the most widely used productivity packages come bundled together as software suites such as Microsoft Office, Lotus SmartSuite, and Corel WordPerfect Office. Examining their components gives us an overview of the important software tools that you can use to increase your productivity, collaborate with your colleagues, and access intranets, extranets, and the Internet. Compare the component programs that make up the top three software suites. Notice that each suite integrates software packages for Web browsing, word processing, spreadsheets, presentation graphics, database management, personal information management, and more. These packages can be purchased as separate stand-alone products. However, a software suite costs a lot less than the total cost of buying its individual packages separately. Another advantage of software suites is that all programs use a similar graphical user interface (GUI) of icons, tool and status bars, menus, and so on, which gives them the same look and feel and makes them easier to learn and use. Software suites also share common tools, such as spell checkers and help wizards, to increase their efficiency. Another big advantage of suites is that their programs are designed to work together seamlessly and import each other's files easily, no matter which program you are using at the time. These capabilities make them more efficient and easier to use than a variety of individual package versions.

Programs                        Microsoft Office       Lotus SmartSuite     Corel WordPerfect Office**
Web Browser                     Internet Explorer      N/A                  Netscape Navigator
Word Processor                  Word                   WordPro              WordPerfect
Spreadsheet                     Excel                  1-2-3                Quattro Pro
Presentation                    PowerPoint             Freelance            Presentations
Database Manager                Access*                Approach             Paradox
Personal Information Manager    Outlook                Organizer            Corel Central
*Access not included in the standard edition. Microsoft Publisher, Bookshelf, etc., are available depending on the suite edition. **Corel Flow, TimeLine, Dashboard, etc., are available in all versions.
Of course, putting so many programs and features together in one super-size package does have some disadvantages. Industry critics argue that many software suite features are never used by most end users. The suites take up a lot of disk space, from over 100 megabytes to over 150 megabytes, depending on which version or functions you install. So such software is sometimes derisively called bloatware by its critics. These drawbacks are one reason for the continued use of integrated packages like Microsoft Works, Lotus Works, ClarisWorks, and so on. Integrated packages combine some of the functions of several programs (word processing, spreadsheets, presentation graphics, database management, and so on) into one software package. Because these packages leave out many features and functions that are in individual packages and software suites, they cannot do as much as those packages do. However, they use a lot less disk space (less than 10 megabytes) and cost less than a hundred dollars. So integrated packages have proven that they offer enough functions and features for many computer users, while providing some of the advantages of software suites in a smaller package.

Web Browsers and More
The most important software component for many computer users today is the once simple and limited, but now powerful and feature-rich, Web browser. A browser like Netscape Navigator or Microsoft Internet Explorer is the key software interface you use to point and click your way through the hyperlinked resources of the World Wide Web and the rest of the Internet, as well as corporate intranets and extranets. Once limited to surfing the Web, browsers are becoming the universal software platform from which end users launch information searches, E-mail, multimedia file transfers, discussion groups, and many other Internet, intranet, and extranet applications. Industry experts are predicting that the Web browser will be the model for how most people will use networked computers into the next century. So now, whether you want to watch a video, make a phone call, download some software, hold a video conference, check your E-mail, or work on a spreadsheet of your team's business plan, you can use your browser to launch and host such applications. That's why browsers are being called the universal client, that is, the software component installed on the workstations of all the clients (users) in client/server networks throughout an enterprise. The Web browser has also become only one component of a new suite of communications and collaboration software that Netscape and other vendors are assembling in a variety of configurations.

Electronic Mail
The first thing many people do at work all over the world is check their E-mail. Electronic mail has changed the way people work and communicate. Millions of end users now depend on E-mail software to communicate with each other by sending and receiving electronic messages via the Internet or their organization's intranets or extranets. E-mail is stored on network servers until you are ready; whenever you want to, you can read your E-mail by displaying it on your workstation. So, with only a few minutes of effort (and a few microseconds or minutes of transmission time), a message to one or many individuals can be composed, sent, and received. As we mentioned earlier, E-mail software is now a component of top software suites and some Web browsers. E-mail packages like Eudora and Pine are typically provided to Internet users by Internet service providers and educational institutions. Full-featured E-mail software like Microsoft Exchange E-mail or Netscape Messenger can route messages to multiple end users based on predefined mailing lists and provide password security, automatic message forwarding, and remote user access. They also allow you to store messages in folders, with provisions for adding attachments to messages. E-mail packages may also enable you to edit and send graphics and multimedia as well as text, and provide bulletin board and computer conferencing capabilities. Finally, your E-mail software may automatically filter and sort incoming messages (even news items from online services) and route them to appropriate user mailboxes and folders.
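That kind of filtering comes down to matching message attributes against a rule list and filing each message in the folder named by the first rule that matches. The sketch below invents a few rules and messages to show the idea; it is not the rule syntax or API of any particular E-mail product.

```python
# A minimal rule-based mail filter: each rule is (field, text to match, folder).
# Rules, folders, and messages below are invented for illustration.

rules = [
    ("sender",  "newsletter@", "News"),
    ("subject", "invoice",     "Accounting"),
    ("subject", "meeting",     "Calendar"),
]

def route(message, rules, default_folder="Inbox"):
    """Return the folder for a message: the first matching rule wins."""
    for field, pattern, folder in rules:
        if pattern.lower() in message.get(field, "").lower():
            return folder
    return default_folder

messages = [
    {"sender": "newsletter@example.com", "subject": "This week's headlines"},
    {"sender": "pat@example.com",        "subject": "Invoice #1041 attached"},
    {"sender": "lee@example.com",        "subject": "Lunch on Friday?"},
]

for m in messages:
    print(route(m, rules), "<-", m["subject"])
```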

Word Processing and Desktop Publishing
Software for word processing has transformed the process of writing. Word processing packages computerize the creation, editing, revision, and printing of documents (such as letters, memos, and reports) by electronically processing your text data (words, phrases, sentences, and paragraphs). Top word processing packages like Microsoft Word, Lotus WordPro, and Corel WordPerfect can produce a wide variety of attractively printed documents with their desktop publishing capabilities. These packages can also convert documents to HTML format for publication as Web pages on corporate intranets or the World Wide Web.
Word processing packages also provide advanced features. For example, a spelling checker capability can identify and correct spelling errors, and a thesaurus feature helps you find a better choice of words to express ideas. You can also identify and correct grammar and punctuation errors, as well as get suggestions for possible improvements in your writing style, with grammar and style checker functions. Another text productivity tool is an idea processor or outliner function. It helps you organize and outline your thoughts before you prepare a document or develop a presentation. Besides converting documents to HTML format, you can also use the top packages to design and create Web pages from scratch for an Internet or intranet Web site.
End users and organizations can use desktop publishing (DTP) software to produce their own printed materials that look professionally published. That is, they can design and print their own newsletters, brochures, manuals, and books with several type styles, graphics, photos, and colors on each page. Word processing packages and desktop publishing packages like Adobe PageMaker and QuarkXPress are used to do desktop publishing. Typically, text material and graphics can be generated by word processing and graphics packages and imported as text and graphics files. Optical scanners may be used to input text and graphics from printed material. You can also use files of clip art, which are predrawn graphic illustrations provided by the software package or available from other sources. The heart of desktop publishing is a page design process called page makeup or page composition. Your video screen becomes an electronic pasteup board with rulers, column guides, and other page design aids. Text material and illustrations are then merged into the page format you design.
The software will automatically move excess text to another column or page and help size and place illustrations and headings. Most DTP packages provide WYSIWYG (What You See Is What You Get) displays so you can see exactly what the finished document will look like before it is printed.

Electronic Spreadsheets
Electronic spreadsheet packages like Lotus 1-2-3, Microsoft Excel, and Corel Quattro Pro are used for business analysis, planning, and modeling. They help you develop an electronic spreadsheet, which is a worksheet of rows and columns that can be stored on your PC or a network server, or converted to HTML format and stored as a Web page or web sheet on the World Wide Web. Developing a spreadsheet involves designing its format and developing the relationships (formulas) that will be used in the worksheet. In response to your input, the computer performs necessary calculations based on the relationships (formulas) you defined in the spreadsheet, and displays results immediately, whether at your workstation or Web site. Most packages also help you develop graphic displays of spreadsheet results. For example, you could develop a spreadsheet to record and analyze past and present advertising performance for a business. You could also develop hyperlinks to a similar web sheet at your marketing team's intranet Web site. Now you have a decision support tool to help you answer what-if questions you may have about advertising. For example, what would happen to market share if advertising expense increased by 10 percent? To answer this question, you would simply change the advertising expense formula on the advertising performance worksheet you developed. The computer would recalculate the affected figures, producing new market share figures and graphics. You would then have better insight into the effect of advertising decisions on market share. Then you could share this insight with a note on the web sheet at your team's intranet Web site.
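The same what-if recalculation can be mimicked in a few lines of code. The figures and the market-share relationship below are invented purely to show how changing one input ripples through a dependent formula.

```python
# A miniature "spreadsheet" what-if: one input cell feeds a formula cell,
# and changing the input recomputes the result. All numbers are hypothetical.

def market_share(advertising, base_share=0.20, lift_per_dollar=0.000002):
    """Invented relationship: market share grows with advertising spend."""
    return base_share + lift_per_dollar * advertising

advertising = 50_000
print(f"current spend ${advertising:,}: share = {market_share(advertising):.1%}")

# What-if: advertising expense increases by 10 percent.
advertising_new = advertising * 1.10
print(f"what-if spend ${advertising_new:,.0f}: share = {market_share(advertising_new):.1%}")
```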

Database Management
Microcomputer versions of database management programs have become so popular that they are now viewed as general-purpose application software packages, like word processing and spreadsheet packages. Database management packages such as Microsoft Access, Lotus Approach, or Corel Paradox allow you to set up and manage databases on your PC, network server, or the World Wide Web. Most database managers can perform four primary tasks.
Database development: Define and organize the content, relationships, and structure of the data needed to build a database, including any hyperlinks to data on Web pages.
Database interrogation: Access the data in a database to display information in a variety of formats. End users can selectively retrieve and display information and produce forms, reports, and other documents, including Web pages.
Database maintenance: Add, delete, update, and correct the data in a database, including hyperlinked data on Web pages.
Application development: Develop prototypes of Web pages, queries, forms, reports, and labels for a proposed business application, or use a built-in 4GL or application generator to program the application.
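The first three of those tasks map directly onto ordinary SQL statements. The sketch below uses Python's built-in sqlite3 module and an invented contacts table to show development, maintenance, and interrogation in miniature; it is not how any of the packages named above store their data.

```python
import sqlite3

# Database development: define the structure of a small (invented) contacts table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (name TEXT, city TEXT, sales REAL)")

# Database maintenance: add and update records.
db.executemany("INSERT INTO contacts VALUES (?, ?, ?)",
               [("Ada", "Boston", 1200.0), ("Lin", "Austin", 800.0)])
db.execute("UPDATE contacts SET sales = 950.0 WHERE name = 'Lin'")

# Database interrogation: retrieve and display selected information.
for name, city, sales in db.execute(
        "SELECT name, city, sales FROM contacts ORDER BY sales DESC"):
    print(f"{name:<5} {city:<8} {sales:8.2f}")

db.close()
```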

Presentation Graphics and Multimedia
Presentation graphics packages help you convert numeric data into graphics displays such as line charts, bar graphs, pie charts, and many other types of graphics. Most of the top packages also help you prepare multimedia presentations of graphics, photos, animation, and video clips, including publishing to the World Wide Web. Not only are graphics and multimedia displays easier to comprehend and communicate than numeric data, but multiple-color and multiple-media displays can also more easily emphasize key points, strategic differences, and important trends in the data. Presentation graphics has proved to be much more effective than tabular presentations of numeric data for reporting and communicating in advertising media, management reports, or other business presentations. Presentation graphics software packages like Microsoft PowerPoint, Lotus Freelance, or Corel Presentations give you many easy-to-use capabilities that encourage the use of graphics presentations. For example, most packages help you design and manage computer-generated and orchestrated slide shows containing many integrated graphics and multimedia displays. Or you can select from a variety of predesigned templates of business presentations, prepare and edit the outline and notes for a presentation, and manage the use of multimedia files of graphics, photos, sounds, and video clips. And of course, the top packages help you tailor your graphics and multimedia presentation for transfer in HTML format to Web sites on corporate intranets or the World Wide Web.

Multimedia Technologies
Hypertext and hypermedia are foundation technologies for multimedia presentations. By definition, hypertext contains only text and a limited amount of graphics. Hypermedia consists of electronic documents that contain multiple forms of media, including text, graphics, video, and so on. Key terms and topics in hypertext or hypermedia documents are indexed by software links so that they can be quickly searched by the reader. For example, if you click your mouse button on an underlined term in a hypermedia document displayed on your computer video screen, the computer instantly brings up a display of a passage of text and graphics related to that term. Once you finish viewing that pop-up display, you can return to what you were reading originally, or jump to another part of the document. Hypertext and hypermedia are developed using specialized programming languages like Java and the Hypertext Markup Language (HTML), which create hyperlinks to other parts of the document, or to other documents and media. Hypertext and hypermedia documents can thus be programmed to let a reader navigate through a multimedia database by following a chain of hyperlinks through various documents. The Web sites on the World Wide Web of the Internet are a popular example of this technology. Thus, the use of hypertext and hypermedia provides an environment for online interactive presentations of multimedia.
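At its core, hyperlink navigation is nothing more than a lookup from link names to target documents. The tiny document store below is invented to show how following a chain of links works; it is not real HTML or any browser's internal structure.

```python
# A toy hypermedia store: each "document" has some text plus the terms it links to.
# Documents and links are invented for illustration.

documents = {
    "home":     ("Welcome page. See: products, support.", ["products", "support"]),
    "products": ("Catalog of products. See: support.",    ["support"]),
    "support":  ("Help desk information. See: home.",     ["home"]),
}

def follow(path):
    """Navigate through the store by following a chain of hyperlinks."""
    for name in path:
        text, links = documents[name]
        print(f"[{name}] {text}  (links: {', '.join(links)})")

follow(["home", "products", "support"])   # the reader clicks from page to page
```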

Multimedia technologies allow end users to digitally capture, edit, and combine video with text, pictures, and sound into multimedia business and educational presentations. For example, an interactive video session for training airline flight attendants can be produced on CD-ROM disks. It can combine animated graphics displays of different airplane configurations, presentation graphics of airline statistics, lists of major topics and facts, video clips of flight attendants working on various airplanes, and various announcements and sounds helpful in managing emergencies.

Personal Information Managers
The personal information manager (PIM) is a popular software package for end user productivity and collaboration. PIMs such as Lotus Organizer, Sidekick by Starfish Software, and Microsoft Outlook help end users store, organize, and retrieve information about customers, clients, and prospects, or schedule and manage appointments, meetings, and tasks. The PIM package will organize the data you enter and retrieve information in a variety of forms, depending on the style and structure of the PIM and the information you want. For example, information can be retrieved as an electronic calendar or list of appointments, meetings, or other things to do; the timetable for a project; or a display of key facts and financial data about customers, clients, or sales prospects. Personal information managers are sold as independent programs or are included in software suites, and vary widely in their style, structure, and features. For example, Lotus Organizer uses a notebook-with-tabs format, while Microsoft Outlook organizes data about people as a continuous A-to-Z list. Most PIMs emphasize the maintenance of contact lists, that is, customers, clients, or prospects. Scheduling appointments and meetings and task management are other top PIM applications. PIMs are now changing to include the ability to access the World Wide Web, as Sidekick does, or to provide E-mail capability, as in Microsoft Outlook. Also, some PIMs use Internet and E-mail features to support team collaboration by sharing information such as contact lists, task lists, and schedules with other networked PIM users.

Groupware
Groupware is collaboration software, that is, software that helps workgroups and teams work together to accomplish group assignments. Groupware is a fast-growing category of general-purpose application software that combines a variety of software features and functions to facilitate collaboration. For example, groupware products like Lotus Notes, Novell GroupWise, Microsoft Exchange, and Netscape Communicator and Collabra support collaboration through electronic mail, discussion groups and databases, scheduling, task management, data, audio, and videoconferencing, and so on. Groupware products are changing in several ways to meet the demand for better tools for collaboration. Groupware is now designed to use the Internet and corporate intranets and extranets to make collaboration possible on a global scale by virtual teams located anywhere in the world. For example, team members might use the Internet for global E-mail, project discussion forums, and joint Web page development. Or they might use corporate intranets to publish project news and progress reports, and work jointly on documents stored on Web servers.
Collaborative capabilities are also being added to other software to give it groupware features. For example, in the Microsoft Office software suite, Microsoft Word keeps track of who made revisions to each document, Excel tracks all changes made to a spreadsheet, and Outlook lets you keep track of tasks you delegate to other team members.

SYSTEM SOFTWARE: COMPUTER SYSTEM MANAGEMENT

System Software Overview
System software consists of programs that manage and support a computer system and its information processing activities. These programs serve as a vital software interface between computer system hardware and the application programs of end users.
System management programs: Programs that manage the hardware, software, network, and data resources of the computer system during its execution of the various information processing jobs of users.
Examples of important system management programs are operating systems, network management programs, database management systems, and system utilities.
System development programs: Programs that help users develop information system programs and procedures and prepare user programs for computer processing. Major development programs are programming language translators and editors, other programming tools, and CASE (computer-aided software engineering) packages.

Operating System
The most important system software package for any computer is its operating system. An operating system is an integrated system of programs that manages the operations of the CPU, controls the input/output and storage resources and activities of the computer system, and provides various support services as the computer executes the application programs of users. The primary purpose of an operating system is to maximize the productivity of a computer system by operating it in the most efficient manner. An operating system minimizes the amount of human intervention required during processing. It helps your application programs perform common operations such as accessing a network, entering data, saving and retrieving files, and printing or displaying output. If you have any hands-on experience on a computer, you know that the operating system must be loaded and activated before you can accomplish other tasks. This emphasizes the fact that operating systems are the most indispensable component of the software interface between users and the hardware of their computer systems.
Operating System Functions
An operating system performs five basic functions in the operation of a computer system: providing a user interface, resource management, task management, file management, and utilities and support services.
The User Interface: The user interface is the part of the operating system that allows you to communicate with it so you can load programs, access files, and accomplish other tasks. Three main types of user interfaces are the command driven, menu driven, and graphical user interfaces. The trend in user interfaces for operating systems and other software is moving away from the entry of brief end user commands, or even the selection of choices from menus of options. Instead, the trend is toward an easy-to-use graphical user interface (GUI) that uses icons, bars, buttons, boxes, and other images. GUIs rely on pointing devices like the electronic mouse or trackball to make selections that help you get things done.
Resource Management: An operating system uses a variety of resource management programs to manage the hardware and networking resources of a computer system, including its CPU, memory, secondary storage devices, telecommunications processors, and input/output peripherals. For example, memory management programs keep track of where data and programs are stored. They may also subdivide memory into a number of sections and swap parts of programs and data between memory and magnetic disks or other secondary storage devices. This can provide a computer system with a virtual memory capability that is significantly larger than the real memory capacity of its primary storage unit. So a computer with a virtual memory capability can process larger programs and greater amounts of data than the capacity of its memory circuits would normally allow.
File Management: An operating system contains file management programs that control the creation, deletion, and access of files of data and programs. File management also involves keeping track of the physical location of files on magnetic disks and other secondary storage devices. So operating systems maintain directories of information about the location and characteristics of files stored on a computer system's secondary storage devices.
Task Management: The task management programs of an operating system manage the accomplishment of the computing tasks of end users.
They give each task a slice of a CPU's time and interrupt the CPU operations to substitute other tasks. Task management may involve a multitasking capability where several computing tasks can occur at the same time. Multitasking may take the form of multiprogramming, where the CPU can process the tasks of several programs at the same time, or time sharing, where the computing tasks of several users can be processed at the same time. The efficiency of multitasking operations
depends on the processing power of a CPU and the virtual memory and multitasking capabilities of the operating system it uses. New microcomputer operating systems and most midrange and mainframe operating systems provide a multitasking capability. With multitasking, end users can do two or more operations (e.g., keyboarding and printing) or applications (e.g., word processing and financial analysis) concurrently, that is, at the same time. Multitasking on microcomputers has also been made possible by the development of more powerful microprocessors (like the Intel Pentium-II) and their ability to directly address much larger memory capacities (up to 4 gigabytes). This allows an operating system to subdivide primary storage into several large partitions, each of which can be used by a different application program.
Popular Operating Systems: MS-DOS (Microsoft Disk Operating System), along with the Windows operating environment, has been the most widely used microcomputer operating system. It is a single-user, single-tasking operating system, but was given a graphical user interface and limited multitasking capabilities by combining it with Microsoft Windows. Microsoft began replacing its DOS/Windows combination in 1995 with the Windows 95 operating system. Windows 95 is an advanced operating system featuring a graphical user interface, true multitasking, networking, multimedia, and many other capabilities. Microsoft plans to ship a Windows 98 version during 1998. Microsoft introduced another operating system, Windows NT (New Technology), in 1995. Windows NT is a powerful, multitasking, multi-user operating system that is being installed on network servers to manage local area networks and on desktop PCs with high-performance computing requirements. New Server and Workstation versions were introduced in 1997. Some industry experts are predicting that Windows NT Workstation will supplant Windows 95 and 98 in a few years. OS/2 (Operating System/2) is a microcomputer operating system from IBM. Its latest version, OS/2 Warp 4, was introduced in 1996 and provides a graphical user interface, voice recognition, multitasking, virtual memory, telecommunications, and many other capabilities. A version for network servers, OS/2 Warp Server, is also available. Originally developed by AT&T, UNIX is now also offered by other vendors, including Solaris by Sun Microsystems and AIX by IBM. UNIX is a multitasking, multiuser, network-managing operating system whose portability allows it to run on mainframes, midrange computers, and microcomputers. UNIX is a popular choice for network servers in many client/server computing networks. The Macintosh System is an operating system from Apple for Macintosh microcomputers. Now in version 8.0, the system has a popular graphical user interface as well as multitasking and virtual memory capabilities.
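To make the idea of time slicing behind task management concrete, here is a minimal, hypothetical sketch in Python of a round-robin scheduler; the task names, time units, and quantum are illustrative assumptions, not details of any operating system named above.

# A minimal round-robin scheduling sketch (illustrative only).
# Each task needs some total amount of CPU time; the scheduler gives
# every task a fixed slice (quantum) in turn until all tasks finish.
from collections import deque

def round_robin(tasks, quantum=2):
    queue = deque(tasks.items())          # (task name, remaining time units)
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)    # the task's slice of CPU time
        remaining -= used
        print(f"{name}: ran {used} unit(s), {remaining} remaining")
        if remaining > 0:
            queue.append((name, remaining))   # interrupted and requeued

round_robin({"word processing": 3, "printing": 5, "spreadsheet recalculation": 2})

Running the sketch interleaves the three tasks in fixed slices, which is the effect multitasking end users perceive as two or more applications running at the same time.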

UNIT 1: INTRODUCTION TO MIS


What is a system?
An array of components that work together to achieve a common goal by accepting input, processing it, and producing output in an organized manner is called a System.
Data Vs Information: The word data is derived from the Latin word datum, literally a given or fact, which might take the form of a number, statement or picture. Data is the raw material in the production of information.
Data: It is a stream of raw facts representing events occurring in an organization.
Information: Information is data that have been put into a meaningful and useful context. The terms data and information are often used interchangeably. However, the relation of data to information is that of raw material to finished goods. A data processing system processes data to generate information. Information is the substance on which business decisions are based. Organizations spend most of their time generating, processing, creating, using and distributing information. Management converts the information into action through the process of decision making.
What is an IS? A set of interrelated components that collect, process, store and disseminate information to support decision making and control in an organization. It also helps managers and workers analyze problems, visualize complex subjects and create new products.
Purpose of information system: In business, people and organizations seek and use information specifically to make sound decisions and to solve problems, two closely related practices that form the foundation of every company.
Decision Making: The main function of management is decision making. According to the decision-oriented view, management mainly consists of the following: 1) Planning 2) Organizing 3) Coordinating 4) Directing 5) Control

In the context of organization and management, we may define decision making as a managerial process and function of choosing a particular course of action out of several alternative courses for the purpose of achieving the given goals. Decision making is done at three levels: 1. Strategic Decisions: Top level 2. Tactical Decisions: Middle level 3. Operational Decisions: Low level. In the context of business enterprises, the situations which call for managerial decision making are numerous; some of the decision making situations are: 1. Formulation of organizational goals (future direction of growth, diversification, profitability) 2. The ways and means of achieving them (strategies and policies)

IMPORTANCE OF INFORMATION FOR MANAGEMENT DECISIONS / DECISION MAKING
Traditional economists of the pre-industrial era considered only land, labour and capital as economic resources. With the emergence of the industrial revolution and the availability of powered engines, the relative importance of labour as a resource changed as men came to work alongside machines. With increased mechanization, mass manufacturing placed the focus on technology, which emerged as the major resource in the latter half of this century. With the recent information revolution, the prized resource actually is information. This can be better appreciated when one considers the success or failure of products that appeared in the market at the right time or the wrong time. The vendor who had the right market information and brought the product to market at the right time wins substantially more market than the competitor. Information being a vital corporate resource, it needs to be managed just as any other organizational resource like money, men, materials or markets. Information systems are resource management agents. Information is necessary to managers for performing the functions of planning and controlling. The task of establishing information requirements for the aforesaid functions is usually performed by system analysts and system designers. They establish information requirements by interviewing executives and analyzing the functions which each executive in the organization is required to perform. In general, the planning information requirements of executives can be categorized into three broad categories: 1) Environmental 2) Competitive and 3) Internal
1) Environmental information: It comprises the following. a. Government Policies: Information about concessions/benefits, government policies in respect of tax concessions or any other aspects which may be useful to the organization in the future period. b. Factors of production: Information related to the source, cost, location, availability, accessibility and productivity of the major factors of production: i) labour ii) material and parts iii) capital. c. Technological environment: Forecast of any technological changes in the industry and the probable effect of it on the firm. d. Economic trends: It includes information relating to economic indicators like consumer disposable income, employment, productivity, capital investment, etc.
2) Competitive information: It includes the following information. a. Industry demand: Demand forecast of the industry in respect of the product manufactured and in the area in which the firm would be operating. b. Firm demand: Assessment of the firm's product demand in the specified market. It also includes an assessment of the firm's capability to meet that demand. c. The competitive data: Data of competing firms for forecasting demand and making decisions and plans to achieve the forecast.
3) Internal Information: Usually includes information concerning the organization's: a. Sales forecast b. Financial plan/budget c. Supply factors and d. Policies, which are vital for subsidiary planning at all levels in the organization.
IMPORTANCE OF INFORMATION AT DIFFERENT LEVELS OF MANAGEMENT
1. Top level (strategic level): Top management is defined as a set of management positions which are concerned with the overall task of designing, directing and managing the organization in an integrated manner. The structure of top level management consists of the chairman and members of the board of directors, the chief executive officer and the vice presidents.
Information requirement at top level for making decisions:
External: competitive activities; customer preferences, style changes; economic trends; technological changes, legal rules.
Internal: historical sales, costs; profit, cash flow, divisional income; sales, expenses, financial ratios, interest; credit outstanding, long-term debt; project progress reports, updates.

Top management needs information on the trends in the external environment and on the functioning of internal organizational sub-systems. Apart from historical information, top management also requires ongoing or current information, which is generated through forecasts of the future. Top management requires information for determining the overall goals and objectives of the business. It deals mainly with long term plans, policy matters and broad objectives of the company.
2. Middle level (tactical level): Middle management is defined as a group of management positions which tend to overlap the top and supervisory management levels in the hierarchy. Middle management consists of heads of functional departments and chiefs of technical staff. Middle level management is responsible for the elaboration and classification of organizational goals, strategies and policies in terms of action and norms of performance. Information required by middle management is less diverse and complex. Middle management is fed with information both from top management and supervisory management. Much of the information used by middle management is internal in nature. Middle management decisions are not strategic or long range, so it does not require much futuristic information. For example, the information needs of a sales manager are: corporate sales goals and targets, and strategies and policies for operationalising them. He also needs information on sales potential and trends in different market segments, geographical territories, competitive conditions and so on. Further, he needs information on weekly sales turnover from different zones and for different products, customer complaints, delays in dispatches, finished goods position, etc.
Information requirement at middle level for making decisions:

External: price changes, product shortages; demand or supply, credit conditions.
Internal: descriptive information (happenings); current performance indicators; over/under budgets; historical profits, sales and income.

3. Low level (operational level): Supervisory management is defined as a team of management positions at the base of the hierarchy. It consists of section officers, office managers and superintendents, foremen and supervisors who are directly responsible for instructing and supervising the efforts of clerical and blue collar employees and workers. Supervisory management is also called operations management in the sense that it is concerned with implementing operational plans, policies and procedures for purposes of conversion of inputs into outputs. At this level,
managers are responsible for routine, day-to-day decisions and activities of the organization which do not require much thinking.
Information requirement at low level for making decisions:

External: sensitive changes affecting material supplies and sales.
Internal: unit sales and expenses; current performance; shortages and bottlenecks; operating efficiencies and inefficiencies; input-output ratios.

CHARACTERISTICS OF INFORMATION
The important characteristics of useful and effective information are as follows:
1. Timeliness: It is true to say that information, to be of any use, has to be timely. Therefore, it has to be provided on a continuous basis, as and when desired.
2. Purpose: Information must have a purpose at the time it is transmitted to a person or machine; otherwise it is simply data. The basic purpose of information is to inform, evaluate, persuade, and organize. It helps in creating new concepts, identifying problems, solving problems, decision making, planning, initiating and controlling.
3. Mode and format: The modes of communicating information to humans are sensory, but in business they are either visual, verbal or in written form. The format should follow accepted rules for compiling statistical tables and presenting information by means of diagrams, graphs, curves, etc. Reports should be prepared so as to save managers' time by not overloading them with information.
4. Redundancy: It means the excess of information carried per unit of data. However, in some business situations redundancy may sometimes be necessary to safeguard against errors in the communication process. But redundancy should be avoided in order to make the information precise and appropriate.
5. Rate: The rate of transmission or reception of information may be represented by the time required to understand a particular situation.
6. Frequency: The frequency with which information is transmitted or received affects its value. Weekly financial reports may show little change, so they have small value, whereas monthly reports may indicate changes big enough to show problems or trends, so they have great value.
7. Completeness: The information should be as complete as possible. With complete information the manager will be in a better position to take the right decision at the right time. If the information is incomplete then it may lead to confusion or wrong decisions.
8. Reliability: In statistical surveys, the information that is arrived at should have an indication of the confidence level.
SYSTEM
A system can be simply defined as a group of interrelated or interacting elements forming a unified whole. Many examples of systems can be found in the physical and biological sciences, in modern technology, and in human society. Thus, we can talk of the physical system of the sun and its planets, the biological system of the human body, the technological system of an oil refinery, and the socioeconomic system of a business organization.
A system is a group of interrelated components working together toward a common goal by accepting inputs and producing outputs in an organized transformation process. Such a system (sometimes called a dynamic system) has three basic interacting components or functions: Input involves capturing and assembling elements that enter the system to be processed. For example, raw materials, energy, data, and human efforts must be secured and organized for processing. Processing involves transformation processes that convert input into output. Examples are a manufacturing process, the human breathing process, or mathematical calculations. Output involves transferring elements that have been produced by a transformation process to their ultimate destination. For example, finished products, human services, and management information must be transmitted to their human users. A system with feedback and control components is sometimes called a cybernetic system, that is, a self-monitoring, self-regulating system. Feedback is data about the performance of a system. For example, data about sales performance is feedback to a sales manager. Control involves monitoring and evaluating feedback to determine whether a system is moving toward the achievement of its goal. The control function then makes necessary adjustments to a system's input and processing components to ensure that it produces proper output. For example, a sales manager exercises control when he or she reassigns salespersons to new sales territories after evaluating feedback about their sales performance. Example: A manufacturing system accepts raw materials as input and produces finished goods as output. An information system also is a system that accepts resources (data) as input and processes them into products (information) as output.
COMPONENTS OF INFORMATION SYSTEM
An information system is a system that accepts data resources as input and processes them into information products as output. An information system depends on the resources of people (end users and IS specialists), hardware (machines and media), software (programs and procedures), data (data and knowledge bases), and networks (communications media and network support) to perform input, processing, output, storage, and control activities that convert data resources into information products. This information system model highlights the relationships among the components and activities of information systems. It provides a framework that emphasizes four major concepts that can be applied to all types of information systems.
1. People Resources: People are required for the operation of all information systems. These people resources include end users and IS specialists. End users (also called users or clients) are people who use an information system or the information it produces. They can be accountants, salespersons, engineers, clerks, customers, or managers. Most of us are information system end users. IS specialists are people who develop and operate information systems. They include systems analysts, programmers, computer operators, and other managerial, technical, and clerical IS personnel. Briefly, systems analysts design information systems based on the information requirements of end users, programmers prepare computer programs based on the specifications of systems analysts, and computer operators operate large computer systems.
2. Hardware Resources: The concept of hardware resources includes all physical devices and materials used in information processing. Specifically, it includes not only machines, such as computers and other equipment, but also all data
media, that is, all tangible objects on which data is recorded, from sheets of paper to magnetic disks. Examples of hardware in computer-based information systems are: Computer systems, which consist of central processing units containing microprocessors, and a variety of interconnected peripheral devices. Examples are microcomputer systems, midrange computer systems, and large mainframe computer systems. Computer peripherals, which are devices such as a keyboard or electronic mouse for input of data and commands, a video screen or printer for output of information, and magnetic or optical disks for storage of data resources.
3. Software Resources: The concept of software resources includes all sets of information processing instructions. This generic concept of software includes not only the sets of operating instructions called programs, which direct and control computer hardware, but also the sets of information processing instructions needed by people, called procedures. It is important to understand that even information systems that don't use computers have a software resource component. This is true even for the information systems of ancient times, or the manual and machine-supported information systems still used in the world today. They all require software resources in the form of information processing instructions and procedures in order to properly capture, process, and disseminate information to their users. The following are examples of software resources: System software, such as an operating system program, which controls and supports the operations of a computer system. Application software, which are programs that direct processing for a particular use of computers by end users. Examples are a sales analysis program, a payroll program, and a word processing program. Procedures, which are operating instructions for the people who will use an information system. Examples are instructions for filling out a paper form or using a software package.
4. Data Resources: Data is more than the raw material of information systems. The concept of data resources has been broadened by managers and information systems professionals. They realize that data constitutes a valuable organizational resource. Thus, you should view data as data resources that must be managed effectively to benefit all end users in an organization. Data resources include: Databases that hold processed and organized data. Knowledge bases that hold knowledge in a variety of forms such as facts, rules, and case examples about successful business practices.
5. Network Resources: Telecommunications networks like the Internet, intranets, and extranets have become essential to the successful operations of all types of organizations and their computer-based information systems. Telecommunications networks consist of computers, communications processors, and other devices interconnected by communications media and controlled by communications software. The concept of network resources emphasizes that communications networks are a fundamental resource component of all information systems. Network resources include: Communication media. Examples include twisted pair wire, coaxial cable, fiber-optic cable, microwave systems, and communication satellite systems. Network support. This generic category includes all of the people, hardware, software, and data resources that directly support the operation and use of a communications network. Examples include communications control software such as network operating systems and Internet packages.

CHARACTERISTICS OF MIS
Management Oriented: The system is designed from the top to work downwards. It does not mean that the system is designed to provide information directly to the top management. Other levels of management are also provided with relevant information. For example, in the marketing information system, the activities such as sales order processing, shipment of goods to customers and billing for the goods are basically operational control activities. A salesman can also track this information, to know the sales territory, size of order, geography and product line, provided the system has been designed accordingly. However, if the system is designed keeping in mind the top management, then data on external competition, market and pricing can be created to know the market share of the company's product and to serve as a basis for a new product or marketplace introduction.
Management Directed: Because of the management orientation of MIS, it is necessary that management should actively direct the system development efforts. In order to ensure the effectiveness of the system designed, management should continuously make reviews.
Integrated: The word "integration" means that the system has to cover all the functional areas of an organization so as to produce more meaningful management information, with a view to achieving the objectives of the organization. It has to consider various sub-systems, their objectives and information needs, and recognize the interdependence that these sub-systems have amongst themselves, so that common areas of information are identified and processed without repetition and overlapping.
Common Data Flows: Because of the integration concept of MIS, the common data flow concept avoids repetition and overlapping in data collection and storage, combining similar functions and simplifying operations wherever possible.
Heavy Planning Element: A management information system cannot be established overnight. It takes almost 2 to 4 years to establish it successfully in an organization. Hence, long-term planning is required for MIS development in order to fulfill the future needs and objectives of the organization. The designer of an information system should therefore ensure that it will not become obsolete before it actually gets into operation.
Flexibility and Ease of Use: While building an MIS, all types of possible changes which may occur in the future are provided for, to make it flexible. A feature that often goes with flexibility is ease of use. The MIS should be able to incorporate all those features that make it readily accessible to a wide range of users with easy usability.
INFORMATION SYSTEM ACTIVITIES
You should be able to recognize input, processing, output, storage and control activities taking place in any information system you are studying.
1. Input of Data Resources: Data about business transactions and other events must be captured and prepared for processing by the input activity. Input typically takes the form of data entry activities such as recording and editing. End users typically record data about transactions on some type of physical medium, such as a paper form, or enter it directly into a computer system. This usually includes a variety of editing activities to ensure that they have recorded data correctly. Once entered, data may be transferred onto a machine-readable medium such as a magnetic disk until needed for processing.
For example, data about sales transactions can be recorded on source documents such as paper sales order forms. (A source document is the original formal record of a transaction.) Alternately, salespersons can capture sales data using computer keyboards or optical scanning devices; they are visually prompted to enter data correctly by video displays. This provides them with a more convenient and efficient user interface, that is, methods of end user input and output with a computer system. Methods such as optical scanning and displays of menus, prompts, and fill-in-the-blanks formats make it easier for end users to enter data correctly into an information system.
2. Processing of Data into Information: Data is typically subjected to processing activities such as calculating, comparing, sorting, classifying, and summarizing. These activities organize, analyze and manipulate data, thus converting them into information for end users. The quality of any data stored in an information system must also be maintained by a continual process of correcting and updating activities. For example, data received about a purchase can be (1) added to a running total of sales results, (2) compared to a standard to determine eligibility for a sales discount, (3) sorted in numerical order based on product identification numbers, (4) classified into product categories (such as food and nonfood items), (5) summarized to provide a sales manager with information about various product categories, and finally, (6) used to update sales records.
3. Output of Information Products: Information in various forms is transmitted to end users and made available to them in the output activity. The goal of information systems is the production of appropriate information products for end users. Common information products include messages, reports, forms, and graphic images, which may be provided by video displays, audio responses, paper products, and multimedia. For example, a sales manager may view a video display to check on the performance of a salesperson, accept a computer-produced voice message by telephone, and receive a printout of monthly sales results.
4. Storage of Data Resources: Storage is a basic system component of information systems. Storage is the information system activity in which data and information are retained in an organized manner for later use. For example, just as written text material is organized into words, sentences, paragraphs, and documents, stored data is commonly organized into fields, records, files, and databases. This facilitates its later use in processing or its retrieval as output when needed by users of a system.
5. Control of System Performance: An important information system activity is the control of its performance. An information system should produce feedback about its input, processing, output, and storage activities. This feedback must be monitored and evaluated to determine whether the system is meeting established performance standards. Then appropriate system activities must be adjusted so that proper information products are produced for end users. For example, a manager may discover that subtotals of sales amounts in a sales report do not add up to total sales. This might mean that data entry or processing procedures need to be corrected. Then changes would have to be made to ensure that all sales transactions would be properly captured and processed by a sales information system.
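As a rough sketch of the processing and control activities described above, the following Python fragment summarizes a few invented sales transactions by product category and then runs a simple control check that the subtotals agree with the total carried on the report; all names and figures are hypothetical.

# Illustrative only: processing raw sales data into summary information,
# followed by a control check on the resulting report.
sales = [
    {"product": "bread", "category": "food",    "amount": 120.0},
    {"product": "soap",  "category": "nonfood", "amount": 80.0},
    {"product": "milk",  "category": "food",    "amount": 60.0},
]

# Processing: classify each transaction and summarize by category.
summary = {}
for record in sales:
    summary[record["category"]] = summary.get(record["category"], 0.0) + record["amount"]

reported_total = 260.0        # total shown on the sales report

# Control: compare feedback (the computed subtotals) with the reported total.
if abs(sum(summary.values()) - reported_total) > 0.01:
    print("Control check failed: data entry or processing needs correction")
else:
    print("Report balances:", summary)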
TRENDS IN MANAGEMENT INFORMATION SYSTEMS
Until the 1960s, the role of information systems was simple: transaction processing, record-keeping, accounting, and other electronic data processing (EDP) applications. Then another role was added, as the concept of management information systems (MIS) was conceived. This new role focused on providing managerial end users with predefined management reports that would give managers the information they needed for decision-making purposes.
By the 1970s, it was evident that the pre-specified information products produced by such management information systems were not adequately meeting many of the decision-making needs of management, so the concept of decision support systems (DSS) was born. The new role for information systems was to provide managerial end users with ad hoc and interactive support of their decision-making processes. In the 1980s, several new roles for information systems appeared. First, the rapid development of microcomputer processing power, application software packages, and telecommunications networks gave birth to the phenomenon of end user computing. Now, end users can use their own computing resources to support their job requirements instead of waiting for the indirect support of corporate information services departments. Second, it became evident that most top corporate executives did not directly use either the reports of information reporting systems or the analytical modeling capabilities of decision support systems, so the concept of executive information systems (EIS) was developed. These information systems attempt to give top executives an easy way to get the critical information they want, when they want it, tailored to the formats they prefer. Third, breakthroughs occurred in the development and application of artificial intelligence (AI) techniques to business information systems. Expert systems can serve as consultants to users by providing expert advice in limited subject areas. An important new role for information systems appeared in the 1980s and continues into the 1990s. This is the concept of a strategic role for information systems, sometimes called strategic information systems (SIS). In this concept, information technology becomes an integral component of business processes, products, and services that help a company gain a competitive advantage in the global marketplace. Finally, the rapid growth of the Internet, intranets, extranets, and other interconnected global networks in the 1990s is dramatically changing the capabilities of information systems in business as we move into the next century. Such enterprise and global internetworking is revolutionizing end user, enterprise, and interorganizational computing, communications, and collaboration that supports the business operations and management of successful global enterprises.
SYSTEMS APPROACH
The term "systems" is derived from the Greek word "synistanai," which means "to bring together or combine." The term has been used for centuries. Components of the organizational concept referred to as the "systems approach" have been used to manage armies and governments. The systems approach considers two basic components: elements and processes. Elements are measurable things that can be linked together. They are also called objects, events, patterns, or structures. Processes change elements from one form to another. They may also be called activities, relations, or functions. In a system the elements or processes are grouped in order to reduce the complexity of the system for conceptual or applied purposes. Modern management is based upon a systems approach to the organization. The systems approach views an organization as a set of interrelated sub-systems in which variables are mutually dependent. The organizing system has five basic parts, which are interdependent. They are: 1. The individual 2. The formal and informal organization 3. Patterns of behavior arising out of role demands of the organization 4. Role perception of the individuals 5.
Physical environment in which individuals work.
The systems approach in business was an idea born in the 1960s on the basis of a concept called synergism. Synergism means the output of the total organization can be enhanced if the component parts are integrated. The systems approach to management is designed to utilize scientific analysis in complex organizations for 1. Developing and managing operating systems (e.g., money flows, personnel systems) and 2. Designing information systems for decision making
The fundamental concept of the systems approach to organization and management is the interrelationship of the parts or subsystems of the organization. In organizational information system design we want to achieve synergism, in which the simultaneous action of interrelated parts produces a total effect greater than the sum of the individual parts.
NEED FOR SYSTEMS APPROACH
There are two major reasons to adopt the systems approach: 1. The increased complexity of business 2. The increased complexity of management
1. The Increased Complexity of Business:
a. The Technological Revolution: Transportation, communications, agriculture and manufacturing are among the many industries undergoing vast changes in products, techniques, output and productivity. Adoption of advanced mechanization and automation techniques leads to an information revolution. In order to cope with these changes, the manager will require large amounts of selective information for complex tasks and decision making.
b. Research and Development: Rapid technological change is taking place due to increasing expenditure on research and development. With day-to-day changes in technology, the life cycle of products is being shortened. Not only products, but supporting operations too are becoming more complex. The firm should be aware of the impact of research on its operations and should provide for better planning, management and information to accommodate the effects.
c. Product changes: Technological advances resulting from research and development, and from growing customer sophistication, have resulted in product changes. The modern organization is faced with the necessity to optimize return from a given product in a much shorter time. New companies with new products are being born overnight. These factors contribute to the call for better management and the systems approach. The manager must keep abreast of factors influencing his firm's products and operations. This requirement demonstrates once again the need for a properly designed management information system.
d. The information explosion: As a decision maker, the manager is essentially a processor of information. The ability to obtain, store, retrieve and display the right information for the right decision is vital. To remain ahead of competitors and to keep pace with the technological revolution and its impact on the firm's products or services, the manager must keep abreast of selected information and organize it for decision making.
2. Increased Complexity of Management:
a. Information feedback systems: Essentially, feedback systems are concerned with the way information is used for the purpose of control, and they apply not only to business or management systems but also to engineering, biological and many other types of systems.
b. Decision making: The notion of programming decisions by decision rule is now a basic consideration of management and information systems design. If decisions can be based upon a policy, a procedure, or a rule, they are likely to be made better and more economically through the systems approach.
c. Electronic computer: The electronic computer is the major development that made the systems approach to management possible. With the help of a computer, the vast amount of data handling connected with storage, processing and retrieval of information is possible. Managers view it as a central element in an information system. The vital element in an information system is the managerial talent that designs and operates the system. The
computer's capability to process and store information has outraced man's ability to design systems that adequately utilize this capability.
SYSTEMS APPROACH PROCESS
The systems approach to problem solving uses a systems orientation to define problems and opportunities and develop solutions. Studying a problem and formulating a solution involve the following interrelated activities: Recognize and define a problem or opportunity using systems thinking. Develop and evaluate alternative system solutions. Select the system solution that best meets your requirements. Design the selected system solution. Implement and evaluate the success of the designed system.
1. Defining Problems and Opportunities: Problems and opportunities are identified in the first step of the systems approach. A problem can be defined as a basic condition that is causing undesirable results. An opportunity is a basic condition that presents the potential for desirable results. Symptoms must be separated from problems. Symptoms are merely signals of an underlying cause or problem. Example: Symptom: Sales of a company's products are declining. Problem: Salespersons are losing orders because they cannot get current information on product prices and availability. Opportunity: We could increase sales significantly if salespersons could receive instant responses to requests for price quotations and product availability.
2. Systems Thinking: Systems thinking is to try to find systems, subsystems, and components of systems in any situation you are studying. This viewpoint ensures that important factors and their interrelationships are considered. This is also known as using a systems context, or having a systemic view of a situation. For example, the business organization or business process in which a problem or opportunity arises could be viewed as a system of input, processing, output, feedback, and control components. Then to understand a problem and solve it, you would determine if these basic system functions are being properly performed. Example: The sales function of a business can be viewed as a system. You could then ask: Is poor sales performance (output) caused by inadequate selling effort (input), out-of-date sales procedures (processing), incorrect sales information (feedback), or inadequate sales management (control)?
3. Developing Alternative Solutions: There are usually several different ways to solve any problem or pursue any opportunity. Jumping immediately from problem definition to a single solution is not a good idea. It limits your options and robs you of the chance to consider the advantages and disadvantages of several alternatives. You also lose the chance to combine the best points of several alternative solutions. Where do alternative solutions come from? Experience is a good source. The solutions that have worked, or at least been considered in the past, should be considered again. Another good source of solutions is the advice of others, including the recommendations of consultants and the suggestions of expert systems.
4. Evaluating Alternative Solutions: Once alternative solutions have been developed, they must be evaluated so that the best solution can be identified. The goal of evaluation is to determine how well each alternative solution meets your business and personal requirements. These requirements are key characteristics and capabilities that you feel are necessary for your personal or business success.
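One simple way to carry out such an evaluation is a weighted-scoring comparison, sketched below in Python; the alternatives, criteria, weights, and scores are invented purely for illustration.

# Hypothetical weighted scoring of alternative solutions against common criteria.
criteria_weights = {"cost": 0.4, "ease of use": 0.3, "time to implement": 0.3}

alternatives = {
    "upgrade existing system": {"cost": 8, "ease of use": 6, "time to implement": 9},
    "buy a packaged solution": {"cost": 6, "ease of use": 8, "time to implement": 7},
    "develop a system in-house": {"cost": 4, "ease of use": 9, "time to implement": 3},
}

# Score each alternative on the same criteria so they can be compared directly.
for name, scores in alternatives.items():
    total = sum(weight * scores[criterion] for criterion, weight in criteria_weights.items())
    print(f"{name}: weighted score {total:.1f}")

Because every alternative is scored against the same criteria, the results can be ranked directly, which is what makes the selection step that follows possible.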
5. Selecting the Best Solution: Once all alternative solutions have been evaluated, you can begin the process of selecting the best solution. Alternative solutions can be compared to each other because they have been evaluated using the same criteria.
6. Designing and Implementing a Solution: Once a solution has been selected, it must be designed and implemented. You may have to depend on other business end users and technical staff to help you develop design specifications and an implementation plan. Typically, design specifications might describe the detailed characteristics and capabilities of the people, hardware, software, and data resources and information system activities needed by a new system. An implementation plan specifies the resources, activities, and timing needed for proper implementation. For example, the following items might be included in the design specifications and implementation plan for a computer-based sales support system: Types and sources of computer hardware and software to be acquired for the sales reps. Operating procedures for the new sales support system. Training of sales reps and other personnel. Conversion procedures and timetable for final implementation.
7. Post Implementation Review: The final step of the systems approach recognizes that an implemented solution can fail to solve the problem for which it was developed. The real world has a way of confounding even well-designed solutions. Therefore, the results of implementing a solution should be monitored and evaluated. This is called a post-implementation review. The focus of this step is to determine if the implemented solution has indeed helped the firm and selected subsystems meet their system objectives. If not, the systems approach assumes you will cycle back to a previous step and make another attempt to find a workable solution.
INFORMATION SYSTEM ARCHITECTURE
A system architecture or systems architecture is the conceptual design that defines the structure and/or behavior of a system. An architecture description is a formal description of a system, organized in a way that supports reasoning about the structural properties of the system. It defines the system components or building blocks and provides a plan from which products can be procured, and systems developed, that will work together to implement the overall system. System architecture is a representation of a system in which there is a mapping of functionality onto hardware and software components, a mapping of the software architecture onto the hardware architecture, and human interaction with these components. It is a process because a sequence of steps is prescribed to produce or change the architecture, and/or a design from that architecture, of a system within a set of constraints. Systems architecture is primarily concerned with the internal interfaces among the system's components or subsystems, and the interface between the system and its external environment, especially the user. If documented, a description of the design and contents of a computer system may include information such as a detailed inventory of current hardware, software and networking capabilities; a description of long-range plans and priorities for future purchases; and a plan for upgrading and/or replacing outdated equipment and software. Systems architecture makes use of elements of both software and hardware and is used to enable design of such a composite system.
A good architecture may be viewed as a 'partitioning scheme,' or algorithm, which partitions all of the system's present and foreseeable requirements into a workable set of cleanly bounded subsystems with nothing left over. That is, it is a partitioning scheme which is exclusive, inclusive, and exhaustive. A major purpose of the partitioning is to arrange the elements in the subsystems so that there is a minimum of communication needed among them. In both software and hardware, a good subsystem tends to
be seen to be a meaningful "object". Moreover, a good architecture provides for an easy mapping to the user's requirements and the validation tests of the user's requirements. Ideally, a mapping also exists from every least element to every requirement and test. A robust architecture is said to be one that exhibits an optimal degree of fault-tolerance, backward compatibility, forward compatibility, extensibility, reliability, maintainability, availability, serviceability, usability, and such other quality attributes as necessary and/or desirable.
[Figure: Organization and Management System Architecture. The diagram shows the management functions of planning, organizing, directing, staffing, and controlling, supported by behavioral science techniques, quantitative techniques, decision techniques, and experience rules, and operating on the resource flows of manpower, money, materials, and machines.]

Importance of System Architecture:
1. It is a representation of a system: system architecture allows users to represent descriptions, concepts, roles, individuals and rules.
2. It is an arrangement of components.
3. It is a formal description of a system.
4. It is a body of knowledge.
Purpose of the System Architecture Process: Every business of more than a few people enables the efficient concurrent work of those people by dividing the tasks into smaller, more specialized jobs, the divide-and-rule principle in action. This decomposition of responsibilities requires an opposing force integrating the activities into a useful overall business result. Several integrating processes are active in parallel, such as project management, commercial management, etc.
The System Architecture Process is responsible for: The Integral Technical aspects of the Product Creation Process, from requirement to deployment. The Integral Technical Vision and Synergy in the Policy and Planning Process.

[Figure: Management Information System Architecture]
QUANTITATIVE TECHNIQUES AND MIS INTERFACING
A variety of techniques have been developed over the years to explore for and extract information from large data sets. At the end of the 1980s a new discipline, named data mining, emerged. Traditional data analysis techniques often fail to process large amounts of data efficiently. Data mining is the process of discovering valid, previously unknown, and ultimately comprehensible information from large stores of data.
Data Mining: Data mining is primarily used as a part of information systems today by companies with a strong consumer focus: retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among "internal" factors such as price, product positioning, or staff skills, and "external" factors such as economic indicators, competition, and customer demographics. And, it enables them to determine the impact on sales, customer satisfaction, and corporate profits. Finally, it enables them to "drill down" into summary information to view detailed transactional data. Data mining is the process of extracting knowledge hidden in large volumes of raw data. Data mining automates the process of finding relationships and patterns in raw data and delivers results that can be either utilized in an automated decision support system or assessed by a human analyst. Modern computer data mining systems self-learn from the previous history of the investigated system, formulating and testing hypotheses about the rules which this system obeys. In general, data mining techniques can be divided into two broad categories: predictive data mining and discovery data mining.

Predictive data mining refers to a range of techniques that find relationships between a specific variable (called the target variable) and the other variables in your data. The following are examples of predictive mining techniques: Classification is about assigning data records into pre-defined categories. In this case the target variable is the category and the techniques discover the relationship between the other variables and the category. Regression is about predicting the value of a continuous variable from the other variables in a data record. The most familiar value prediction techniques include linear and polynomial regression. Discovery data mining refers to a range of techniques that find patterns inside your data without any prior knowledge of what patterns exist. The following are examples of discovery mining techniques: Clustering is the term for a range of techniques which attempt to group data records on the basis of how similar they are. Association and sequence analysis describe a family of techniques that determine relationships between data records. Contemporary requirements for data processing have the following particular features: the data are of practically unlimited quantity; the data are heterogeneous (quantitative, qualitative and textual); the results should be specific and comprehensible; and the tools for processing raw data should be easy to use. The modern technology of data mining (discovery-driven data mining) is based on the concept of patterns, reflecting fragments of multidimensional relationships within the data. These patterns are regularities characteristic of sub-samples of the data that can be expressed compactly in a form easy for a human to comprehend. Traditional mathematical statistics, which long claimed the role of the basic tool of data analysis, clearly falls short of these emerging problems. The main reason is its reliance on the concept of the sample average, which leads to operations on fictitious values. The techniques of mathematical statistics have proved useful mainly for the verification of previously defined hypotheses (verification-driven data mining) and for the rough exploratory analysis that forms the basis of operative analytical data processing (online analytical processing, OLAP). At the same time, a strong correlation exists between data mining and statistical methods, because statistical methods can be divided into categories similar to data mining techniques: dependence methods and interdependence methods. The objective of the dependence methods is to determine whether the set of independent variables affects the set of dependent variables individually and/or jointly. That is, statistical techniques only test for the presence or absence of relationships between the two sets of variables. The objective of interdependence methods is to identify how and why the variables are related among themselves. The classification of the data mining and statistical methods is the following:
Data Mining: 1. Predictive techniques: Classification, Regression. 2. Discovery techniques: Association Analysis, Sequence Analysis, Clustering.
Statistical methods: 1. Dependence methods: Discriminant analysis, Logistic regression. 2. Interdependence methods: Correlation analysis, Correspondence analysis, Cluster analysis.
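To give a concrete feel for the two categories, the sketch below first fits a straight line to a few invented (x, y) pairs (a predictive technique) and then groups one-dimensional values into two clusters with a naive k-means loop (a discovery technique); the data and the choice of two clusters are assumptions made only for this illustration.

# Illustrative only: simple least-squares regression (predictive mining)
# and a naive one-dimensional k-means grouping (discovery mining).

# Predictive: fit y = a*x + b by least squares on invented data.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(f"Predicted y for x = 6: {a * 6 + b:.2f}")

# Discovery: group values into two clusters by repeatedly assigning each
# point to the nearest centre and recomputing the centres.
values = [1.0, 1.2, 0.8, 9.5, 10.1, 9.8]
centres = [min(values), max(values)]
for _ in range(10):
    groups = {0: [], 1: []}
    for v in values:
        nearest = 0 if abs(v - centres[0]) <= abs(v - centres[1]) else 1
        groups[nearest].append(v)
    centres = [sum(groups[i]) / len(groups[i]) for i in (0, 1)]
print("Cluster centres:", centres)

The first part predicts a value of the target variable from another variable; the second discovers groupings in the data without being told what to look for, which is the essential difference between the two families of techniques.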

UNIT 2: PHYSICAL DESIGN OF COMPUTER SUB-SYSTEMS


Database: A database is a collection of tables consisting of information that is organized so that it can easily be accessed, managed, and updated.
DATABASE MANAGEMENT SYSTEM
A DBMS program helps organizations use their integrated collections of data records and files, known as databases. It allows different user application programs to easily access the same database. For example, a DBMS makes it easy for an employee database to be accessed by payroll, employee benefits, and other human resource programs. A DBMS also simplifies the process of retrieving information from databases in the form of displays and reports. Instead of having to write computer programs to extract information, end users can ask simple questions in a query language. Examples of popular mainframe and midrange packages are DB2 by IBM and Oracle 8 by Oracle Corporation, while Microsoft Access is a popular microcomputer package.
Components of a Database Management System:
1. Database administrator
2. Schema: rules and regulations that describe the nature of logical and physical relationships among records in the database.
3. People who enter the data.
4. People who need the data.
5. The database itself.
Objectives of the DBMS:
1. Provide for mass storage of relevant data.
2. Make access to the data easy for the user.
3. Provide prompt response to user requests for data.
4. Make the latest modifications to the database available immediately.
5. Eliminate redundant data.
6. Allow multiple users to be active at one time.
7. Allow for growth in the database system.
8. Protect the data from physical harm and unauthorized access.
RELATIONAL DATABASE MANAGEMENT SYSTEM
The relational database model was conceived in 1969 by E. F. Codd, then a researcher at IBM. The model is based on branches of mathematics called set theory and predicate logic. The basic idea behind the relational model is that a database consists of a series of unordered tables (or relations) that can be manipulated using non-procedural operations that return tables. This model was in stark contrast to the more traditional database theories of the time, which were much more complicated, less flexible and dependent on the physical storage methods of the data. When designing a database, you have to make decisions regarding how best to take some system in the real world and model it in a database. This consists of deciding which tables to create, what columns they will contain, as well as the relationships between the tables. A well-designed database takes time and effort to conceive, build and refine. The benefits of a database that has been designed according to the relational model are numerous. Some of them are:

Data entry, updates and deletions will be efficient.
Data retrieval, summarization and reporting will also be efficient.
Since the database follows a well-formulated model, it behaves predictably.
Since much of the information is stored in the database rather than in the application, the database is somewhat self-documenting.
Changes to the database schema are easy to make.
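To make the idea of asking a "simple question in a query language" concrete, here is a minimal sketch using Python's built-in sqlite3 module. The employee table, its columns and the sample rows are assumptions invented for the example; the point is only that payroll, benefits and other programs can each query the same table instead of each writing a custom extraction program.

# Minimal sketch: querying a shared relational table with SQL.
# Table structure and data are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("""
    CREATE TABLE employee (
        emp_id     INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        department TEXT NOT NULL,
        salary     REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO employee (emp_id, name, department, salary) VALUES (?, ?, ?, ?)",
    [(1, "Rao", "Payroll", 42000.0),
     (2, "Lakshmi", "Benefits", 51000.0),
     (3, "Kiran", "Payroll", 38000.0)],
)

# The same table serves payroll, benefits and other HR programs;
# each one simply issues a different query.
for name, salary in conn.execute(
        "SELECT name, salary FROM employee WHERE department = ? ORDER BY salary DESC",
        ("Payroll",)):
    print(name, salary)

conn.close()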

Rules for relational database design
1. There should be a clear relationship between the database and the functionality it supports.
2. All information should be stored as values.
3. Every database should contain a primary key.
4. The database must use null values in place of missing data.
5. The database schema must be described using relational database syntax.
6. The database must support a common language, that is, SQL (Structured Query Language).
7. The database system must be able to update data.
8. The database must provide insert, update and delete functionality.
9. Changes to the physical and logical structure must be transparent.
10. The database must support integrity constraints.
Normal Forms
The normal forms defined in relational database theory represent guidelines for record design. The normalization rules are designed to prevent update anomalies and data inconsistencies. Normalization theory gives us the concept of normal forms to assist in achieving the optimum structure. The normal forms are a linear progression of rules that you apply to your database, with each higher normal form achieving a better, more efficient design.
First Normal Form
First normal form deals with the "shape" of a record type. 1NF says that every row-by-column position in a given table must contain only one value, not an array or list of values. If lists of values are stored in a single column, there is no simple way to manipulate those values, and retrieval of data becomes much more laborious and difficult to generalize. Under first normal form, all occurrences of a record type must contain the same number of fields. First normal form excludes variable repeating fields and groups. This is not so much a design guideline as a matter of definition: relational database theory doesn't deal with records having a variable number of fields.
Second Normal Form
A table is said to be in Second Normal Form (2NF) if it is in 1NF and every non-key column is fully dependent on the (entire) primary key. Put another way, tables should only store data relating to one "thing" (or entity), and that entity should be described by its primary key. Second normal form is violated when a non-key field is a fact about a subset of a key. It is only relevant when the key is composite, i.e., consists of several fields. Consider the following inventory record:
---------------------------------------------------
| PART | WAREHOUSE | QUANTITY | WAREHOUSE-ADDRESS |
====================-------------------------------


The key here consists of the PART and WAREHOUSE fields together, but WAREHOUSE-ADDRESS is a fact about the WAREHOUSE alone. The basic problems with this design are: The warehouse address is repeated in every record that refers to a part stored in that warehouse. If the address of the warehouse changes, every record referring to a part stored in that warehouse must be updated. Because of the redundancy, the data might become inconsistent, with different records showing different addresses for the same warehouse. If at some point in time there are no parts stored in the warehouse, there may be no record in which to keep the warehouse's address. To satisfy second normal form, the record shown above should be decomposed into (replaced by) the two records:
-------------------------------
| PART | WAREHOUSE | QUANTITY |
====================-----------

---------------------------------
| WAREHOUSE | WAREHOUSE-ADDRESS |
=============--------------------
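A rough sketch of the same decomposition expressed as SQL table definitions, run here through Python's built-in sqlite3 module. The column types and the foreign-key choice are assumptions made for illustration only.

# Minimal sketch of the second-normal-form decomposition shown above.
# Column types and constraints are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- 2NF design: the address is a fact about the warehouse alone,
    -- so it moves to its own table keyed by WAREHOUSE.
    CREATE TABLE warehouse (
        warehouse         TEXT PRIMARY KEY,
        warehouse_address TEXT NOT NULL
    );
    CREATE TABLE part_stock (
        part      TEXT NOT NULL,
        warehouse TEXT NOT NULL REFERENCES warehouse(warehouse),
        quantity  INTEGER NOT NULL,
        PRIMARY KEY (part, warehouse)
    );
""")
# The address is now stored once per warehouse and can be updated in one place.
conn.execute("INSERT INTO warehouse VALUES ('W1', '12 Dock Road')")
conn.execute("INSERT INTO part_stock VALUES ('P100', 'W1', 40)")
conn.close()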

When a data design is changed in this way, replacing unnormalized records with normalized records, the process is referred to as normalization. The term "normalization" is sometimes used relative to a particular normal form. Thus a set of records may be normalized with respect to second normal form but not with respect to third. The normalized design enhances the integrity of the data, by minimizing redundancy and inconsistency, but at some possible performance cost for certain retrieval applications. Third Normal Form A table is said to be in Third Normal Form (3NF), if it is in 2NF and if all non-key columns are mutually independent. Third normal form is violated when a non-key field is a fact about another non-key field, as in
------------------------------------
| EMPLOYEE | DEPARTMENT | LOCATION |
============------------------------

The EMPLOYEE field is the key. If each department is located in one place, then the LOCATION field is a fact about the DEPARTMENT -- in addition to being a fact about the EMPLOYEE. The problems with this design are the same as those caused by violations of second normal form: The department's location is repeated in the record of every employee assigned to that department. If the location of the department changes, every such record must be updated. Because of the redundancy, the data might become inconsistent, with different records showing different locations for the same department. If a department has no employees, there may be no record in which to keep the department's location. To satisfy third normal form, the record shown above should be decomposed into the two records:
-------------------------
| EMPLOYEE | DEPARTMENT |
============-------------

-------------------------
| DEPARTMENT | LOCATION |
==============-----------

Boyce Codd Normal Form



A relation is in Boyce-Codd Normal Form (BCNF) if every determinant is a candidate key. The Boyce-Codd normal form (abbreviated BCNF) is a "key heavy" derivation of the third normal form. If the key uniquely identifies a row but includes more columns than are actually required to uniquely identify the row, then the table is not in BCNF.
Fourth Normal Form
Under fourth normal form, a record type should not contain two or more independent multi-valued facts about an entity. In addition, the record must satisfy third normal form. Consider employees, skills, and languages, where an employee may have several skills and several languages. We have here two many-to-many relationships, one between employees and skills, and one between employees and languages. Under fourth normal form, these two relationships should not be represented in a single record such as
-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
===============================

Instead, they should be represented in the two records


--------------------    -----------------------
| EMPLOYEE | SKILL |    | EMPLOYEE | LANGUAGE |
====================    =======================

Fifth Normal Form
Fifth normal form deals with cases where information can be reconstructed from smaller pieces of information that can be maintained with less redundancy. Second, third, and fourth normal forms also serve this purpose, but fifth normal form generalizes to cases not covered by the others. Consider an example involving agents, companies, and products. If agents represent companies, companies make products, and agents sell products, then we might want to keep a record of which agent sells which product for which company. This information could be kept in one record type with three fields:
-----------------------------
| AGENT | COMPANY | PRODUCT |
|-------+---------+---------|
| Smith | Ford    | car     |
| Smith | GM      | truck   |
-----------------------------

This form is necessary in the general case. For example, although agent Smith sells cars made by Ford and trucks made by GM, he does not sell Ford trucks or GM cars. Thus we need the combination of three fields to know which combinations are valid and which are not. But suppose that a certain rule was in effect: if an agent sells a certain product, and he represents a company making that product, then he sells that product for that company.
-----------------------------
| AGENT | COMPANY | PRODUCT |
|-------+---------+---------|
| Smith | Ford    | car     |
| Smith | Ford    | truck   |
| Smith | GM      | car     |
| Smith | GM      | truck   |
| Jones | Ford    | car     |
-----------------------------


In this case, it turns out that we can reconstruct all the true facts from a normalized form consisting of three separate record types, each containing two fields:
-------------------
| AGENT | COMPANY |
|-------+---------|
| Smith | Ford    |
| Smith | GM      |
| Jones | Ford    |
-------------------

---------------------
| COMPANY | PRODUCT |
|---------+---------|
| Ford    | car     |
| Ford    | truck   |
| GM      | car     |
| GM      | truck   |
---------------------

-------------------
| AGENT | PRODUCT |
|-------+---------|
| Smith | car     |
| Smith | truck   |
| Jones | car     |
-------------------
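A small Python sketch, with the sample rows above hard-coded, showing that under the stated rule a three-way join of the two-field records reconstructs exactly the valid three-field combinations. The variable names are assumptions made for the example.

# Minimal sketch: reconstructing the AGENT/COMPANY/PRODUCT facts from the
# three binary relations shown above (data copied from the example tables).
agent_company   = {("Smith", "Ford"), ("Smith", "GM"), ("Jones", "Ford")}
company_product = {("Ford", "car"), ("Ford", "truck"), ("GM", "car"), ("GM", "truck")}
agent_product   = {("Smith", "car"), ("Smith", "truck"), ("Jones", "car")}

# Three-way join: keep a combination only if all three pairwise facts hold.
reconstructed = {
    (agent, company, product)
    for (agent, company) in agent_company
    for (company2, product) in company_product
    for (agent2, product2) in agent_product
    if company2 == company and agent2 == agent and product2 == product
}

for row in sorted(reconstructed):
    print(row)
# Under the rule "if an agent sells a product and represents a company making it,
# he sells that product for that company", this prints exactly the five valid rows.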

These three record types are in fifth normal form, whereas the corresponding three-field record shown previously is not. An advantage of fifth normal form is that certain redundancies can be eliminated.
Unavoidable Redundancies
Normalization certainly doesn't remove all redundancies. Certain redundancies seem to be unavoidable, particularly when several multi-valued facts are dependent rather than independent.
SYSTEMS DESIGN
After the completion of requirements analysis for a system, the system design activity takes place for the alternative selected by management. The system design phase usually consists of the following three activities:
1. Reviewing the system's informational and functional requirements.
2. Developing a model of the new system, including logical and physical specifications of outputs, inputs, processing, storage, procedures and personnel.
3. Reporting results to management.
The system design must conform to the purpose, scale and general concepts of the system that management approved during the requirements analysis phase. Therefore, users and systems analysts must review each of these matters again before starting the design, because they establish both the direction and the constraints that system developers must follow during the rest of the activities.
DESIGNING SYSTEM INPUT
Input is the term denoting either an entrance or a change which is inserted into a system and which activates or modifies a process. Input is the medium through which the user communicates his needs to the computer system. It is an abstract concept, used in modeling, system design and system operation.
Factors to be considered while designing system input

Content: The analyst is required to consider the types of data that are needed to be gathered to generate the desired user outputs. This can be quite complicated because new systems often mean new information and new information often requires new sources of data. Timeliness: In data processing, it is very important that data is inputted to computer in time because outputs cannot be produced until certain inputs are available. Media: Another important input consideration includes the choice of input media and subsequently the devices on which to enter the data. Format: After the data contents and media requirements are determined, input formats are considered. While specifying the record formats, for instance, the type and length of each data field as well as any other special characteristics must be defined.

Data input specification: A data input specification is a detailed description of the individual fields (data elements) on an input document together with their characteristics (i.e., type and length). A source document differs from a turnaround document in that the former contains data that change the status of a resource while the latter is a machine-readable document. Transaction throughput is the number of error-free transactions entered during a specified time period. A document should be concise, because longer documents contain more data, take longer to enter and have a greater chance of data entry errors.
Numeric and Mnemonic Coding: Numeric coding substitutes numbers for character data (e.g., 1 = male, 2 = female); mnemonic coding represents data in a form that is easier for the user to understand and remember (e.g., M = male, F = female).
Error Detection: The more quickly an error is detected, the closer the error is to the person who generated it, and so the error is more easily corrected. An example of an illogical combination in a payroll system would be an option to eliminate federal tax withholding. An error suspense record would include the following fields: data entry operator identification, transaction entry date, transaction entry time, transaction type, transaction image, fields in error, error codes, and the date the transaction was re-entered successfully.
Multi-level Messaging: "Multiple levels" of messages means allowing the user to obtain more detailed explanations of an error by using a help option, but not forcing a lengthy message on a user who does not want it. Common user shortcuts can also be supported.
Feedback: The end user should be given a chance to provide feedback on the usability of the system.
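A small illustrative sketch of numeric/mnemonic coding and early, field-level error detection. The field names, codes, range limits and message wording are assumptions made up for the example.

# Minimal sketch: mnemonic/numeric coding plus early, field-level error detection.
# Field names, codes and the rule set are illustrative assumptions.
GENDER_CODES = {"M": "male", "F": "female"}      # mnemonic coding
DEPT_CODES = {1: "Payroll", 2: "Benefits"}       # numeric coding

def validate_record(record):
    """Return a list of (field, message) errors; an empty list means the record is clean."""
    errors = []
    if record.get("gender") not in GENDER_CODES:
        errors.append(("gender", "Ready for gender: enter M or F."))
    if record.get("dept") not in DEPT_CODES:
        errors.append(("dept", "Ready for department: enter 1 (Payroll) or 2 (Benefits)."))
    if not str(record.get("hours", "")).isdigit() or not 0 <= int(record["hours"]) <= 80:
        errors.append(("hours", "Ready for hours worked: enter a whole number from 0 to 80."))
    return errors

print(validate_record({"gender": "F", "dept": 1, "hours": "38"}))   # [] -> clean record
print(validate_record({"gender": "X", "dept": 9, "hours": "120"}))  # three field errors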

Error Messages to be displayed for the end user


Be specific and precise, not general, ambiguous, or vague. (Bad examples: "Syntax error", "Invalid entry", "General Failure".)
Don't just say what's wrong -- be constructive; suggest what needs to be done to correct the error condition.
Be positive; avoid condemnation, possibly even to the point of avoiding pejorative terms such as "invalid", "illegal" or "bad".
Be user-centric and attempt to convey to the user that he or she is in control, by replacing imperatives such as "Enter date" with wording such as "Ready for date."
Consider multiple message levels: the initial or default error message can be brief, but allow the user some mechanism to request additional information.
Be consistent in terminology and wording.
o Place error messages in the same place on the screen.
o Use consistent display characteristics (blinking, color, beeping, etc.)
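A small sketch of the "multiple message levels" idea above: a brief, non-threatening default message with a more detailed explanation available on request. The error codes and wording are made-up examples, not drawn from any real system.

# Minimal sketch: two-level error messages -- brief by default, detail on request.
# Error codes and wording are illustrative assumptions.
MESSAGES = {
    "DATE_FORMAT": {
        "brief": "Ready for date (DD-MM-YYYY).",
        "detail": ("The date could not be read. Please enter the day, month and "
                   "four-digit year separated by hyphens, for example 07-03-2024."),
    },
    "QTY_RANGE": {
        "brief": "Ready for quantity (1-999).",
        "detail": "Quantity must be a whole number between 1 and 999 inclusive.",
    },
}

def show_error(code, want_detail=False):
    msg = MESSAGES[code]
    return msg["detail"] if want_detail else msg["brief"]

print(show_error("DATE_FORMAT"))                     # brief, constructive message
print(show_error("DATE_FORMAT", want_detail=True))   # longer help text on request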

Interactive Screen Design

The primary differences between an interactive and batch environment are:
o interactive processing is done during the organization's prime work hours
o interactive systems usually have multiple, simultaneous users
o the experience level of users runs from novice to highly experienced
o developers must be good communicators because of the need to design systems with error messages, help text, and requests for user responses
The seven step path that marks the structure of an interactive system is:
1. Greeting screen (e.g., company logo)
2. Password screen -- to prevent unauthorized use
3. Main menu -- allow choice of several available applications
4. Intermediate menus -- further delineate choice of functions
5. Function screens -- updating or deleting records
6. Help screens -- how to perform a task
7. Escape options -- from a particular screen or the application
An intermediate menu and a function screen differ in that the former provides choices from a set of related operations while the latter provides the ability to perform tasks such as updates or deletes. The difference between inquiry and command language dialogue modes is that the former asks the user to provide a response to a simple question (e.g., "Do you really want to delete this file?") whereas the latter requires that the user know what he or she wants to do next (e.g., MS-DOS C:> prompt; VAX/VMS $ prompt; Unix shell prompt). GUI interfaces (Windows, Macintosh) provide dialog boxes to prompt the user for required information/parameters.
Directions for designing form-filling screens:
o Fields on the screen should be in the same sequence as on the source document.
o Use cuing to provide the user with information such as field formats (e.g., dates).
o Provide default values.
o Edit all entered fields for transaction errors.
o Move the cursor automatically to the next entry field.
o Allow entry to be free-form (e.g., do not make the user enter leading zeroes).
o Consider having all entries made at the same position on the screen.
A default value is a value automatically supplied by the application when the user leaves a field blank.
The eight parts of an interactive screen menu are:
1. Locator -- what application the user is currently in
2. Menu ID -- allows the more experienced user access without going through the entire menu tree
3. Title
4. User instructions
5. Menu list
6. Escape option
7. User response area
8. System messages (e.g., error messages)
Highlighting should be used for gaining attention and so should be limited to critical information, unusual values, high priority messages, or items that must be changed. Potential problems associated with the overuse of color are:
o Colors have different meanings to different people and in different cultures.
o A certain percentage of the population is known to have color vision deficiency.
o Some color combinations may be disruptive.
Information density is important because density that is too high makes it more difficult to discern the information presented on a screen, especially for novice users.
Rules for defining message content include:
o Use active voice.
o Use short, simple sentences.
o Use affirmative statements.
o Avoid hyphenation and unnecessary punctuation.
o Separate text paragraphs with at least one blank line.
o Keep field width within 40 characters for easy reading.
o Avoid word contractions and abbreviations.
o Use nonthreatening language.
o Avoid godlike language.
o Do not patronize.
o Use mixed case (upper and lower case) letters.
o Use humor carefully.

Symmetry is important to screen design because it is aesthetically pleasing and thus more comforting. Input verification is asking the user to confirm his or her most recent input (e.g., "Are you sure you want to delete this file?"). Adaptive models are useful because they adapt to the user's experience level as it grows from novice to experienced over time. "Within user" sources of variation include: warm-up, fatigue, boredom, environmental conditions, and extraneous events.

The elements of the adaptive model are (a small illustrative sketch follows this list):
o Triggering question to determine user experience level
o Differentiation among user experience levels
o Alternative processing paths based on user level
o Transition of casual user to experienced processing path
o Transition of novice user to experienced processing path
o Allowing the user to move to an easier processing path
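A minimal sketch of such an adaptive model, choosing a processing path from the user's experience level and promoting the user as experience with the system grows. The levels, the promotion rule and the path descriptions are assumptions made up for the example.

# Minimal sketch of an adaptive interface: the processing path depends on the
# user's experience level, and the user is moved up as experience grows.
# Levels, thresholds and path texts are illustrative assumptions.
PATHS = {
    "novice":      "Full menus with prompts and confirmation at every step.",
    "experienced": "Short menu IDs, function keys and direct command entry.",
}

class UserProfile:
    def __init__(self, name, level="novice"):
        self.name = name
        self.level = level
        self.sessions = 0

    def start_session(self):
        self.sessions += 1
        if self.level == "novice" and self.sessions > 10:   # promotion rule (assumed)
            self.level = "experienced"
        return PATHS[self.level]

user = UserProfile("operator-1")
for _ in range(12):
    path = user.start_session()
print(user.level, "->", path)    # after enough sessions the faster path is offered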

Interactive tasks can be designed for closure by providing the user with feedback indicating that a task has been completed. Internal locus of control means making users feel that they are in control of the system, rather than that the system is in control of them. Examples of distracting use of surprise are:
o Highlighting
o Input verification
o Flashing messages
o Auditory messages
Losing the interactive user can be avoided by using short menu paths and "You are here" prompts. Some common user shortcuts are: direct menu access, function keys, and shortened response time.

DESIGNING SYSTEM OUTPUT
The term output applies to any information produced by an information system, whether printed or displayed. System output may be a report, a document or a message. When analysts design computer output, they identify the specific output that is needed to meet the information requirements, select methods for presenting information, and create documents, reports or other formats that contain information produced by the system.
Output objectives: The output from an information system should accomplish one or more of the following objectives:
1. Convey information about past activities, current status or projections of the future.
2. Signal important events, opportunities, problems or warnings.
3. Trigger an action.
4. Confirm an action.
Important Factors to be considered in Output Design:

1. Content: Content refers to the actual pieces of data included among the outputs provided to users. For example, the contents of a weekly report to a sales manager might consist of each salesperson's name, the sales calls made by each salesperson during the week and the total amount of each product sold.
2. Form: Form refers to the way that content is presented to users. Content can be presented in various forms: quantitative, non-quantitative, text, graphics, video and audio.
3. Output volume: The amount of data output required at any one time is known as output volume. If the volume is high, it is better to use a high-speed printer or a rapid-retrieval display unit, which are fast and frequently used output devices.
4. Timeliness: Timeliness refers to when users need outputs. Some outputs are required on a regular, periodic basis, perhaps daily, weekly, monthly, at the end of a quarter or annually. Other types of reports are generated on request.
5. Media: An input/output medium refers to the physical device used for input, storage or output. A variety of output media are available in the market these days, including paper, video display, microfilm, magnetic tape/disk and voice output.
6. Format: The manner in which data are physically arranged is referred to as format. This arrangement is called the output format when referring to data output on a printed report or on a display screen.
Guidelines for output design
1. Proper allocation of output components
2. Classification of reports
3. Provision of page breaks
4. Provision of review facility
5. Flexibility in design
6. Security by intermediary
Problems faced by output systems
1. Delay in providing information
2. Data overload
3. Excessive distribution
4. Non-tailoring
Output Documents
1. External Reports: Produced for use or distribution outside the organization; often on preprinted forms.
2. Internal Reports: Produced for use within the organization; e.g., on stock paper.
3. Periodic Reports: Produced with a set frequency (daily, weekly, monthly, every fifth Tuesday, etc.).
4. Ad-Hoc (On Demand) Reports: Produced at irregular intervals, upon user demand.
5. Detail Reports: Consist of one line per transaction.
6. Summary Reports: Provide an overview.
7. Exception Reports: Show only errors, problems, out-of-range values, or unexpected conditions or events (a small sketch of an exception report follows this list).
8. History Reports: Provide past information about a transaction.
9. Audit Reports: Provide audit information.
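As a small illustration of an exception report (item 7 above), the following sketch prints only the transactions whose values fall outside an assumed acceptable range. The transaction data, field names and range limits are invented for the example.

# Minimal sketch of an exception report: only out-of-range or problem records
# are listed. Transaction data and the acceptable range are illustrative.
transactions = [
    {"id": 101, "item": "paper",   "qty": 12},
    {"id": 102, "item": "toner",   "qty": -3},     # negative quantity -> exception
    {"id": 103, "item": "staples", "qty": 15000},  # implausibly large -> exception
    {"id": 104, "item": "pens",    "qty": 40},
]

def exception_report(records, low=0, high=1000):
    """Yield only the records whose quantity is outside the acceptable range."""
    for r in records:
        if not (low <= r["qty"] <= high):
            yield r

print("EXCEPTION REPORT -- quantities outside 0..1000")
for r in exception_report(transactions):
    print(f"  txn {r['id']:>4}  {r['item']:<8}  qty={r['qty']}")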

Procedure Design

After designing the input system and output system, we have to plan the process design in order to connect input components with output components. The most important tool used in procedure design is the flowchart. It is the common tool used to explain system processes and the components involved in the system. A great deal of information has been gathered since the flowcharts for the subsystems were sketched: inputs and outputs have been detailed, and numerous users have reviewed those flowcharts. At this point, corrections should be made to the subsystem flowcharts. In addition, each task specified inside a subsystem should be broken down further in either mathematical or narrative format. This process ties all the pieces of information we have gathered back to the original MIS goal. It also serves to reverify the detailed design and add flesh to the outline.
A flowchart is a common type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds and their order by connecting these with arrows. This diagrammatic representation can give a step-by-step solution to a given problem. Data is represented in these boxes, and the arrows connecting them represent the flow or direction of flow of data. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.
Symbols
A typical flowchart may have the following kinds of symbols:
Start and end symbols, represented as circles, ovals or rounded rectangles, usually containing the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit enquiry" or "receive product".
Arrows, representing "flow of control". An arrow coming from one symbol and ending at another symbol indicates that control passes to the symbol the arrow points to.
Rectangles (or oblongs), representing processing steps; e.g., "save changes".
Parallelograms, representing input/output; e.g., get X from the user, display X.
Diamonds (rhombus), representing a condition or decision. These typically contain a Yes/No question or True/False test.
A number of other symbols have less universal currency, such as:
A Document, represented as a rectangle with a wavy base;
A Manual input, represented by a parallelogram with the top irregularly sloping up from left to right. An example would be to signify data entry from a form;
A Manual operation, represented by a trapezoid with the longest parallel side at the top, for an operation or adjustment to the process that can only be made manually;
A Data File, represented by a cylinder.
Types of Flow Charts:

Document flowcharts, showing controls over a document flow through a system
Data flowcharts, showing controls over data flows in a system
System flowcharts, showing controls at a physical or resource level
Program flowcharts, showing the controls in a program within a system


Sample Procedure Flow Chart

SYSTEM SECURITY
Controls upon information systems are based upon two underlying principles: the need to ensure the accuracy of the data held by the organization and the need to protect against loss or damage. The most common threats faced by organizational information systems can be placed into the following categories.
Accidents: A number of estimates suggest that 40-65% of all damage caused to information systems or corporate data arises as a result of human error. Some of the ways in which human errors can occur include:
Inaccurate data entry: As an example, consider a typical relational database management system, where update queries are used to change records, tables and reports. If the contents of the query are incorrect, errors might be produced within all of the data manipulated by the query. Significant problems might be caused by adding or removing even a single character in a query.
Attempts to carry out tasks beyond the ability of the employee: In smaller computer-based information systems, a common cause of accidental damage involves users attempting to install new hardware items or software applications.
Failure to comply with procedures for the use of organizational information systems: Where organizational procedures are unclear or fail to anticipate potential problems, users may often ignore established methods, act on their own initiative or perform tasks incorrectly.
Failure to carry out backup procedures: In addition to carrying out regular backups of important business data, it is also necessary to verify that any backup copies made are accurate and free from errors.
Natural disasters: All information systems are susceptible to damage caused by natural phenomena, such as storms, lightning strikes, floods and earthquakes.
Sabotage: With regard to information systems, sabotage may be deliberate or unintentional and carried out on an individual basis or as an act of industrial sabotage. Individual sabotage is typically carried out by a disgruntled employee who wishes to take some form of revenge upon their employer. The logic bomb (sometimes known as a time bomb) is a well-known example of how an employee may cause deliberate damage to the organization's information systems. A logic bomb is a destructive program that activates at a certain time or in reaction to a specific event. In most cases, the logic bomb is activated some months after the employee has left the organization. Another well-known example is the back door. The back door is a section of program code that allows a user to circumvent security procedures in order to gain full access to an
information system. Although back doors have legitimate uses, such as for program testing, they can also be used as an instrument of sabotage.
Vandalism: Deliberate damage caused to hardware, software and data is considered a serious threat to information systems security. The threat from vandalism lies in the fact that the organization is temporarily denied access to some of its resources. Even relatively minor damage to parts of a system can have a significant effect on the organization as a whole. In a small network system, for example, damage to a server or shared storage device might effectively halt the work of all those connected to the network.
Theft: Theft can be divided into two basic categories: physical theft and data theft. Physical theft, as the term implies, involves the theft of hardware and software. Data theft normally involves making copies of important files without causing any harm to the originals. However, if the original files are destroyed or damaged, then the value of the copied data is automatically increased.
Unauthorized Use: One of the most common security risks in relation to computerized information systems is the danger of unauthorized access to confidential data. Contrary to the popular belief encouraged by the media, the risk of hackers gaining access to a corporate information system is relatively small. Most security breaches involving confidential data can be attributed to the employees of the organization.
Computer viruses: There are several different types of computer virus. Some examples include:
- The link virus attaches itself to the directory structure of a disk. In this way, the virus is able to manipulate file and directory information. Link viruses can be difficult to remove since they become embedded within the affected data.
- Parasitic viruses insert copies of themselves into legitimate programs, such as operating system files, often making little effort to disguise their presence. In this way, each time the program file is run, so too is the virus. The virus remains in the computer's memory performing various operations in the background. Such operations might range from creating additional copies of itself to deleting files on a hard disk.
- Macro viruses are created using the high-level programming languages found in email packages, web browsers and applications software, such as word processors.
Reducing threats to information systems:
Containment: The containment strategy attempts to control access to an information system. One approach involves making potential targets as unattractive as possible. This can be achieved in several ways, but a common method involves creating the impression that the target information system contains data of little or no value. A second technique involves creating an effective series of defenses against potential threats. If the expense, time and effort required to gain access to the information system are greater than any benefits derived from gaining access, then intrusion becomes less likely.
Deterrence: A strategy based upon deterrence uses the threat of punishment to discourage potential intruders. The overall approach is one of anticipating and countering the motives of those most likely to threaten the security of the system. A common method involves constantly advertising and reinforcing the penalties for unauthorized access.
Obfuscation: Obfuscation concerns itself with hiding or distributing assets so that any damage caused can be limited.
This provides a more comprehensive approach to security than containment or deterrence since it also provides a measure of protection against theft and other threats. Another method involves carrying out regular audits of data, hardware, software and security measures.


Recovery: In anticipating damage or loss, a great deal of emphasis is placed upon backup procedures and recovery measures. In large organizations, a backup site might be created, so that data processing can be switched to a secondary site immediately in the event of an emergency.
Types of Controls:
Physical Protection: Physical protection involves the use of physical barriers intended to protect against theft and unauthorized access. The reasoning behind such an approach is extremely simple: if access to rooms and equipment is restricted, the risks of theft and vandalism are reduced.
Biometric Controls: These controls make use of the unique characteristics of individuals in order to restrict access to sensitive information or equipment. Scanners that check fingerprints, voice prints or even retinal patterns are examples of biometric controls.
Telecommunications controls: These controls help to verify the identity of a particular user. Common types of communications controls include passwords and user validation routines.
Auditing: This involves taking stock of procedures, hardware, software and data at regular intervals. With regard to software and data, audits can be carried out automatically with an appropriate program. Auditing software works by scanning the hard disk drives of any computers, terminals and servers attached to a network system.
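A rough sketch of the automated auditing idea applied to software and data files: record a checksum for each file and report any file whose contents have changed (or disappeared) since the last audit. The directory to scan and the baseline file name are assumptions chosen for the example.

# Minimal sketch of automated auditing: detect changed files by comparing
# SHA-256 checksums against a previously recorded baseline.
# The directory to scan and the baseline file name are illustrative assumptions.
import hashlib
import json
from pathlib import Path

BASELINE = Path("audit_baseline.json")

def checksum(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot(directory):
    return {str(p): checksum(p)
            for p in Path(directory).rglob("*")
            if p.is_file() and p.name != BASELINE.name}

def audit(directory):
    current = snapshot(directory)
    previous = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    changed = [f for f, digest in current.items() if previous.get(f) not in (None, digest)]
    missing = [f for f in previous if f not in current]
    BASELINE.write_text(json.dumps(current, indent=2))   # new baseline for the next audit
    return changed, missing

if __name__ == "__main__":
    changed, missing = audit(".")
    print("changed files:", changed)
    print("missing files:", missing)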


UNIT 3: MIS DEVELOPMENT


INFORMATION SYSTEM PLANNING
Information system planning is focused on determining the information needs of the organization and on ensuring that information system planning aligns with the overall business planning. For information system planning, R. Nolan proposed a model known as the Nolan stage model, which initially had four stages of growth; it was later revised, adding two intermediate growth stages. The Nolan model basically describes which stage an organization's information system has reached, and this provides a base for planning how to proceed to the next stage of growth. The Nolan stage model explains the evolution of information systems within an organization by considering the various stages of growth. The model was developed in the mid-seventies. Expenditure on information technology increases as the information system passes through the various stages.
Stage 1: Initiation
This is the first stage of Nolan's model. It depicts the use of the computer system for transaction processing, which is basically the bottom line of the organizational hierarchy. At the transaction processing level a high volume of data processing is typically done, in terms of accounting for business transactions, billing, payroll, etc., so very little planning of the information system is required. The users are mostly unaware of the technology. New applications are developed with the help of traditional languages like COBOL and FORTRAN. System analysis and design has very few methodologies.
Stage 2: Contagion
This is the second stage of Nolan's growth model, also known as the expansion stage. It is related to unplanned and uncontrolled growth. At this point users have developed an interest in the possibilities of IT but still do not know much about its pros and cons. The growth of a large number of IT applications with minimal checks is the key feature of this stage. Technical problems within the development of programs appear in this stage. There is very little control over the development of information systems or over the expenditure associated with IT.
Stage 3: Control
This is the third stage of Nolan's stage model. Because of the unplanned growth of a large number of IT applications and projects, a need arose to manage the information system and to reorganize the data processing department. The data processing manager becomes more accountable for justifying the expenditure on information system development. The growth of projects is controlled by imposing charges on user departments for information system development projects and the use of computer services. Users see progress in the development of information systems.
Stage 4: Integration
This is the fourth stage of Nolan's model, known as the integration stage. At this stage data processing takes a new direction: information systems become more information oriented, giving importance to the information product. To facilitate this, interactive terminals are introduced in user departments, databases are developed and data communication technology is introduced. The previously controlled user departments are now in a position to satisfy the repressed demand for information support, so there is a tremendously growing demand for IT applications. As a consequence, there is also a hike in expenditure.
Stage 5: Data Administration
This is the fifth stage of Nolan's stage model. This stage did not exist in the initial model.
This stage has come into existence to overcome and also to control the problem of data redundancy. At this moment it is realized that data is an important resource of the organization. So it should be duly planned and managed. This stage is characterized by the development of an integrated database serving whole organizations information need and also develop an IT application to successfully access these databases. Users become more accountable for the integrity and appropriate use of the data and information resources.

Stage 6: Maturity
This is the sixth stage, added to the enhanced model. It points towards a mature organization, which takes the information system as an integral part of the organization's functioning. It indicates that the application portfolio is complete and representative of the organization's activity; the application portfolio matches the overall objectives of the organization. Planning of the information system is coordinated and comprehensive, because top management has realized that information is an important resource. The manager of the information system is on the same footing as other managers of the organization. Thus, planning the development of the information system is built into the organization's overall development.
SYSTEM DEVELOPMENT LIFE CYCLE
As with everything in nature, whether a man, an animal or an event, a computer-based information system or management information system has a definite beginning and, after passing through many activities, reaches a completion stage. When the systems approach to problem solving is applied to the development of information system solutions to business problems, it is called information systems development or application development. Most computer-based information systems are conceived, designed, and implemented using some form of systematic development process. In this process, end users and information specialists design information systems based on an analysis of the information requirements of an organization. Using the systems approach to develop information system solutions involves a multistep process called the information systems development cycle, also known as the systems development life cycle (SDLC) or the phases/stages of MIS.
[Figure: Traditional / Waterfall Cycle Model of SDLC -- Define Project Scope / Initiation; Analysis and Planning; Development Design; Coding; Testing (Unit Testing, Integration Testing, Operational Testing); Implementation; Integration; Deployment; Maintenance and Support]



Defining Project Scope / Initiation: The first step is to identify a need for the new system. This will include determining whether a business problem or opportunity exists, conducting a feasibility study to determine if the proposed solution is cost effective, and developing a project plan. This process may involve end users who come up with an idea for improving their work. Ideally, the process occurs in tandem with a review of the organization's strategic plan to ensure that IT is being used to help the organization achieve its strategic objectives. Management may need to approve concept ideas before any money is budgeted for development.
Requirement Analysis: Requirements analysis is the process of analyzing the information needs of the end users, the organizational environment, and any system presently being used, and developing the functional requirements of a system that can meet the needs of the users. The requirements should be recorded in a document, email, user interface storyboard, executable prototype, or some other form. The requirements documentation should be referred to throughout the rest of the system development process to ensure the developing project aligns with user needs and requirements.
Development Design: After the requirements have been determined, the necessary specifications for the hardware, software, people and data resources, and the information products that will satisfy the functional requirements of the proposed system, can be determined. The design will serve as a blueprint for the system and helps detect problems before these errors or problems are built into the final system.
Coding: Coding is the act of creating the final system; this step is done by IS professionals.
Testing: The system must be tested to evaluate its actual functionality in relation to expected or intended functionality. Other issues to consider during this stage are converting old data into the new system and training employees to use the new system. End users will be key in determining whether the developed system meets the intended requirements, and the extent to which the system is actually used.
Unit testing: Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application; in procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers (a short unit-test sketch appears after the Implementation stage below).
Integration testing: Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before system testing.
Operational testing: A continuing process of evaluation that may be applied to systems and system components to determine their validity or reliability.
Implementation: Implementation is the realization of an application, or the execution of a system. Implementation involves a conversion process from the use of a present system to the operation of a new or improved application. Conversion methods can soften the impact of introducing new technology into an organization. Thus, conversion may involve operating both new and old systems in parallel for a trial period, or operation of a pilot system on a trial basis at one location. Phasing in the new system one application or location at a time is another popular conversion method.
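To make the unit-testing step concrete, here is a minimal sketch using Python's built-in unittest module. The function under test (compute_net_pay) and its tax rule are invented purely for the example.

# Minimal sketch of unit testing: one small unit (a pay-calculation function)
# is exercised in isolation. The function and the tax rule are illustrative.
import unittest

def compute_net_pay(gross, tax_rate=0.10):
    """Return gross pay minus tax; negative gross pay is rejected."""
    if gross < 0:
        raise ValueError("gross pay cannot be negative")
    return round(gross * (1 - tax_rate), 2)

class ComputeNetPayTests(unittest.TestCase):
    def test_typical_pay(self):
        self.assertEqual(compute_net_pay(1000), 900.0)

    def test_zero_pay(self):
        self.assertEqual(compute_net_pay(0), 0.0)

    def test_negative_pay_rejected(self):
        with self.assertRaises(ValueError):
            compute_net_pay(-5)

if __name__ == "__main__":
    unittest.main()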
Integration: After implementation, communication systems are established between all the functional areas and integration is established between the old and new systems.
Deployment: Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them.
Maintenance and Support: Systems maintenance is the final stage of the system development cycle. It involves monitoring, evaluating, and modifying a system to make desirable or necessary improvements. This may include a post-implementation review process to ensure that the newly implemented system is meeting the functional business requirements that were established for it when it was designed. Errors in the development of a system are corrected by the maintenance activity.
Systems maintenance also includes modifying a system due to internal changes in a business or external changes in the business environment.
SYSTEM ANALYSIS AND DESIGN METHOD
1. Feasibility study - prepare for feasibility study - define the problem - select best feasibility option - prepare feasibility report
2. Requirement analysis - establish analysis framework - investigate current processing - investigate current data - derive logical view of current services - develop investigation report
3. Requirement specification - define required system processing - develop required data model - derive system functions - enhance required data model - develop processing specifications - confirm system objectives
4. Logical system specification - define user dialogs - define update processes - define enquiry processes
5. Physical design - prepare physical component design - create physical data design - create functional component implementation map - optimize physical data design
6. Implementation
7. Maintenance
Advantages of Traditional methods:
1. Easy to understand
2. Provide development structure to the inexperienced
3. Objectives are well defined
4. Provide requirement stability
5. Good for management control
6. Quality is valued over schedule
When to use traditional methods:
1. When requirements are well known
2. When product definition is stable
3. When developing a new version of an existing system
4. When technology is understood

REASONS FOR EVOLUTION OF RAD METHOD OR MODERN DEVELOPMENT METHODS OR DEMERITS OF TRADITIONAL METHODS:
A gap of understanding between users and developers: Users tend to know less about what is possible and practical from a technology perspective, while developers may be less aware of the underlying business decision-making issues which lie behind the systems development requirement.
Tendency of developers to isolate themselves from users: Historically, systems developers have been able to hide behind a wall of jargon, thus putting the user community at an immediate disadvantage when discussing IS/IT issues. The tendency for isolation is enhanced by the physical separation of some computer staff in their own air-conditioned rooms.
Quality measured by closeness of product to specification: This is a fundamental difficulty -- the observation that the system does exactly what the specification said it would do hides the fact that the system may still not deliver the information that the users need for decision-making purposes.
Long development times: The traditional model shows that the processes of analysis and design can be very laborious and time consuming. Development times are not helped by the fact that an organization may be facing rapidly changing business conditions, so requirements may similarly be changing.
Business needs change during the development process: A method is needed where successive iterations in the development process are possible so that the latest requirements can be incorporated.
What users get isn't necessarily what they want: The first sight the user may get of the new information system is at the testing or training stage. Only at this point will it be seen whether the system as delivered by the IS/IT professionals is what the user actually needs; an appropriate analogy here is the purchase of a house or car simply on the basis of discussions with an estate agent or garage, rather than by actually visiting the house or driving the car.
RAPID APPLICATION DEVELOPMENT
James Martin, in the book that first coined the term, wrote: "Rapid Application Development (RAD) is a development lifecycle designed to give much faster development and higher-quality results than those achieved with the traditional lifecycle. It is designed to take the maximum advantage of powerful development software that has evolved recently." RAD is thus the idea that products can be developed faster and with higher quality through:
Gathering requirements using workshops or focus groups
Prototyping and early, reiterative user testing of designs
The re-use of software components
A rigidly paced schedule that defers design improvements to the next product version
Less formality in reviews and other team communication
Some companies offer products that provide some or all of the tools for RAD software development. (The concept can be applied to hardware development as well.) These products include requirements gathering tools, prototyping tools, computer-aided software engineering tools, language development environments such as those for the Java platform, groupware for communication among development members, and testing tools. RAD usually embraces object-oriented programming methodology, which inherently fosters software re-use. The most popular object-oriented programming languages, C++ and Java, are offered in visual programming packages often described as providing rapid application development.
RAD Process
Requirement planning phase: (A workshop utilizing structured discussion of business problems)
User description phase: (Automation tools capture information from users)
Construction phase: (Productivity tools such as code generators, screen generators, etc.)
Cutover phase: (Illustration of system, user acceptance testing and user training)

Requirements Planning: Also known as the Concept Definition Stage, this stage defines the business functions and data subject areas that the system will support and determines the system's scope.
User Design: Also known as the Functional Design Stage, this stage uses workshops to model the system's data and processes and to build a working prototype of critical system components.
Construction: Also known as the Development Stage, this stage completes the construction of the physical application system, builds the conversion system, and develops user aids and implementation work plans.
Implementation: Also known as the Deployment Stage, this stage includes final user testing and training, data conversion, and the implementation of the application system.
The evidence from project failures in the 1980s and 1990s implies that traditional structured methodologies have a tendency to deliver systems that arrive too late and therefore no longer meet their original requirement. Traditional methods can fail in a number of ways, as outlined above.
PROTOTYPE APPLICATION DEVELOPMENT
Software prototyping, an activity during certain software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed. A prototype typically simulates only a few aspects of the eventual program, and may be completely different from the eventual implementation. The conventional purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Prototyping can also be used by end users to describe and prove requirements that developers have not considered, so "controlling the prototype" can be a key factor in the commercial relationship between developers and their clients. This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost.
[Figure: Prototype Application Development cycle -- identify the basic information requirements, basic needs, scope of application and estimated costs; develop an initial prototype; use the prototype and refine requirements; test and revise features of the prototype; if the user/designer is not satisfied, revise/enhance the working prototype; if satisfied, obtain user sign-off of the approved prototype and use the operational prototype as the specification for application development.]


Advantages of prototyping
There are many advantages to using prototyping in software development, some tangible, some abstract.
Reduced time and costs: Prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software.
Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, allowing them to provide better and more complete feedback and specifications. The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The final product is more likely to satisfy the user's desire for look, feel and performance.
Disadvantages of prototyping
Using, or perhaps misusing, prototyping can also have disadvantages.
Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a
prototype is limited in functionality it may not scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model. User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require all proposed features be included in the final system this can lead to conflict. Developer misunderstanding of user objectives: Developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending Enterprise software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If the developer has committed delivery before the user requirements were reviewed, developers are between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements. Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems like attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.) Excessive development time of the prototype: A key property to prototyping is the fact that it is supposed to be done quickly. If the developers lose sight of this fact, they very well may try to develop a prototype that is too complex. When the prototype is thrown away the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing the prototype. Users can become stuck in debates over details of the prototype, holding up the development team and delaying the final product. Expense of implementing prototyping: the start up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just jump into the prototyping without bothering to retrain their workers as much as they should.

UNIT 4: INFORMATION SYSTEMS AND COMPUTERS IN MANAGEMENT


COMPUTERS IN MANAGEMENT
The rapid evolution of computer technology is expanding man's desire to obtain computer assistance in solving more and more complex problems: problems which, a few years ago, were considered solely in the domain of man's intuitive and judgmental processes, particularly in organizations. Information systems are becoming of ever greater interest in progressive and dynamic organizations. The need to obtain access to information conveniently, quickly and economically makes it imperative to devise procedures for the creation, management and utilization of databases in organizations. The basic uses of a computer system in an organization are:
Perception - initial entry of data, whether captured or generated, into the organization;
Recording - physical capture of data;
Processing - transformation according to the specific needs of the organization;
Transmission - the flows which occur in an information system;
Storage - presupposes some expected future use;
Retrieval - search for recorded data;
Presentation - reporting, communication; and
Decision making - a controversial inclusion, except to the extent that the information system engages in decision making that concerns itself.
The role of MIS in an organization can be compared with the role of the heart in the body. It plays the following important roles in the organization:
- It ensures that the appropriate data is collected from the various sources, processed and sent further to all needy destinations.
- It helps in strategic planning, modeling systems, decision support systems, management control, operational control and transaction processing.
- It helps junior management personnel by providing operational data for planning, scheduling and control, and helps them further in decision-making at the operational level to correct an out-of-control situation.
- It helps the top management in goal setting, strategic planning and evolving the business plans and their implementation.
- It plays the role of information generation, communication and problem identification, and helps in the process of decision-making.
1. Enterprise collaborative systems: The systems which provide basic services to an employee in an organization are called enterprise collaborative systems. These are cross-functional information systems that enhance communication, coordination and collaboration among the members of business teams and workgroups. The goal of enterprise collaborative systems is to enable us to work together more easily and effectively.
Components of enterprise collaborative systems:
a. Electronic Communication Tools: The basic services provided by electronic communication tools are email, instant messaging, voice mail, faxing and paging. These tools enable us to electronically send messages over computer networks and help team members share everything from voice to text messages. The ease and efficiency of such communications are major contributors to the collaboration process.
b. Electronic Conferencing Tools: These help people communicate and collaborate while working together. A variety of conferencing methods enables the members of teams and workgroups at different locations to exchange ideas interactively at the same time, or at different times at their convenience.
The services provided by electronic conferencing tools are voice conferencing, video conferencing, discussion forums and electronic meeting systems.
c. Work Collaborative Systems: These tools help people accomplish or manage group work activities. This category of software includes calendaring and scheduling tools, task and project management tools, workflow systems and knowledge management tools.
Functional Business Systems: There are as many ways to use information technology in business as there are business activities to be performed, business problems to be solved, and business opportunities to be pursued. Let us have a basic understanding and appreciation of the major ways information systems are used to support each of the functions of business that must be accomplished in any company that wants to succeed.
1. Marketing Systems: The business function of marketing is concerned with the planning, promotion and sale of existing products in existing markets, and the development of new products and new markets to better attract and serve present and potential customers. Thus, marketing performs a vital function in the operation of a business enterprise. Business firms have increasingly turned to information technology to help them perform vital marketing functions in the face of the rapid changes of today's environment.
a. Interactive Marketing: Interactive marketing refers to the evolving trend in marketing whereby marketing has moved from a transaction-based effort to a conversation. Interactive marketing is the ability to address the customer, remember what the customer says and address the customer again in a way that illustrates that we remember what the customer has told us. Amazon.com is an excellent example of the use of interactive marketing, as customers record their preferences and are shown book selections that match not only their preferences but also recent purchases.
b. Sales Force Automation Systems (SFA): Typically a part of a company's customer relationship management system, SFA is a system that automatically records all the stages in a sales process. SFA includes a contact management system which tracks all contact that has been made with a given customer, the purpose of the contact, and any follow-up that might be required. This ensures that sales efforts are not duplicated, reducing the risk of irritating customers. SFA also includes a sales lead tracking system, which lists potential customers through paid phone lists or customers of related products. Other elements of an SFA system can include sales forecasting, order management and product knowledge.
c. Customer Relationship Management: Customer relationship management is a broadly recognized, widely implemented strategy for managing and nurturing a company's interactions with clients and sales prospects. It involves using technology to organize, automate, and synchronize business processes, principally sales activities, but also those for marketing, customer service, and technical support. The overall goals are to find, attract, and win new clients, nurture and retain those the company already has, entice former clients back into the fold, and reduce the costs of marketing and client service.
2. Production / Operations Systems: Manufacturing information systems support the production / operations function, which includes all activities concerned with the planning and control of the processes of producing goods or services.
Information systems used for operations management and production planning help all firms to plan, monitor and control manufacturing processes.
a. Material requirements planning (MRP) is a production planning and inventory control system used to manage manufacturing processes. Most MRP systems are software-based, although it is possible to conduct MRP by hand as well. An MRP system is intended to simultaneously meet three objectives:
* Ensure materials and products are available for production and delivery to customers.
* Maintain the lowest possible level of inventory.
* Plan manufacturing activities, delivery schedules and purchasing activities.
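As a rough illustration of the calculation at the heart of MRP, the following Python sketch nets gross requirements against on-hand stock and scheduled receipts, and offsets planned order releases by the item's lead time. The demand figures, lead time and function name are invented for illustration only; real MRP packages work from full bills of materials and are far more elaborate.

# Illustrative MRP sketch (hypothetical data, not any real MRP package):
# net requirements = gross requirements - on-hand - scheduled receipts,
# with planned order releases offset backwards by the item's lead time.
def mrp_plan(gross_requirements, on_hand, scheduled_receipts, lead_time):
    """Return (net_requirements, planned_order_releases) per period."""
    periods = len(gross_requirements)
    net = [0] * periods
    releases = [0] * periods
    available = on_hand
    for t in range(periods):
        available += scheduled_receipts[t]
        shortfall = gross_requirements[t] - available
        if shortfall > 0:
            net[t] = shortfall
            available = 0
            release_period = t - lead_time
            if release_period >= 0:          # order must be released earlier to arrive in time
                releases[release_period] = shortfall
        else:
            available -= gross_requirements[t]
    return net, releases

# Example: six weekly periods for one component with a two-week lead time.
gross = [0, 40, 0, 60, 0, 80]
net, releases = mrp_plan(gross, on_hand=50, scheduled_receipts=[0, 0, 20, 0, 0, 0], lead_time=2)
print("Net requirements:      ", net)
print("Planned order releases:", releases)

Run on the sample data, the sketch reports net requirements of 30 and 80 units in periods 4 and 6 and plans the corresponding order releases two periods earlier.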
b. Manufacturing Resource Planning (MRP II) is defined by APICS as a method for the effective planning of all resources of a manufacturing company. Ideally, it addresses operational planning in units and financial planning in dollars, has a simulation capability to answer "what-if" questions, and is an extension of closed-loop MRP.
c. Process control system (PCS) is a general term that encompasses several types of control systems, including supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other smaller control system configurations such as skid-mounted programmable logic controllers (PLC), often found in the industrial sectors and critical infrastructures.
3. Finance Systems: Computer-based financial management systems support business managers and professionals in decisions concerning the financing of a business and the allocation and control of financial resources within the business.
a. Cash Management: A cash management system involves the process of forecasting and managing the cash position.
b. Credit Management: A typical company will have a credit management system that monitors working capital requirements and analyzes the most profitable sources of finance.
c. Capital Budgeting: The capital budgeting process involves evaluating the profitability and financial impact of proposed capital expenditures. Long-term expenditure proposals for facilities and equipment can be analyzed using a variety of return-on-investment evaluation techniques.
d. Investment Management: Investment management is the professional management of various securities (shares, bonds and other securities) and assets (e.g., real estate) to meet specified investment goals for the benefit of the investors.
4. Accounting Systems: Accounting systems are the oldest and most widely used information systems in business. Computer-based accounting systems record and report the flow of funds through an organization on a historical basis and produce important financial statements such as balance sheets and income statements. Such systems also produce forecasts of future conditions, such as projected financial statements and financial budgets.
a. Accounts receivable (A/R) is one of a series of accounting transactions dealing with the billing of a customer for goods and services he/she has ordered. In most business entities this is typically done by generating an invoice and mailing or electronically delivering it to the customer, who in turn must pay it within an established timeframe called "creditor payment terms."
b. Accounts payable is a file or account that contains money that a person or company owes to suppliers but has not paid yet (a form of debt). When you receive an invoice you add it to the file, and then you remove it when you pay.
c. The general ledger, sometimes known as the nominal ledger, is the main accounting record of a business which uses double-entry bookkeeping. It will usually include accounts for such items as current assets, fixed assets, liabilities, revenue and expense items, gains and losses.
d. Inventory control: Supervision of the supply, storage and accessibility of items in order to ensure an adequate supply without excessive oversupply.
e. Payroll: Accounting systems process the payroll of employees on the basis of time and attendance and work performance.
5. Human Resource Systems: The human resource management system involves the recruitment, placement, evaluation, compensation and development of the employees of an organization.
The goal of human resource management is the effective and efficient use of the human resources of a company. Human resource information systems are designed to support:
a. Planning to meet the personnel needs of the business.
b. Maintaining employee details and the skill inventory records of current employees.
c. Producing compensation-related information.
OFFICE AUTOMATION SYSTEMS


The term office automation refers to all tools and methods that are applied to office activities and make it possible to process written, visual and sound data in a computer-aided manner. An office automation system (OAS) facilitates everyday information processing tasks in offices and business organizations. These systems include a wide range of tools such as spreadsheets, word processors, and presentation packages. Although telephones, e-mail, v-mail, and fax can be included in this category, we will treat communication systems as a separate category. OASs help people perform personal recordkeeping, writing, and calculation chores efficiently. Of all the system types, OASs and communication systems are the most familiar. Objectives of an office automation system:

- Facilitates everyday information processing tasks in offices and business organizations.
- These systems include a wide range of tools such as spreadsheets, word processors, presentation tools, telephones, EPABX, e-mail, v-mail, fax and Xerox (photocopying) machines.
- Desktop publishing systems comprising text and image processing systems.
Activities / Functions of office automation:
1. Exchange of information
2. Management of administrative documents
3. Handling numerical data
4. Meeting planning
5. Management of work schedules
Office suite tools:
1. Word Processing: Text and image processing systems store, revise, and print documents containing text or image
data. These systems started with simple word processors but have evolved to include desktop publishing systems for creating complex documents ranging from brochures to book chapters.
2. Spreadsheet: Spreadsheets are an efficient method for performing calculations that can be visualized in terms of the cells of a spreadsheet. Although spreadsheet programs seem second nature today, the first spreadsheet program was VisiCalc, which helped create the demand for the first personal computers in the late 1970s.
3. Presentation tool: Presentation packages help managers develop presentations independently, instead of working with typists and technical artists. These products automatically convert outlines into printed pages containing appropriately spaced titles and subtitles. These pages can be copied directly onto transparencies or slides used in presentations.
4. Database: Personal database systems and note-taking systems help people keep track of their own personal data (rather than the organization's shared data).
5. Scheduler: Typical applications include an appointment book and calendar, a to-do list, and a notepad. When using these tools for personal productivity purposes, users can apply any approach they want because the work is unstructured. In these situations, some individuals use them extensively and enjoy major efficiency benefits, whereas others do not use them at all. The same tools can also be used for broader purposes, however, in which case they are incorporated into larger systems that organizations use to structure and routinize tasks. For example, a corporate planning system may require each department manager to fill in and forward a pre-formatted spreadsheet whose uniformity will facilitate the corporation's planning process.

Main office suites: MS Office, AppleWorks, Corel WordPerfect, IBM/Lotus SmartSuite, Sun StarOffice and OpenOffice.org (free).
DECISION SUPPORT SYSTEMS
Decision support systems constitute a class of computer-based information systems, including knowledge-based systems, that support decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help people make decisions about problems which may be rapidly changing and not easily specified in advance. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, personal knowledge, or business models to identify and solve problems and make decisions. Typical information that a decision support application might gather and present includes: inventories of all of your current information assets (including relational data sources, data warehouses, and data marts); comparative sales figures between one week and the next; and projected revenue figures based on new product sales assumptions.
DSS components may be classified as:
1. Inputs: Factors, numbers, and characteristics to analyze
2. User Knowledge and Expertise: Inputs requiring manual analysis by the user
3. Outputs: Transformed data from which DSS "decisions" are generated
4. Decisions: Results generated by the DSS based on user criteria
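To make the four component classes concrete, here is a minimal, hypothetical Python sketch of a DSS-style "what-if" calculation: the input figures, growth assumption and revenue target are invented, the model base is a single growth projection, and the "decision" is simply the model output checked against the user's criterion. It illustrates the structure only and is not based on any particular DSS product.

# Hypothetical DSS sketch: inputs + a simple model base produce a recommendation
# against a user-supplied criterion. All figures are invented for illustration.
def projected_revenue(units_sold, unit_price, growth_rate, periods):
    """Model base: project revenue forward under a constant growth assumption."""
    revenue = []
    units = units_sold
    for _ in range(periods):
        revenue.append(units * unit_price)
        units *= (1 + growth_rate)
    return revenue

# Inputs: factors and numbers to analyze.
inputs = {"units_sold": 1200, "unit_price": 45.0, "periods": 4}

# User knowledge and expertise: the analyst supplies the growth assumption and the target.
growth_assumption = 0.08
revenue_target = 230_000

# Outputs: transformed data produced by the model.
outputs = projected_revenue(inputs["units_sold"], inputs["unit_price"],
                            growth_assumption, inputs["periods"])

# Decision: result generated against the user's criterion.
decision = "launch" if sum(outputs) >= revenue_target else "re-evaluate pricing"
print("Projected revenue per period:", [round(r, 2) for r in outputs])
print("Recommendation:", decision)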
Typical DSS architecture (figure): a user with a structured or unstructured problem interacts through a dialogue system using a planning language, which draws on the corporate database, a user database and the DSS model base.

Benefits of DSS:
1. Improves personal efficiency
2. Expedites problem solving (speeds up the progress of problem solving in an organization)
3. Facilitates interpersonal communication
4. Promotes learning or training
5. Increases organizational control
6. Generates new evidence in support of a decision
7. Creates a competitive advantage over the competition
8. Encourages exploration and discovery on the part of the decision maker
9. Reveals new approaches to thinking about the problem space
10. Helps automate managerial processes
Executive decision-making environment: Information can be used to evaluate the marketplace by surveying changing tastes and needs, monitoring buyers' intentions and attitudes, and assessing the characteristics of the market. Information is critical in keeping tabs on the competition by watching new product developments, shifts in market share, individual company performance, and overall industry trends. Intelligence helps managers anticipate legal and political changes, and
monitor economic conditions at home and abroad. In short, intelligence can provide answers to two key business questions: "How am I doing?" and "Where am I headed?"
1. Environmental information:
a. Organization-specific information: e.g. general information about an organization, its business activities, and key success factors.
b. Industry-specific information: e.g. information about supply and demand situations and change processes in the industry.
c. General knowledge of the business environment: e.g. demographic, physical, international, legislative, political, economic, technological, social and cultural information. General knowledge often relates to matters which organizations cannot affect but which have an influence on, for example, the entire industry.
2. Competitors' information
3. Internal information

Data Flow Representation of the Executive Decision Planning Environment (figure): transaction processing data and internal projections from other areas of the firm, together with external data, feed strategic planning and tactical planning.
Decision Support Tools:
- General purpose languages: NOMAD II, Focus, RAMIS
- Database languages: R-base 5000, SQL, AS/400 Database, Oracle, dBase, Informix, Teradata, Access
- Model base software (the "brain" of decision making): Foresight, Lotus 1-2-3, Model, Multiplan, Omnicalc
- Special purpose planning languages / statistical software: SAS, SPSS

EXPERT SYSTEMS
An expert system is a knowledge-based information system; that is, it uses its knowledge about a specific area to act as an expert consultant to users. The components of an expert system are a knowledge base and software modules that perform inferences on the knowledge and offer answers to a user's questions. Expert systems are being used in many different fields, including medicine, engineering, the physical sciences, and business. For example, expert systems now help diagnose illnesses, search for minerals, analyze compounds, recommend repairs, and do financial planning. Expert systems can support either operations or management activities.
Evolution: Expert systems were introduced by researchers in the Stanford Heuristic Programming Project, led by Edward Feigenbaum, the "father of expert systems," at Stanford University. Expert systems were among the first truly successful forms of AI software. An expert system is software that attempts to provide an answer to a problem, or clarify uncertainties, where normally one or more human experts would need to be consulted. Expert systems are most common in a specific problem domain, and are a traditional application and/or subfield of artificial intelligence. A wide variety of methods can be used to simulate the performance of the expert; however, common to most or all are (1) the creation of a knowledge base which uses some knowledge representation formalism to capture the subject matter expert's (SME) knowledge, and (2) a process of gathering that knowledge from the SME and codifying it according to the formalism, which is called knowledge engineering. Expert systems may or may not have learning components, but a third common element is that once the system is developed it is proven by being placed in the same real-world problem-solving situation as the human SME, typically as an aid to human workers or a supplement to some information system. Expert systems are designed and created to facilitate tasks in the fields of accounting, medicine, process control, financial services, production, human resources, etc. The foundation of a successful expert system depends on a series of technical procedures and development that may be designed by certain technicians and related experts. As such, expert systems do not typically provide a definitive answer, but provide a probabilistic recommendation.
Shells or Inference Engines: A shell is a complete development environment for building and maintaining knowledge-based applications. It provides a step-by-step methodology, and ideally a user-friendly interface such as a graphical interface, for a knowledge engineer, and allows the domain experts themselves to be directly involved in structuring and encoding the knowledge. Examples of shells include Drools, CLIPS, JESS, d3web, and eGanges. The topic of expert systems has many points of contact with general systems theory, operations research, business process reengineering and various topics in applied mathematics and management science.
Advantages:
1. Provides consistent answers for repetitive decisions, processes and tasks.
2. Holds and maintains significant levels of information.
3. Always asks a question that a human might forget to ask.
4. Can work continuously.
5. Can be used by users more frequently.
6. A multi-user expert system can serve more users at a time.
Disadvantages:
- Cannot adapt to a changing environment, unless the knowledge base is changed.
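A minimal Python sketch of the two components just described, a knowledge base of IF-THEN rules and an inference engine, is shown below. The rules and facts are invented for illustration; real shells such as CLIPS or Drools provide far richer rule languages, conflict resolution and explanation facilities.

# Hypothetical sketch of a knowledge base plus a forward-chaining inference engine.
# Each rule is (set of condition facts, conclusion fact); facts and rules are invented.
knowledge_base = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
    ({"rash", "fever"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Forward chaining: keep firing rules whose conditions are met until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}, knowledge_base))
# The engine derives 'possible_flu' and then 'refer_to_doctor' from the initial facts.

The engine repeatedly fires any rule whose conditions are already among the known facts, which is the essence of the "inference on the knowledge" that the text describes.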
Working Model of an Expert System (figure): the user receives expert advice through a user interface program; an inference engine program applies the knowledge base to the user's problem; the knowledge base itself is built and maintained by an expert or knowledge engineer through a knowledge acquisition program (knowledge engineering).

KNOWLEDGE MANAGEMENT SYSTEM (KM SYSTEM)
A knowledge management system refers to a (generally IT-based) system for managing knowledge in organizations, supporting the creation, capture, storage and dissemination of information. It can comprise a part of a knowledge management initiative. The idea of a KM system is to enable employees to have ready access to the organization's documented base of facts, sources of information, and solutions. For example, a typical claim justifying the creation of a KM system might run something like this: an engineer could know the metallurgical composition of an alloy that reduces sound in gear systems. Sharing this information organization-wide can lead to more effective engine design, and it could also lead to ideas for new or improved equipment.
Forms of Knowledge Management System: A KM system could be any of the following:
1. Document based, i.e. any technology that permits creation/management/sharing of formatted documents, such as Lotus Notes, the web, distributed databases, etc.
2. Ontology/Taxonomy based: these are similar to document technologies in the sense that a system of terminologies (i.e. an ontology) is used to summarize the document, e.g. Author, Subject, Organization, etc., as in DAML and other XML-based ontologies.
3. Based on AI technologies which use a customized representation scheme to represent the problem domain.
4. Providing network maps of the organization showing the flow of communication between entities and individuals.
5. Increasingly, social computing tools are being deployed to provide a more organic approach to the creation of a KM system.
KM systems deal with information (although knowledge management as a discipline may extend beyond the information-centric aspect of any system), so they are a class of information system and may build on, or utilize, other information sources. Distinguishing features of a KMS can include:
1. Purpose: a KMS will have an explicit knowledge management objective of some type, such as collaboration or sharing good practice.
2. Context: One perspective on KMS would see knowledge as information that is meaningfully organized, accumulated and embedded in a context of creation and application.
3. Processes: KMS are developed to support and enhance knowledge-intensive processes, tasks or projects of, e.g., creation, construction, identification, capturing, acquisition, selection, valuation, organization, linking, structuring, formalization, visualization, transfer, distribution, retention, maintenance,
refinement, revision, evolution, accessing, retrieval and, last but not least, the application of knowledge, also called the knowledge life cycle.
4. Participants: Users can play the roles of active, involved participants in knowledge networks and communities fostered by KMS, although this is not necessarily the case. KMS designs are held to reflect that knowledge is developed collectively and that the distribution of knowledge leads to its continuous change, reconstruction and application in different contexts, by different participants with differing backgrounds and experiences.
5. Instruments: KMS support KM instruments, e.g., the capture, creation and sharing of the codifiable aspects of experience, the creation of corporate knowledge directories, taxonomies or ontologies, expertise locators, skill management systems, collaborative filtering and handling of interests used to connect people, and the creation and fostering of communities or knowledge networks.
Strategies for delivering knowledge in the digital age:
- Leveraging organizational knowledge by establishing expert networks
- Capturing and distributing expert knowledge; communication and collaboration; content development
- Accessing and retrieving documents stored online

Knowledge Management System Architecture (figure): a user (customer or employee) works through an enterprise knowledge portal that offers a single point of access to all corporate data, personalized views of news and data, collaboration tools and community work areas, supporting enterprise intelligence, knowledge creation and sharing, and document management. The portal server, with its knowledge management engine, draws on an enterprise knowledge base fed by structured data sources (ERP, CRM and other databases) and unstructured data sources (email, groupware, file systems holding documents and presentations, and the web, both Internet and intranet).

A KMS offers integrated services that deploy KM instruments for networks of participants, i.e. active knowledge workers, in knowledge-intensive business processes along the entire knowledge life cycle. KMS can be used by a wide range of cooperative, collaborative, adhocracy and hierarchical communities, virtual organizations, societies and other virtual networks to manage media content, activities, interactions, workflows, projects, departments, privileges, roles and other active participants, in order to extract and generate new knowledge and to enhance, leverage and transfer it into new outcomes, providing new services through new formats, interfaces and communication channels.
Benefits of KM Systems: Some of the advantages claimed for KM systems are:
1. Sharing of valuable organizational information throughout the organizational hierarchy.
2. Can avoid re-inventing the wheel, reducing redundant work.
3. May reduce training time for new employees.
4. Retention of intellectual property after an employee leaves, if such knowledge can be codified.
GROUP DECISION SUPPORT SYSTEMS (GDSS)
Group decision support systems are also referred to as group support systems (GSS) or electronic meeting systems: collaboration technology designed to support meetings and group work. Significant research supports measuring the impacts of adapting human factors for these technologies, facilitating interdisciplinary collaboration, and promoting effective organizational learning. Group decision support systems are categorized within a time-place paradigm. Whether synchronous or asynchronous, the systems matrix comprises:
- same time AND same place
- same time BUT different place
- different time AND different place
- different time BUT same place
Commercial software products that support GDSS practices over the Internet in both synchronous and asynchronous settings include facilitate.com, smartSpeed Connect, ThinkTank and ynSyte's WIQ. There is also an initiative to create open-source software that can support similar group processes in education, where this category of software has been called a Discussion Support System. An electronic meeting system (EMS) is a type of computer software that facilitates creative problem solving and decision-making of groups within or across organizations. The term was coined by Jay Nunamaker in 1991. The term is synonymous with group support systems (GSS) and essentially synonymous with group decision support systems (GDSS). Electronic meeting systems form a class of applications for computer-supported cooperative work. Mainly through (optional) anonymization and parallelization of input, electronic meeting systems overcome many deleterious and inhibitive features of group work.
Evolution: Nunamaker and colleagues developed EMS technology in the mid-eighties. (1) At the University of Arizona, a prototype called Plexsys was developed, building on the PSL/PSA project. (2) At the University of Minnesota a system called SAMM (Software Aided Meeting Management) was created. (3) At Xerox PARC, Colab was developed. (4) Researchers at the University of Michigan developed Mac-based EMS tools.
Features of EMS: The following are features of EMS systems.
Web application: An EMS consists of easy-to-use GUIs that provide desktop functionality while running in a web browser. Group decision support products deliver the functionality of an EMS over the Internet.
Standard functionality: An electronic meeting system is a suite of configurable collaborative software tools that can be used to create predictable, repeatable patterns of collaboration among people working toward a goal. With an electronic meeting system, each user typically has his or her own computer, and each user can contribute to the same shared object at the same time. Thus, nobody needs to wait for a turn to speak, so people don't forget what they want to say while they are waiting for the floor.
Electronic brainstorming: In electronic brainstorming, the group creates a shared list of ideas. In contrast to paper-based brainstorming methods, contributions are directly entered by the participants and immediately visible to all, typically in anonymized form. By sidestepping social barriers (anonymity) and overcoming limitations of process (parallelization), more ideas are generated and shared with less conformity than in a traditional brainstorming session. The benefits of electronic brainstorming increase with group size.
Discussion: Discussion tools in an EMS resemble a structured chat which may be conducted on separate topics in parallel, often based on a superordinate task or question. Parallelization occurs at multiple levels: (1) at the level of multiple topics, which are presented for discussion at the same time (participants are free to contribute to some topics while merely scanning others); and (2) at the level of contributions, which the participants can enter independently of each other. Discussions may be conducted anonymously or named.
Voting: A sophisticated EMS provides a range of vote methods such as numeric scale, rank order, budget or multiple selection. In more advanced systems, a ballot list can be subjected to multiple votes on multiple criteria with different vote methods for, e.g., utility or impact analysis. Results are available in real time, typically both as tables and charts.
Agenda: A modern EMS organizes the process of a meeting through an electronic agenda which structures the activities of a meeting or workshop by topic, chronology and supporting tool. From the agenda, the host (facilitator) of the meeting invites ("starts") the participants to contribute to the various activities.
Minutes: The minutes of an EMS-based meeting exist as database content. They can be exported to file or printed. Formatting and available file formats differ substantially between EMS products.
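As an illustration of the rank-order voting mentioned under the features above, one common way to aggregate such ballots is a Borda count, sketched below in Python. The ballots and option names are invented, and real EMS products use their own (often proprietary) aggregation and weighting schemes; this is only a sketch of the general idea.

# Hypothetical rank-order vote aggregation using a Borda count: each ballot gives
# an item (n - 1 - position) points, where position 0 is the most preferred item.
from collections import defaultdict

def borda(ballots):
    """ballots: list of rankings, each a list of items from most to least preferred."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ballots = [
    ["option A", "option B", "option C"],
    ["option B", "option A", "option C"],
    ["option A", "option C", "option B"],
]
print(borda(ballots))   # -> [('option A', 5), ('option B', 3), ('option C', 1)]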
Advantages of GDSS / EMS:
- Any-place (online) capability, which avoids travel time and cost
- Increased participant availability (any place, any time)
- Increased interactivity and participation through parallelization
- Increased openness and less personal prejudice through anonymity
- More sophisticated analysis through voting and analysis in real time
- Less effort in preparation through the use of meeting templates
- Repeatable meeting and workshop processes through meeting templates
- Automatic, comprehensive, neutral documentation
Disadvantages of GDSS / EMS (now largely mitigated):
- The formerly high infrastructure requirements have been reduced to Internet access and a web browser
- The formerly high demands on facilitators have been greatly reduced in systems that are designed to support everyday use by the non-expert user
- The traditional cultural barriers to the use of technology in meetings have been overcome through the general familiarization of users with telephone and web conferences

ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. Textbooks commonly define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines." The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science. AI research is highly technical and specialized, and deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.
Turing Test: In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results. The broad classes of outcome for an AI test are:
- Optimal: it is not possible to perform better
- Strong super-human: performs better than all humans
- Super-human: performs better than most humans
- Sub-human: performs worse than most humans
Attributes / Traits of Intelligent Behavior (Artificial Intelligence):
Thinking, reasoning and problem solving: Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
Knowledge representation: Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains.
Planning: Intelligent agents must be able to set goals and achieve them.
They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices. Natural language processing: Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet.
Perception: Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition.
Social intelligence: Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine needs to display emotions.
Creativity: A sub-field of AI addresses creativity both theoretically, from a philosophical and psychological perspective, and practically, through specific implementations of systems that generate outputs that can be considered creative.
Applications of Artificial Intelligence:
Game playing: You can buy machines that can play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
Speech recognition: In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
Understanding natural language / Learning systems: Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
Computer vision / Virtual reality: The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Expert systems: A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI.
Heuristic classification: One of the most feasible kinds of expert system, given the present knowledge of AI, is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment). A minimal sketch of such a classifier appears below.
Robots: Robots have become common in many industries.
They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive which may lead to mistakes or accidents due to a lapse in concentration and other jobs which humans may find degrading. General Motors Corporation uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the leader in using and producing robots in the world. Medicine: A medical clinic can use artificial intelligence systems to organize bed schedules, make a staff rotation, and provide medical information. Artificial neural networks are used for medical diagnosis (such as in Concept Processing technology in EMR software), functioning as machine differential diagnosis.
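Returning to the heuristic classification example above (screening a proposed credit card purchase), the following Python sketch places a transaction into one of a fixed set of categories using a few invented rules and thresholds; a production system would draw on many more information sources, as the text notes.

# Hypothetical heuristic classification of a proposed credit card purchase.
# Rules, thresholds and categories are invented for illustration only.
def classify_purchase(amount, owner_good_payment_history, prior_fraud_at_merchant):
    if prior_fraud_at_merchant:
        return "refer to analyst"          # prior fraud at this establishment
    if amount <= 100:
        return "accept"                    # small purchases accepted outright
    if owner_good_payment_history and amount <= 2000:
        return "accept"                    # trusted card holder, moderate amount
    return "decline"

print(classify_purchase(750, owner_good_payment_history=True, prior_fraud_at_merchant=False))   # accept
print(classify_purchase(750, owner_good_payment_history=False, prior_fraud_at_merchant=False))  # decline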
UNIT 5: IMPLEMENTATION, EVALUATION AND MAINTENANCE OF MIS


IMPLEMENTATION AND MAINTENANCE
Conversion: Conversion is the process of changing from the old system to the new system. Four main conversion strategies can be employed: the parallel strategy, the direct cutover strategy, the pilot strategy and the phased strategy.
In a parallel strategy, both the old system and its potential replacement are run together for a time until everyone is assured that the new one functions correctly. This is the safest conversion approach because, in the event of errors or processing disruptions, the old system can still be used as a backup. But this approach is very expensive, and additional staff or resources may be required to run the extra system.
The direct cutover strategy replaces the old system entirely with the new system on an appointed day. At first glance, this strategy seems less costly than the parallel conversion strategy. But it is a very risky approach that can potentially be more costly than parallel activities if serious problems with the new system are found: there is no other system to fall back on, and dislocations, disruptions and the cost of corrections can be enormous.
The pilot study strategy introduces the new system to only a limited area of the organization, such as a single department or operating unit. When this version is complete and working smoothly, it is installed throughout the rest of the organization, either simultaneously or in stages.
The phased approach strategy introduces the new system in stages, either by functions or by organizational units. If, for example, the system is introduced by functions, a new payroll system might begin with hourly workers who are paid weekly, followed six months later by adding salaried employees (who are paid monthly) to the system. If the system is introduced by organizational units, corporate headquarters might be converted first, followed by outlying operating units four months later.
Moving from an old system to a new system requires that end users be trained to use the new system. Detailed documentation showing how the system works from both a technical and an end user standpoint is finalized during conversion for use in training and everyday operations. Lack of proper training and documentation contributes to system failure, so this portion of the systems development process is very important.
PRODUCTION AND MAINTENANCE
After the new system is installed and conversion is complete, the system is said to be in production. During this stage the system will be reviewed by both users and technical specialists to determine how well it has met its original objectives and to decide whether any revisions or modifications are in order. In some instances, a formal post-implementation audit document will be prepared. After the system has been fine-tuned, it will need to be maintained while it is in production to correct errors, meet new requirements or improve processing efficiency.
Once a system is fully implemented and is being used in business operations, the maintenance function begins. Systems maintenance is the monitoring, evaluating and modifying of operational systems to make desirable or necessary improvements. For example, the implementation of a new system usually results in the phenomenon known as the learning curve. Personnel who operate and use the system will make mistakes simply because they are not yet familiar with it. Though such errors usually diminish as experience is gained with a new system, they do point out areas where a system may be improved.
Maintenance is also necessary for other failures and problems that arise during the operation of a system. End users and information systems personnel then perform a troubleshooting function to determine the causes of and solutions to such problems.

Maintenance also includes making modifications to an established system due to changes in the business or its environment; for example, new e-business and e-commerce initiatives may require major changes to current business systems.

SYSTEM MODELING FOR MIS


Models: A model is a representation of some important aspect of the system being built.
Modeling: The process of creating models.
The purpose of models:
- Models can be used to document the systems analysis and design results
- Learning from the modeling process
- Reducing complexity by abstraction and helping focus the analyst's efforts on a few aspects of a large, complex information system at a time
- Models are used as memory aids for analysts
- Models are used as communication media between team members and stakeholders
- Requirements models are used as documentation for future development teams when they maintain or enhance the system
Types of Models:
- Mathematical model: a series of formulas that describe technical aspects of a system
- Descriptive model: narrative memos, reports, or lists that describe some aspect of a system
- Graphical model: diagrams and schematic representations of some aspect of a system
Mathematical Model: A mathematical model uses mathematical language to describe a system. The process of developing a mathematical model is termed mathematical modeling (also simply modeling). Mathematical models are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines, but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, computer scientists, and economists use mathematical models most extensively. Eykhoff (1974) defined a mathematical model as 'a representation of the essential aspects of an existing system (or a system to be constructed) which presents knowledge of that system in usable form'. Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures.
Classifying mathematical models: Many mathematical models can be classified in some of the following ways:
1. Linear vs. nonlinear: Mathematical models are usually composed of variables, which are abstractions of quantities of interest in the described systems, and operators that act on these variables, which can be algebraic operators, functions, differential operators, etc. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear; a model is considered to be nonlinear otherwise. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones.
2. Deterministic vs. probabilistic (stochastic): A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables. Therefore, deterministic models perform the same way for a given set of initial conditions. Conversely, in a stochastic model, randomness is present, and variable states are not described by unique values, but rather by probability distributions.
3. Static vs. dynamic: A static model does not account for the element of time, while a dynamic model does. Dynamic models typically are represented with difference equations or differential equations.
4. Lumped vs. distributed parameters: If the model is heterogeneous (varying state within the system), the parameters are distributed. If the model is homogeneous (consistent state throughout the entire system), then the parameters are lumped. Distributed parameters are typically represented with partial differential equations.
Descriptive Model: Descriptive modeling is one of the two main branches of data modeling (the other being predictive modeling). It is also called "exploratory analysis". Its purpose is to extract compact and easily understood information from large, sometimes gigantic data tables. Contrary to predictive modeling, it makes no distinction between variables, which are here all placed on the same footing. Descriptive modeling is not as neatly structured as predictive modeling; it encompasses a large and disparate set of goals and techniques. The study may focus on one variable only, and then require only ordinary statistics. Calculating the mean and variance of a variable, or drawing its histogram with various bin widths, are simple descriptive models.
Graphical Model: A graphical model is a probabilistic model for which a graph denotes the conditional independence structure between random variables. Graphical models are commonly used in probability theory, statistics (particularly Bayesian statistics) and machine learning. A factor graph is an undirected bipartite graph connecting variables and factors; each factor represents a probability distribution over the variables it is connected to, and graphs are converted into factor graph form to perform belief propagation. A clique tree or junction tree is a tree of cliques, used in the junction tree algorithm. A chain graph is a graph which may have both directed and undirected edges, but without any directed cycles (i.e. if we start at any vertex and move along the graph respecting the directions of any arrows, we cannot return to the vertex we started from if we have passed an arrow). Both directed acyclic graphs and undirected graphs are special cases of chain graphs, which can therefore provide a way of unifying and generalizing Bayesian and Markov networks.
SYSTEMS ENGINEERING
Systems engineering is an interdisciplinary field of engineering that focuses on how complex engineering projects should be designed and managed. It is heavily related to industrial engineering and operations research, and thus all these terms can get blended or combined depending on the context. Issues such as logistics, the coordination of different teams, and automatic control of machinery become more difficult when dealing with large, complex projects. Systems engineering deals with work-processes and tools to handle such projects, and it overlaps with both technical and human-centered disciplines such as control engineering and project management.
The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated the Department of Defense, NASA, and other industries to apply the discipline. The evolution of systems engineering, which continues to this day, comprises the development and identification of new methods and modeling techniques. These methods aid in better comprehension of
engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including USL (Universal Systems Language), UML (Unified Modeling Language), QFD (Quality Function Deployment), and IDEF (Integration Definition).
Need for systems engineering: Systems engineering signifies both an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is simply to formalize the approach and, in doing so, to identify new methods and research opportunities, similar to the way this occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor. Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem: the system lifecycle.
Scope of systems engineering:

Baseline: A baseline is a fixed point of reference against which subsequent measurement, construction or change is assessed.
Lifecycle planning: Making plans, projections, and decisions based upon the life cycle theory. This usually helps with resource allocation when considering how much to allot to various projects. The steps that comprise this integration lifecycle are: access, discovery, cleansing, integration, delivery, development and management, auditing, monitoring and reporting.
Integrated teaming: The purpose of integrated teaming is to form and sustain an integrated team for the development of work products. Integrated team members:
- Provide the needed skills and expertise to accomplish the team's tasks
- Provide the advocacy and representation necessary to address all essential phases of the product's life cycle
- Collaborate internally and externally with other teams and relevant stakeholders as appropriate
- Share a common understanding of the team's tasks and objectives
- Conduct themselves in accordance with established operating principles and ground rules
Systems engineering management: Systems engineering management describes the plan, activities, processes and tools used in the systems engineering process.
Systems engineering process: A systems engineering process is a process for applying systems engineering techniques to the development of all kinds of systems. Systems engineering processes are related to the stages in a system life cycle. The systems engineering process usually begins at an early stage of the system life cycle and at the very beginning of a project.
Development phasing: The most important task in the development phase is to build the application. It is common wisdom that it is easier to build an application when a clear set of expectations and a properly defined and tested product architecture exist. The work in this phase should be much more straightforward as a result of the work done in the preceding phases.
System Engineering Process:

SYSTEM ENGINEERING METHODOLOGY FOR MIS PROBLEM SOLVING
Trade-off studies: Trade-off analysis is a systematic approach to balancing the trade-offs between time, cost and performance. Information is required from costing, scheduling, project review reports, and the original plan. This information can be used to support the following generally accepted six-step approach to trade-off analysis:
- Recognizing and understanding the basis for project conflicts
- Reviewing the project objectives
- Analyzing the project environment and status
- Identifying the alternative courses of action
- Analyzing and selecting the best alternative
- Revising the project plan
Cost-effectiveness analysis: Cost-effectiveness analysis (CEA) is a form of economic analysis that compares the relative costs and outcomes (effects) of two or more courses of action. Cost-effectiveness analysis is distinct from cost-benefit analysis, which assigns a monetary value to the measure of effect.
Risk management: Risk management is the identification, assessment, and prioritization of risks, followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events or to maximize the realization of opportunities.
Configuration management: Configuration management is the process of managing change in hardware, software, firmware, documentation, measurements, etc. As change requires an initial state and a next state, the marking of significant states within a series of several changes becomes important. The identification of significant states within the revision history of a configuration item is the central purpose of baseline identification.
Interface management: Interface management is the systematic control of all communications that support a process operation. Given the significance of human involvement in most operations, it is important that interactions between people be managed and carefully coordinated to avoid incidents resulting from misunderstandings and lack of information.
Data management: This comprises all the disciplines related to managing data as a valuable resource. Data management is the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.
Performance measurement: The purpose of performance measurement is to help organizations understand how decision-making processes or practices led to success or failure in the past and how that understanding can lead to future improvements. Key components of an effective performance measurement system include:
- Clearly defined, actionable, and measurable goals that cascade from the organizational mission to management and program levels
- Cascading performance measures that can be used to measure how well mission, management, and program goals are being met
- Established baselines from which progress toward the attainment of goals can be measured
- Accurate, repeatable, and verifiable data
- Feedback systems to support continuous improvement of an organization's processes, practices, and results
Performance measurement tools: Total productive maintenance (TPM), technical reviews.
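As a small, hypothetical illustration of the cost-effectiveness analysis described above, the Python sketch below compares two alternatives by cost per unit of effect and computes the incremental ratio between them. All figures and option names are invented; CEA in practice involves far more careful definition of costs and effects.

# Hypothetical cost-effectiveness comparison of two courses of action.
def cost_effectiveness(cost, effect):
    """Cost per unit of effect for a single alternative."""
    return cost / effect

def incremental_ratio(alt_a, alt_b):
    """Extra cost of B over A per extra unit of effect (incremental cost-effectiveness ratio)."""
    d_cost = alt_b["cost"] - alt_a["cost"]
    d_effect = alt_b["effect"] - alt_a["effect"]
    return d_cost / d_effect

option_a = {"name": "upgrade existing system", "cost": 120_000, "effect": 300}  # e.g. staff hours saved per year
option_b = {"name": "replace system",          "cost": 200_000, "effect": 450}

for opt in (option_a, option_b):
    print(opt["name"], "-> cost per unit of effect:",
          round(cost_effectiveness(opt["cost"], opt["effect"]), 2))
print("Incremental cost per extra unit of effect:",
      round(incremental_ratio(option_a, option_b), 2))

Note that, unlike cost-benefit analysis, nothing here converts the effect into money; the comparison stays in cost-per-effect terms, which is the distinction the text draws.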

What are the limitations of MIS? What are the factors which lead to the success and failure of MIS in an organization?
Factors contributing to success: If an MIS is to be a success, it should have the features listed below.
The MIS is integrated into the managerial functions. It sets clear objectives to ensure that the MIS focuses on the major issues of the business.
An appropriate information processing technology, able to meet the data processing and analysis needs of the users of the MIS, is selected.
The MIS is oriented, defined and designed in terms of the users.
The MIS is kept under continuous surveillance, so that its open system design is modified according to the changing information needs.
The MIS focuses on results and goals, and highlights the factors and reasons for non-achievement.
The MIS is not allowed to end up as an information generation mill; noise is kept out of the information and the communication system.
The MIS recognizes that a manager is a human being and, therefore, the system must consider all the human behavioural factors in the process of management.
The MIS recognizes that the different information needs for different objectives must be met. Generating information in isolation from the different objectives leads to too much information and to its non-use.
The MIS is easy to operate; its design therefore has the features that make up a user-friendly design.
The MIS recognizes that information needs become obsolete and new needs emerge. The MIS design therefore has a basic capability to meet new information needs quickly.
The MIS concentrates on developing information support for the manager's critical success factors. It concentrates on the mission-critical applications serving the needs of top management.
Factors contributing to failure: Many times an MIS is a failure. The common factors responsible for this are listed below.
The MIS is conceived as a data processing system and not as an information processing system.
The MIS does not provide the information actually needed by the managers; it tends to provide the information the function generally calls for. The MIS then becomes an impersonal system.
Underestimating the complexity of the business systems and not recognizing it in the MIS design leads to problems in successful implementation.
Adequate attention is not given to the quality control aspects of the inputs, the process and the outputs, leading to insufficient checks and controls in the MIS.
The MIS is developed without streamlining the transaction processing systems in the organization.
There is a lack of training and of appreciation that the users of the information and the generators of the data are different, and that both have to play an important, responsible role in the MIS.
The MIS does not meet certain critical and key needs of its users, such as response to queries on the database, the ability to get processing done in a particular manner, a user-friendly system, and independence from the system personnel.
There is a belief that the computerized MIS can solve all the management problems of planning and control of the business.
Lack of administrative discipline in following the standardized systems and procedures, wrong coding, and deviation from the system specifications result in incomplete and incorrect information.
The MIS does not give perfect information to all the users in the organization.

PITFALLS IN MIS DEVELOPMENT
Pitfalls in the development stage:
1. No clear definition of mission and purpose
2. No objectives for the company
3. Lack of management participation
4. MIS organization
5. Over-reliance on the consultant or manufacturer
6. Communication gap
Pitfalls in the planning stage:
1. MIS response to the business plan
2. Failure in designing the master plan
3. Failure in setting project and system objectives
4. Facing reality constraints
5. User resistance - Threat to ego - Economic threat - Threat of insecurity - Loss of autonomy and control - Changed interpersonal relations
Pitfalls in implementation:
1. Hardware perspective - Incompatibility
2. Software perspective - Incompatibility
3. Need for extensive testing
4. Controlling the MIS project
Pitfalls in designing:
1. Confusion over considering alternative designs
2. Ignoring the importance of the user interface
3. Over-automation
4. Worshipping the computer
5. Lack of proper documentation

Unit-6
Testing security: The primary reason for testing the security of an operational system is to identify potential vulnerabilities and subsequently repair them. It is imperative that organizations routinely test systems for vulnerabilities and misconfigurations to reduce the likelihood of system compromise. A small number of flaws in software programs are responsible for the vast majority of successful Internet attacks.
Testing techniques for software quality assurance: Testing code once it has been declared complete is critical to achieving the quality assured to the customer. This testing is expected to discover and, more importantly, to correct errors before the software is demonstrated and delivered. The goal is therefore to discover and correct errors so that the customer does not encounter them later. This is achieved through test cases designed specifically to detect errors in the code. The test cases are designed and written using testing techniques.
Testing principles:
1. Tests should be traceable to the customer requirements stipulated in the software requirement specification.
2. Test cases should be designed as soon as the software requirement specifications are finalized.
3. Test cases should be planned for lower-level components first and then taken to higher levels.
4. Test cases should address at least all critical key functions, features, facilities and performance needs stipulated in the software requirement specification.
5. Test cases should not leave behind any fatal errors on delivery to the customer.
Qualities of a good test:
1. There is a high probability of detecting an error.
2. It is not redundant. Each test should have a unique purpose, which is not served by any other test.
3. It is simple to execute and is independent of other tests when it comes to execution.
Testing methods: A successful project gives rise to customer satisfaction and builds a strong partnership leading to repeat business. The quality definitions are built around customer expectations from the project. Quality assurance is a system or process that ensures delivery of quality to the customer. The customer's expectations are delivered only when the system that makes delivery possible has certain quality features. The testing strategy is built on the following test platforms:
Test | That which is being tested | Focus
Unit test | Process | Code
Module test | Function | Design
Application test | Business operations | Design
Integration test | Multiple business operations | Interface
Test specification plan (TSP):
1. Scope of testing
2. Test plan
3. Test specification
4. Test case data and test environment
5. Expected results
6. Actual results
7. Errors found and reported
8. Corrective action taken
9. Retest from steps 4 to 8
White box testing for procedures: White box testing develops test cases that guarantee that
1. all independent paths within a module are covered,
2. logical decisions are tested on both true and false criteria,
3. all loops (iterative processes) are covered, and
4. data structures are validated.
Black box testing for functional requirements: Black box testing focuses on testing the functional requirements of the system. It is not a substitute for white box testing. Black box testing identifies the following kinds of errors: 1. Incorrect or missing functions 2. Missing interfaces 3. Errors in the data model 4. Errors in access to external data sources. (A short sketch contrasting white box and black box test cases appears below.)
GUI testing: A graphical user interface (GUI) is a client-end component in software that is reusable and, therefore, tends to become generic. Test cases need to be developed on the following points: windows, pull-down menus, data entry and object operations.
Client-server testing: Client/server architecture demands that a number of factors be tested for an effective operational system. The testing strategy for client/server is executed in the following manner: 1. Test the client application in isolation mode, i.e., disregarding the operations of the server and the network. 2. Test the client application in conjunction with server operation. 3. Test the complete client/server architecture.
Quality assurance in security: Systems running on a web platform need protection from unauthorized access so that confidential, private information is not lost to undesirable elements in the public. Hence, the quality of the security system is extremely important. The security issue the system should tackle is the protection of information from unauthorized users. Measures: secure the host machine, restrict access to the web server, use document encryption and digital signatures, and implement a firewall.
Documentation testing: Errors in documentation can give rise to fatal results. Documentation is tested in the following order: technical review, then actual live operational review.
Security testing techniques: There are several different types of security testing. Some testing techniques are predominantly manual, requiring an individual to initiate and conduct the test. Other tests are highly automated and require less human involvement.
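Referring back to the white box and black box techniques described earlier in this section, here is a minimal sketch in Python. The function and its tests are invented for illustration: the white box cases are chosen so that both branches of the decision are exercised, while the black box case checks only the stated input/output behaviour.

```python
# Hypothetical function under test: applies a discount for large orders.
def order_total(quantity: int, unit_price: float) -> float:
    if quantity >= 100:          # decision point targeted by the white box tests
        return quantity * unit_price * 0.90
    return quantity * unit_price

# White box test cases: derived from the code structure, so that the
# true branch and the false branch of the decision are both covered.
assert order_total(100, 10.0) == 900.0   # true branch (boundary value)
assert order_total(99, 10.0) == 990.0    # false branch

# Black box test case: derived only from the assumed functional requirement
# ("orders of 100 units or more get a 10% discount"), without reading the code.
assert abs(order_total(200, 5.0) - 900.0) < 1e-9

print("All illustrative test cases passed.")
```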
Network scanning: Network scanning involves using a port scanner to identify all hosts potentially connected to an organization's network, the network services operating on those hosts, such as the file transfer protocol (FTP) and hypertext transfer protocol (HTTP), and the specific application running the identified service, such as WU-FTPD, Internet Information Server (IIS) and Apache for the HTTP service. The result of the scan is a comprehensive list of all active hosts and services, printers, switches, and routers operating in the address space scanned by the port-scanning tool.
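As an illustration of the port-scanning idea described above (not of any particular scanning product), here is a minimal TCP connect-scan sketch in Python using only the standard socket module; the target host and port list are placeholders, and such scans should only be run against hosts one is authorized to test.

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports on which a TCP connection could be opened."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open).
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder target for demonstration only.
    print(scan_ports("127.0.0.1", [21, 22, 80, 443, 8080]))
```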
Vulnerability scanning: Vulnerability scanners attempt to identify vulnerabilities in the hosts scanned. Vulnerability scanners can also help identify out-of-date software versions, applicable patches or system upgrades, and validate compliance with, or deviations from, the organization's security policy. To accomplish this, vulnerability scanners identify operating systems and major software applications running on hosts and match them with known exposures.
Password cracking: Password cracking programs can be used to identify weak passwords. Password cracking verifies that users are employing sufficiently strong passwords. Passwords are generally stored and transmitted in an encrypted form called a hash. When a user logs on to a computer or system and enters a password, a hash is generated and compared to a stored hash. If the entered and the stored hashes match, the user is authenticated. (A minimal sketch of this hash comparison is given at the end of this section.)
Log review: Various system logs can be used to identify deviations from the organization's security policy, including firewall logs, IDS logs, server logs, and any other logs that collect audit data on systems and networks. While not traditionally considered a testing activity, log review and analysis can provide a dynamic picture of ongoing system activities that can be compared with the intent and content of the security policy.
Virus detection: All organizations are at risk of contracting computer viruses, Trojans and worms if they are connected to the Internet, use removable media (e.g., floppy disks and CD-ROMs), or use shareware/freeware software. The impact of a virus, Trojan, or worm can be as harmless as a pop-up message on a computer screen, or as destructive as deleting all the files on a hard drive. With any malicious code, there is also the risk of exposing or destroying sensitive or confidential information. There are two primary types of anti-virus programs available: those that are installed on the network infrastructure and those that are installed on end-user machines.
Wireless LAN testing: Attackers and other malicious parties now regularly drive around office parks and neighborhoods with laptops equipped with wireless network cards, attempting to connect to open access points (this practice is called war driving). To operate a relatively secure wireless network, an organization needs to create a wireless policy and broadly disseminate that policy to all employees and contractors. Given the low cost and ease of use of wireless equipment, an organization needs to periodically test its networks for unauthorized and/or misconfigured wireless LANs, as well as periodically scan its sites for incoming signals from neighboring wireless LANs.
Penetration testing: Penetration testing is security testing in which evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation. The purpose of penetration testing is to identify methods of gaining access to a system by using common tools and techniques employed by attackers.
Coding techniques: Programming, or as it is commonly called, coding, means translating the detailed design of a software product, developed during the design phase, into a sequence of statements that can be executed by a computer. Several evaluation criteria are used in judging the quality of a program. These criteria include memory utilization, readability, execution time and algorithmic complexity. If the same software design information were given to several different people, each of whom was given different evaluation criteria for coding the design, the result would be drastically different programs.
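As a minimal sketch of the hash comparison described under password cracking above, the following uses Python's standard hashlib and secrets modules; the user record and password are invented, and a real system would use a dedicated slow password-hashing scheme rather than a single SHA-256 pass.

```python
import hashlib
import secrets

def hash_password(password: str, salt: bytes) -> bytes:
    """Hash a password with a per-user salt (illustrative single SHA-256 pass)."""
    return hashlib.sha256(salt + password.encode("utf-8")).digest()

# Stored user record created at registration time (hypothetical user).
salt = secrets.token_bytes(16)
stored_hash = hash_password("S3cret!pass", salt)

def login(entered_password: str) -> bool:
    """Re-hash the entered password and compare it with the stored hash."""
    entered_hash = hash_password(entered_password, salt)
    # compare_digest avoids timing side channels in the comparison.
    return secrets.compare_digest(entered_hash, stored_hash)

print(login("S3cret!pass"))   # True  -> user authenticated
print(login("guess123"))      # False -> authentication fails
```

A password-cracking tool works against exactly this mechanism: it hashes large numbers of candidate passwords and looks for matches with the stored hashes, which is why weak passwords are found quickly.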

Programming languages: Over 250 programming languages are in use today. A commercial programming language is defined as a recognized language that is available for purchase and is supported by standard compilers, input/output access, utilities, documentation, a library, etc. Programming languages can be categorized in many ways.
Machine language: The set of instructions, written in binary or in decimal, which can be directly understood by the CPU of the computer without the help of a translation program is known as machine language. Machine language is the most fundamental language; it uses strings of 1s and 0s, and the internal circuitry of the computer is wired in such a manner that it immediately understands the code and converts it into the electrical signals needed to run the computer.
Procedural languages: A procedural programming language provides the means for a programmer to define precisely each step in the performance of a task; i.e., the programmer knows what is to be accomplished by a program and provides, through the language, step-by-step instructions on how the task is to be done. Using a procedural language, the programmer specifies language statements to perform a sequence of algorithmic steps. Ex: C, C++, COBOL, FORTRAN, Pascal and BASIC.
Imperative languages: In an imperative language, expressions are computed and results are stored in variables. Language control structures direct the computer to execute an exact sequence of statements. Most procedural or high-level languages are imperative languages. Ex: C, C++, COBOL, FORTRAN, Pascal and BASIC.
Non-procedural languages: In a non-procedural language the programmer tells the computer, through the language, what must be done, but leaves the details of how to perform the task to the language itself. Ex: SQL, dBase.
Functional languages: In functional languages every entity is a function. A function in a functional language has the same meaning as a function in mathematics: it has a name and one or more arguments. Ex: Lisp is the oldest and most widely used functional programming language.
Declarative / logic languages: As the name suggests, these languages are designed around mathematical logic, predicate calculus and lambda calculus. The objective of the design of logic programming languages is to use mathematical propositions to define objects and operations on objects. Languages used for logic programming are usually called declarative languages. Ex: Prolog is a widely used logic programming language.
Object oriented languages: Three main characteristics separate a procedural language from an object oriented language. Object oriented languages define and use classes, inheritance, and polymorphism. Ex: Smalltalk, C++, Ada 95.
Visual languages: The rapid development of the graphical user interface (GUI) environment and its acceptance in the user community have encouraged the introduction of a series of languages called visual languages. Examples of such languages are Visual Basic, Visual C++ and Visual FoxPro.
Fourth generation languages: During the 1970s and early 1980s many languages were developed whose main goal was to be non-procedural and very easy to learn and use. The original goal was to give users the capability to define, design and develop application software that normally would have required the services of a system analyst / designer or a programmer. Ex: ADF, Application Factory, Intellect, Natural, SQL, Focus and Use It.
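To make the difference between the imperative and functional styles described above concrete, here is a minimal sketch; the document does not prescribe a language, so Python is used purely for illustration, computing the same sum in both styles.

```python
from functools import reduce

numbers = [3, 7, 2, 9]

# Imperative style: the program states, step by step, HOW the result is built,
# storing intermediate results in a mutable variable.
total = 0
for n in numbers:
    total += n

# Functional style: the computation is expressed as the application of
# functions (reduce plus an anonymous addition function) with no mutable state.
total_functional = reduce(lambda acc, n: acc + n, numbers, 0)

assert total == total_functional == 21
```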

Specific language features:
Preprocessor: Some programming languages provide some form of preprocessor. Tasks performed by a preprocessor include indentation and line numbering; some preprocessors can, with a single keystroke or a combination of two keys, type specific keywords of the language, and some attempt to add internal documentation to the program.
Naming constraints: The lexical rules applied in naming an entity in a program are referred to as identifier definition or naming. Most languages allow the use of letters and digits in naming a program entity. Some languages limit the size of a name (the number of letters and digits) that can be used in the program.
Named constants: Some programming languages allow the programmer to define named constants in the program. Some even allow the programmer to define named constant expressions. There is no difference between a constant name and a variable name except that a constant name may not be redefined during execution.
User defined data types: This added feature allows programmers to define data types other than the basic types provided by the language itself. The use of this feature can affect the readability of a program and its structure.
Sub programs: This feature allows a programmer to split a program into smaller programs called functions, procedures, or other generic sub program names.
Scoping: This term refers to how data can be moved or shared between the main program and its subprograms as well as among sub programs. In a program we may have global definitions, local definitions and parameters or arguments that are defined in a subprogram declaration statement. Each language has its own method of passing information among subprograms and creating data communication paths between the main program and its subprograms.
Operations: Most programming languages provide basic arithmetic, relational and logical operations. These operations are and, or, addition, subtraction, multiplication, division, less than, greater than, equal, not equal, less than or equal and greater than or equal.
Data structure support: Most programming languages provide for arrays. Some languages provide other data structures such as records, sets, stacks and queues, along with operations on them.
Data abstraction and information hiding: Recently developed programming languages include facilities for data abstraction within their development environment.
File handling utilities: Most programming languages provide a window to the operating system for input and output handling. The use of a text file is common in almost any programming.
Access to other system utilities: Every system environment has a library of prewritten and compiled programs that perform certain tasks. These programs are usually called utility programs. Ex: text editors, sorters, searchers, graphic interfaces, file handlers and data communication facilitators.
Debuggers: A debugger is a software support tool that helps the programmer walk through the program at his or her own speed while the program is being executed. Debuggers are powerful tools for performing dynamic desk checking of programs during execution.
Detection of errors / debugging: Bug is the name given to an error that is found in a software product, and debugging is the name of the process that locates and removes a bug from a software product. The error may arise in the software or in the hardware. Several kinds of errors are associated with the programming process. These errors are referred to as follows.
Syntax errors: As the name implies, syntax errors are caused by improper use of the programming language syntax. Syntax errors, sometimes called typos, are usually identified by the compiler and are easy to remove.
Runtime errors: When all syntax errors are removed, error-free object code will be produced by the compiler. While the object code is being executed, it may reach a state from which it cannot continue execution. This is called a runtime error. Sources of runtime errors include the following.
Improper operation: Division by zero, arithmetic operations on character type data and certain binary operations on mismatched operands are examples of runtime errors caused by improper operation.

Access violations: Writing to a file that is opened for input, accessing an invalid file, referencing an invalid memory location, having improper clearance to access certain data and opening a file with inconsistent parameters are examples of runtime errors caused by access violations.
Logic errors: The program executes with no runtime error, but the output it produces does not satisfy the requirement specification. In such cases it is said that the program has a logic error.
Improper linkage: Calling a non-existent or unlinked module, procedure, function, or library; unauthorized use of system utilities; and faulty database linkages are examples of runtime errors caused by improper linkage.
External errors: Any error whose cause cannot be identified anywhere within the software system is called an external error. Examples of external errors are faults in the hardware that executes the software and changes in the operating environment.
Debugging process: A debugging process is initiated when a failure occurs in the execution of an operational timeline or a test case. Debugging is a term applied to a process whereby failure symptoms are examined and the fault that caused the failure is uncovered. If there were no faults, no debugging would be necessary. The main objective of testing is to find legitimate ways to make the software fail; the primary objective of debugging, on the other hand, is to locate and remove an identified and localized fault in the program module.
[Figure: debugging flow - Test specification, Test, Locate, Design error repair, Repair error, Retest]

The debugging process consists of six steps:
1. Information gathering: The software engineer investigating a failure needs as much evidence as possible: the test log, test anomaly reports and test summary reports. Feedback from debugging efforts can improve the quality of the test logs and anomaly reports.
2. Fault isolation: The next step is to isolate the failure to a specific statement, a block of statements, a control structure, an implementation logic, etc. To isolate the failure, the software engineer can resort to different approaches, such as the binary partition approach and the question-and-answer approach.
3. Fault confirmation: The fault isolation process, if successfully done, identifies a function, a procedure, a control structure, a block of statements, or a single statement. Using specialized testing, desk checking, reviews, walkthroughs, and other methods, the cause of failure must be fully verified.
4. Documentation: Every correction, modification and addition to the module must be carefully documented. New test cases must be documented, and any abnormal behavior of the program under the new test cases must be recorded. A structured documentation of the debugging process can help the overall testing process a great deal.
5. Fixing the fault: When the presence of a fault in a program module, procedure, function or program segment is proved, and the exact nature of the fault and its exact location have been identified and confirmed, remedial action may be initiated. When a correction is designed, its effect on the entire software product must be carefully evaluated. This is best accomplished by rerunning the acceptance tests (regression tests).
6. New testing after each correction: After each modification, change, or correction in a module, it is necessary to re-examine the entire software component.
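Step 2 above mentions the binary partition approach to fault isolation. The following is a minimal sketch of that idea in Python; the failing inputs and the failure predicate are invented for illustration, and the sketch assumes that once an input starts failing, all later inputs also fail. The range of candidate inputs is repeatedly halved until the first input that triggers the failure is found.

```python
def first_failing(inputs, fails) -> int:
    """Binary partition: return the index of the first input for which
    fails(input) is True, assuming all later inputs also fail."""
    lo, hi = 0, len(inputs) - 1
    answer = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if fails(inputs[mid]):
            answer = mid      # failure observed: look for an earlier one
            hi = mid - 1
        else:
            lo = mid + 1      # no failure yet: the fault lies further right
    return answer

# Hypothetical failure predicate: processing breaks once values exceed 1000.
inputs = [10, 120, 450, 800, 1001, 1500, 2200]
print(first_failing(inputs, lambda x: x > 1000))   # -> 4 (value 1001)
```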

Cost-benefit analysis
A cost-benefit analysis can be viewed as another form of trade-off in which financial considerations are balanced against potential benefits for one alternative versus another. Cost-benefit results can also be decision criteria. A cost-benefit analysis can also evaluate the financial merit of making a capital investment in a system or software product. The cost-benefit analysis attempts to quantify the cost of acquiring or upgrading an information system and maintaining it in operational condition for its estimated lifetime. These costs are compared with the quantified benefits derived from operating the new system.
Benefits are categorized as tangible and intangible. Tangible benefits are those that are easily identified and quantified; they can be found in improved productivity, increased sales, decreased cost per transaction, a reduced number of required employees, etc. Intangible benefits are not quantifiable.
The comparison criteria for cost-benefit analysis are: 1. Payback period method 2. Return on investment (ROI) 3. Net present value (NPV) method.
Costs: The following lists describe the acquisition and operating costs associated with an information system.
Acquisition costs include the following direct costs:
Computing resources: - Hardware procured or developed - Software procured or developed
Brick and mortar: - New buildings - Real estate acquisition - Modified buildings
Installation: - Training - Testing - Conversion
Operation and maintenance costs include the following direct costs:
Staffing: - Operators - Maintainers - Security
Supplies: - Paper - Disks and tapes
Maintenance: - Hardware - Software - Air conditioning - Ventilation
Utilities: - Power - Communication
Facilities: - Rent - Building maintenance - Furniture
Insurance
Security
Benefits: The tangible benefits associated with a new or improved information system usually fall into one of the following categories:
Reduced operating personnel: - Operators - Maintainers - Trainers
Improved performance: - Fewer errors - Reduced time per transaction
Increased functional capability: - Additional functions / services - Improved accuracy
Increased system capacity: - More records - More fields - More speed
Improved response time
Intangible benefits: - Goodwill - Customer satisfaction
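The three comparison criteria named above (payback period, ROI and NPV) are standard financial formulas. The following minimal sketch in Python evaluates them for an invented investment of 100,000 with five years of estimated net benefits and a 10% discount rate; all the figures are illustrative only.

```python
def payback_period(investment: float, annual_benefits: list[float]) -> int | None:
    """Years until cumulative benefits first cover the investment (None if never)."""
    cumulative = 0.0
    for year, benefit in enumerate(annual_benefits, start=1):
        cumulative += benefit
        if cumulative >= investment:
            return year
    return None

def roi(investment: float, annual_benefits: list[float]) -> float:
    """Simple return on investment: total net gain divided by the investment."""
    return (sum(annual_benefits) - investment) / investment

def npv(investment: float, annual_benefits: list[float], rate: float) -> float:
    """Net present value: discounted benefits minus the up-front investment."""
    discounted = sum(b / (1 + rate) ** t for t, b in enumerate(annual_benefits, start=1))
    return discounted - investment

benefits = [30_000, 35_000, 35_000, 40_000, 40_000]   # illustrative figures
print(payback_period(100_000, benefits))    # 3  -> investment recovered in year 3
print(round(roi(100_000, benefits), 2))     # 0.8 -> 80% return over five years
print(round(npv(100_000, benefits, 0.10)))  # positive NPV favours the investment
```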

Assessing the value and risk of information systems: Software development risk can be defined in general terms as the potential occurrence of an event that would be detrimental to the software development process, plans, or products. The risk attribute is measured as the combined effect of the likelihood that the undesirable event will occur and the likelihood that a particular quantified, measured or assessed consequence will result given the occurrence of the undesirable event. Risk management is simply an organized means of identifying each risk area, quantifying the risk, developing ways to predict the occurrence of the undesirable event from attribute measurements, tracking measurements on a continuing basis, and developing planned alternatives or corrective actions in case the undesirable event occurs.
Top ten software risk items and suggested risk management techniques:
1. Personnel shortfall: Staffing with appropriate personnel, job matching, team building, securing key personnel agreements, cross-training, rescheduling key people, subcontracting.
2. Unrealistic schedule and budget: Detailed cost and schedule estimation, designing to cost, software reuse, requirements scrubbing, renegotiation with the client.
3. Developing the wrong software functions: Organization analysis, mission analysis, user surveys, prototyping, early user manual development, development of acceptance criteria.
4. Developing the wrong user interface: Prototyping, operational scenarios, task analysis, and user characterization.
5. Gold plating: Requirements scrubbing, prototyping and cost-benefit analysis.
6. Continuing stream of requirement changes: High change threshold, information hiding, incremental development, deferral of changes to later increments, tight change control, agreement on acceptance criteria.
7. Shortfalls in externally furnished components (procured hardware / software): Benchmarking, inspection, reference checking, compatibility analysis and acceptance testing.
8. Shortfalls in externally performed tasks (subcontractors): Reference checking, pre-award audits, award-fee contracts, competitive design or prototyping.
9. Real-time performance shortfalls: Simulation, benchmarking, modeling, prototyping.
10. Straining computer capabilities: Technical analysis, cost-benefit analysis, prototyping and performance analysis.
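The paragraph above describes risk as the combined effect of the likelihood of an undesirable event and the severity of its consequence. A common way to operationalize this is a risk exposure score (probability multiplied by impact); the following Python sketch, with invented probabilities and impact values, ranks a few of the risk items listed above so that the highest-exposure items get mitigation plans first.

```python
# Illustrative risk register: probabilities and impact values are invented.
risks = [
    {"item": "Personnel shortfall",             "probability": 0.30, "impact": 8},
    {"item": "Unrealistic schedule and budget", "probability": 0.50, "impact": 9},
    {"item": "Continuing requirement changes",  "probability": 0.60, "impact": 6},
    {"item": "Real-time performance shortfall", "probability": 0.20, "impact": 7},
]

for risk in risks:
    # Risk exposure = likelihood of the event * assessed consequence.
    risk["exposure"] = risk["probability"] * risk["impact"]

# Track the highest-exposure items most closely and plan corrective actions first.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["item"]}: exposure {risk["exposure"]:.1f}')
```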

Unit 7
Software Specification (Software Requirement Specification): Every manufactured product must be specified in some fashion. As a product becomes more complicated, it requires a more detailed specification. For example, if the product is a kind of chocolate, the specification may include 1) texture 2) color 3) taste 4) the precise amount of each ingredient 5) vitamin content. For a far more complex product, such as a space shuttle, the specifications will be much more detailed and specialized: they are needed to describe the requirements and design of each component of the shuttle, operational timelines, acceptance and performance standards and many other detailed requirements and design specifications.
A given product may be specified in terms of its behavior and performance. Such specifications are called operational specifications. A product may also be described by its effect on its environment and by its properties. Such specifications are called descriptive specifications. Both kinds of specification are used in describing what software must do. The software specification document, when completed, serves as a contract between client and developer. The more attention given to the software requirement specification document, and the more accurate and precise the SRS document, the better the quality of the final product.
While the possibility of developing a new system or, as is often the case, modifying an existing system is being discussed, a team of analysts (system engineers) is called into the process. Often this team may consist of one or two independent consultants, or its members may come from a prospective development organization. This team, in cooperation with the operator / user group, produces the initial system requirement specification that, often upon its final approval, will be published as part of a request for bid. The development organization selected by the client organization will revise or rewrite the original system requirements specification into several subordinate specification documents.
A generic SRS outline is suggested by the IEEE (Institute of Electrical and Electronics Engineers) software engineering standards. This table of contents is divided into three parts. The first two parts contain general information about the users, environment, operation and constraints surrounding the system as a whole. The third section provides specific requirements for every function.
Software requirement specification outline (table of contents):
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, acronyms and abbreviations
1.4 References
1.5 Overview
2. General description
2.1 Product perspective
2.2 Product functions
2.3 User characteristics
2.4 General constraints
2.5 Assumptions
3. Specific requirements
3.1 Functional requirements
3.1.1 Introduction
3.1.2 Inputs
3.1.3 Processing
3.1.4 Outputs
3.1.5 External interfaces
3.1.5.1 User interfaces
3.1.5.2 Hardware interfaces
3.1.5.3 Software interfaces
3.1.5.4 Communication interfaces
3.1.6 Performance requirements
3.1.7 Design constraints
3.1.8 Attributes
3.1.8.1 Security
3.1.8.2 Maintainability
3.1.9 Other requirements
3.1.9.1 Databases
3.1.9.2 Operations
3.1.9.3 Site adaptation
Introductory section of the SRS: Section 1 of the SRS includes general information about the software product, its objectives and its scope. In particular, it does the following:
1. Gives a full description of the main objectives of the SRS document.
2. Lists the key people involved in developing the document and identifies the contributions and responsibilities of each person.
3. Identifies the software products to be developed by name and function, and lists product limitations.
4. Describes the benefits of the software product as clearly and precisely as possible.
5. Defines all acronyms and abbreviations in a common location when they are used for the first time. A collection of acronym definitions serves as a glossary.
6. Includes both internal and external references for every specification listed.
General description section of the SRS: Section 2 of the SRS document should include general information regarding the factors that affect the final product and its requirements. The information to be included in section 2 of the SRS document includes the following:
1. Placement of the product in its proper perspective within the overall system.
2. A short description of the functions performed by the software.
3. A description of any user behavior characteristics that may be required in the use of the software.
4. Factors other than user and operator needs that may impose constraints on the software product. Such factors may impose physical limits on the design and development of the software product.
5. Any assumptions made in the SRS or implicit dependencies must be clearly stated, to identify assumptions made before the design and coding of the software.
Specific requirements section of the SRS: Section 3 of the SRS must contain all of the technical information and data needed to design the software. Specific software requirements are usually developed by people who are knowledgeable in the application area. A functional requirement normally refers to an independent portion of the software system. The following information is included in a functional requirement:
1. A description of the purpose of each functional requirement, including any special algorithms, formulas or techniques that must be used to achieve the objectives of the function.
2. A description of the input to the function: specifically, what inputs must be accepted, and in what format.
3. A description, in precise language, of the process that the function must perform on the input data it receives.
4. A description of the output desired from the function. This description should include the form and shape of the output, the destination of the output, the volume of output, the process by which output is stored or destroyed, and the process for handling error messages produced as output.
External interface requirements: This section should specify all the external interface requirements needed to make the software product a successfully operational product. Such external interface requirements include: 1) operator / user interface characteristics 2) hardware interfaces 3) interfaces with other software components or products, including other systems, utility software, databases and operating systems 4) communication interfaces specifying the various interfaces between the software product and networks 5) database requirements.
Performance requirements: This section should include any form of static or dynamic requirements specified by exact measurement. Such requirements may include the number of connections to the system, the number of simultaneous users, response time, the number of files, the size of files, the number of records, the number of transactions per interval, the amount of data to be processed within a time unit, database accesses per time unit, and similar specific performance requirements.
Overall design constraints: Any form of design constraint caused by environmental limitations, hardware limitations, audit and tracing requirements, accounting procedures or compliance with standards would be specified here.
Attributes: In this part, most of the attributes required of the software product would be specified. A short list of such attributes would include the following: 1) detailed security requirements 2) software product availability requirements 3) product maintainability requirements 4) mobility requirements and 5) reliability requirements.
Other software specification documents:
Software design specification: The software requirement specification document is used by the software engineer to produce the software design specification document, which the IEEE standard refers to as the software design description. The SDS is at the level of detail that can be given to the programming team to start coding the software.
Test and integration specification: One of the important reasons for this document is to specify tests and test cases for each module as well as for the integrated system. The acceptance criteria must be specified in this document and must be constantly updated as changes are imposed on the software system.
Software performance specification: In this document, a clear specification is given for all of the quantifiable performance requirements for the software system. Performance requirements include the following: 1) reliability 2) availability 3) response time 4) backup facilities 5) recovery procedures.
Maintenance requirement specification: This document should include all of the detailed agreements regarding the maintenance of the software during the operational phase by the system developer.
Software metrics (or attributes)
A software metric is a measure of some property of a piece of software or its specifications. Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have worked hard to bring similar approaches to software development. Software metrics are numerical data related to software development. Metrics strongly support software project management activities. A metric quantifies a characteristic of a process or product. Tom DeMarco stated, "You can't control what you can't measure." Software metrics are divided into the following categories:
Identification related metrics: A software product, component or module should be thoroughly and accurately identified. Given below is a template for identifying a software product and placing it in an appropriate framework.
Software identification template:
Software product name:
Product release and version:
Development organization:
Date (current):
Name of collector:
Narrative:
Product partition structure:
Sub-function / module / unit name:
Language employed:
References (specifications and related system documentation):
Size related metrics: Size can refer to the memory space occupied by the executable code or to the number of lines of source code. The size metric is important to the software development planning process. Executable source statements direct the actions of the computer at run time. Data declaration statements reserve memory at compilation time. Compiler directive statements define macros and create labels. Comment source statements are internal documentation statements provided to assist maintainers in reading and understanding the code.
Design and development metrics: Design and development metrics are those that define the characteristics of the software product and its operational and development environment. These metrics can be divided into product related metrics, operational metrics and metrics associated with the development environment.
1. Product related metrics: Reliability requirements; Database characteristics and requirements; Complexity
2. Operational metrics: CPU limitations; Memory constraints
3. Development metrics: Capability and experience of the system engineer / software engineer; Access to computing and development resources; Experience with the development environment, the programming language, the tools and the software engineering methods
Software quality metrics: Software quality is a general software metric measuring how well the software product meets stated function and performance requirements. The use of audits, reviews and walkthroughs has a measurable positive effect on software quality. Errors, faults, defects and failures are quantifiable attributes used in defining and measuring software reliability and software quality metrics.
Complexity related metrics: Software complexity attributes are often related to quality, reliability, cost and schedule. In general, if the requirements are more complex, then implementation will take longer and cost more. Software complexity metrics include the following: control operations, computation operations, device-dependent operations, data management operations, data structures, etc. It is not an easy task to assign a value to the complexity attribute; for this purpose, Boehm has given a software complexity rating chart to assign a value to the complexity metric. (A small counting sketch illustrating a simple complexity measure is given below.)
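As a minimal illustration of how a simple complexity-related metric can be computed automatically (this is a generic decision-count measure in Python, not Boehm's rating chart), the sketch below counts decision points in a piece of source code; counting the decisions and adding one gives a cyclomatic-complexity-style figure.

```python
import ast

def decision_count(source: str) -> int:
    """Count decision points (if / for / while / conditional expressions /
    boolean operators) in a piece of Python source code."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.IfExp, ast.BoolOp)
    return sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 100 and x % 2 == 0:
            return "large even"
    return "other"
"""

decisions = decision_count(sample)
print(decisions, "decision points; complexity estimate:", decisions + 1)
```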

Execution time related metrics:
Test related metrics: The most important test related metrics are the number of expected failures / faults, the number of actual failures / faults, and the time spent in identifying the failures and fixing the faults.
Documentation related metrics: Software products are usually accompanied by an abundant quantity of hard copy documents that must be produced: specifications, listings, program descriptions, test plans, test cases, test results and user manuals. Some of the metrics associated with documentation are the following: 1) name and type of the document 2) purpose of the document 3) page counts 4) figure counts 5) printing counts 6) revision schedule.
Performance related metrics: Performance metrics include those measures of software associated with the use of CPU, memory and input/output channel resources, and the use of absolute time in meeting software requirements. Typical performance parameters, each recorded as an actual value against a required value, include: CPU utilization; memory utilization (primary and secondary); response time; I/O channel capacity; terminals supported; external device use.
Labour related metrics: A valuable parameter for any manager or software engineer is the amount of effort required to perform a specific task or a set of tasks. The labour related metrics are used in preparing cost estimates and sizing the workforce needed to perform particular tasks. A second use of labour metrics is the performance evaluation of development team members.
Software Quality Assurance
Quality is the degree to which a finished product conforms to explicitly stated function and performance requirements. Quality is also the degree to which the development and manufacture of the product conform to stated policies and standards. Quality is whatever the customer or client says it is. According to the IEEE, software quality assurance (SQA) can be thought of as having two major components. The first component consists of the formal and informal measures taken to assure that the software product delivered at the end of the development cycle conforms to actual system needs. The second component alludes to the formal and informal steps taken to assure that the software product has been developed in strict accordance with applicable standards and practices. The contents of this agreement include the following:
1. Identification and allocation of software requirements and documentation
2. Software requirement specification and documentation
3. Software design specification and documentation
4. Tools and training
5. Coding process and documentation
6. Testing and test documentation
7. Delivery
8. Post delivery support and evaluation
9. Maintenance
Software quality assurance planning:
The following is a list of fundamental tasks that must be included in an SQA plan:
1. Selection and modification of quality assurance practices: Identification of specific tools, techniques and requirements for software development quality monitoring, shaping each tool and technique to fit the unique aspects of the development.
2. Software project planning evaluation: The evaluation of the software development planning process and plan parameters is one of the most important reviews.
3. Acceptance requirements evaluation: Early agreement, in detail, as to what constitutes an acceptable product is essential to the perceived quality of the delivered product.
4. Specification evaluation: Poorly defined requirements will ultimately impact every aspect of a software development process, including cost, schedule and technical integrity.
5. Design process evaluation: If the software design process follows a planned methodology and is firmly based on requirements, then the quality of the end product is assured.
6. Coding practices evaluation: Poor coding practice can negate good requirements specifications and good design.
7. Testing and integration process evaluation: The adequacy of the test plans and the integrity of the entire test program are the surest indicators of a product's quality.
8. Management and project control process evaluation: Project management and controls are essential ingredients for software development success.
9. Configuration management practice evaluation: Maintaining tight control of the software configuration during the dynamic development process is an important contributor to a product's ultimate quality.
10. Tailoring quality assurance procedures: Although the quality assurance discipline follows a consistent structure, the techniques must be tailored to the specific software development.
Software quality assurance process: SQA should begin at project inception, and every member of the software development team should be dedicated to SQA. The SQA process begins by performing an in-depth review of the software development process, its standards, documentation, policies and practices, verifying the following:
a) Completeness, applicability, team commitment, training needs, roles and responsibilities, management, staffing, self review and documentation.
b) Full understanding of contractual requirements: Specifically, what is required by the contract and the statement of work? A careful reading of the contract and statement of work (or their equivalent) is the only way to obtain the required understanding.
c) Understanding of acceptance test criteria: The development of negotiated acceptance test criteria incorporated into the contract is a crucial point to be reviewed by the SQA team.
d) Definition of formal and informal reviews: The quality gates, with explicit entrance and exit requirements and a well thought out checklist, should be defined.
e) Assessment of development tools: A quality check of all tools for completeness, certification, documentation, maintenance or support arrangements, maturity and training in their use is an important quality assurance responsibility.
f) Assessment of traceability: A formal process should be described for tracing defects in products, processes or procedures to their source.
The next step in the process is to establish a set of quality attributes that will serve as indicators of software quality during the development cycle.

Unit-8
Knowledge and the Human Dimension
A number of factors influence the responses of computer users and the performance of software developers. Human factors are of interest from two points of view: the user's perspective and the software developer's perspective. Cognitive engineering (CE) has identified three main influences on the perspective of software users: task representation, the agent paradigm and the view of the external world as a task domain. CE is a form of applied behavioral science concerned with the study and development of principles, methods, tools and techniques that guide the design of computerized systems intended to support human performance.
[Figure: Influences on programmer performance - the personal software process, individual differences, group behavior, organizational behavior, human factors and cognitive science all influence software developer performance.]
In the context of software engineering, the study of human factors is concerned with the relationships between the stimuli presented to developers and the responses they make. The stimuli can be software tools, information displays, requirements diagrams, documentation, input devices, various types of computer programs and templates.
Programming and cognitive ergonomics: Cognitive ergonomics is the study of problem solving, information analysis and procedural skills relative to the influence of human factors. Human factors questions concern programming language structures, computer programming, and human-computer interaction in terms of the mental effort required.
Cognitive science and programming: Cognitive science is the study of how knowledge is acquired, represented in memory, and used in problem solving. The cognitive science approach to computer programming has focused on how programmers represent knowledge, how program structures are learned and how knowledge is applied in developing software.
[Figure: Software comprehension model - rules of discourse and an understanding process (matching documents to plans) connect external representations (documents) and programming with the internal representation, i.e. the current mental representation of the computer program (plans / schemas).]
Learning to program: Two learning styles have been identified in the study of how novices learn computer programming.
1. Comprehension learning: The novice grasps the overall layout of a program but may not understand the rules needed to operate with or on the information needed to write a program. These learners are mainly interested in understanding, not doing.
2. Operational learning: The novice grasps the rules needed to write a program but may not grasp the complete picture of the domain knowledge. These learners are primarily interested in doing programming, not necessarily in understanding it.
Comprehension learners would do well in specifying requirements, trouble-shooting and reverse engineering during software maintenance, but not in design. Operational learners would gravitate toward forward engineering (implementation of requirements) and would do well in software design. An ideal team member would be comfortable in both settings.
Characteristic design behaviors:
Design behavior | Explanation
Formation of mental model | Supports mental simulation of an emerging computer program
Simulation | Checks for unforeseen interactions and external consistency with the specification
Systematic expression | Ensures an equal level of detail is developed across functions
Representing constraints | Aids in simulating unfamiliar elements
Note making | Captures issues for different levels in the system hierarchy
Personal software process model: The personal software process (PSP) model provides a framework to help software engineers plan, track, evaluate and produce high quality products. The PSP model characterizes the maturity level of an individual engineer. In the PSP, the focus is on a precise framework for evolving the skills of a software engineer. In other words, the PSP provides a framework for a personal capability maturity development cycle that is embedded inside each phase of a software development process.
Human computer interaction: Three main elements are identified in the use of computers to support humans in performing various tasks: the task domain (external world), agents and task representation. The study of these elements is aimed at designing human-computer systems that enhance the performance of tasks by humans.
Verification and Validation
The terms verification and validation are used frequently in software development in general and in software testing in particular. They can be distinguished by the activities associated with them.
Software verification: All actions taken at the end of a given development phase to confirm that the software product as being built correctly satisfies the conditions imposed at the start of the development phase.
Software validation: All actions taken at the end of the development cycle to confirm that the software product as built correctly reflects the software requirement specification or its equivalent.

During and after the implementation process, the program being developed must be checked to ensure that it meets its specification and delivers the functionality expected by the people paying for the software. Verification and validation is the name given to these checking and analysis processes. Verification and validation starts with requirements reviews and continues through design reviews and code inspections to product testing. Verification and validation are not the same thing, although they are often confused. Boehm expressed the difference between them as follows:
Verification: Are we building the product right?
Validation: Are we building the right product?
These definitions tell us that the role of verification involves checking that the software conforms to its specification. One should check that it meets its specified functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software system meets the customer's expectations. It goes beyond checking that the system conforms to its specification to showing that the software does what the customer expects it to do. The ultimate goal of the verification and validation process is to establish confidence that the software system is fit for its purpose. The level of required confidence depends upon the following factors:
1. Software function: The level of confidence required depends on how critical the software is to an organization.
2. User expectations: Since the 1990s, users have had high expectations of their software and are not willing to accept system failures, so software companies must devote more effort to verification and validation.
3. Marketing environment: When a system is marketed, the sellers of the system must take into account competing programs, the price customers are willing to pay for a system, and the required schedule for delivering that system. Where a company has few competitors, it may decide to release a program before it has been fully tested and debugged because it wants to be first in the market. When customers are not willing to pay high prices for software, they may be willing to tolerate more software faults. All of these factors must be considered when deciding how much effort should be spent on verification and validation.
Within the verification and validation process there are two complementary approaches to system checking and analysis:
1. Software inspections or peer reviews: In this process, people analyze and check system representations such as the requirements document, design diagrams and the program source code. Inspections can be used at all stages of the process. Software inspections and automated analyses are static verification and validation techniques, as they do not require running the software on a computer.
2. Software testing: Software testing involves running an implementation of the software with test data. It examines the outputs of the software and its operational behavior to check that it is performing as required. Testing is a dynamic verification and validation technique.
Inspection Process

Stages of the inspection process (figure): Planning, Overview, Individual preparation, Rework, Follow-up.
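To make the contrast between static inspection or analysis and dynamic testing concrete, here is a minimal sketch in Python; the function, the coding-standard rule and the test data are all invented for illustration. The static check examines the source text without executing it, while the dynamic test runs the code against expected outputs.

```python
import inspect

def average(values):
    # Deliberately naive implementation used as the artifact under review.
    return sum(values) / len(values)

# Static check (inspection-style): read the source and flag a rule violation
# without executing the function, e.g. "every function must have a docstring".
source = inspect.getsource(average)
if '"""' not in source and "'''" not in source:
    print("Static review finding: function 'average' has no docstring.")

# Dynamic check (testing): execute the implementation with test data and
# compare the observed behaviour against the expected result.
assert average([2, 4, 6]) == 4
try:
    average([])   # reveals a failure that a reading of the text may miss
except ZeroDivisionError:
    print("Dynamic test finding: empty input causes a division-by-zero failure.")
```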
