
DEPARTMENT OF MASTER OF BUSINESS ADMINISTRATION
SUBJECT CODE: 571214
SUBJECT NAME: MANAGEMENT INFORMATION SYSTEM
CLASS / SEMESTER: I MBA / II SEM

571214 MANAGEMENT INFORMATION SYSTEM

1. INTRODUCTION (9 hours)
Data, Information, Intelligence, Information Technology, Information System, evolution, types based on functions and hierarchy, System Analyst - Roles and Functions.

2. SYSTEM ANALYSIS AND DESIGN (9 hours)
SDLC, SSLC, System Analysis and System Design. Tools: DFD, ER, Object Modeling. DBMS - RDBMS - OODBMS.

3. INFORMATION SYSTEM (9 hours)
Functional areas - Finance, Marketing, Production, Personnel, Material. IS, DIS, EIS, KMS, GIS, International Information System.

4. SECURITY AND CONTROL (9 hours)
Security, Testing, Error Detection, Controls, IS Vulnerability, Computer Crimes, Securing the web, intranets and wireless networks, software audit, ethics in IT.

5. NEW IT INITIATIVES (SYSTEM AUDIT) (9 hours)
E-business, e-governance, ERP, SCM, e-CRM, data warehousing and data mining, business intelligence, pervasive computing, CMM.

TOTAL: 45 hours

TEXT BOOK
Kenneth C. Laudon and Jane Price Laudon, Management Information Systems: Managing the Digital Firm, Pearson Education Asia.

REFERENCES
Gordon B. Davis, Management Information System: Conceptual Foundations, Structure and Development, McGraw Hill, 1974.

UNIT-1 INTRODUCTION TO MIS

Data
Data is a stream of raw facts representing events such as business transactions. It consists of unprocessed facts and figures without any added interpretation or analysis. "The price of crude oil is $80 per barrel" is data.

Information
Information is clusters of facts that are meaningful and useful to human beings in processes such as making decisions. It is data that has been interpreted so that it has meaning for the user: a collection of facts organized in such a way that they have value beyond the facts themselves. "The price of crude oil has risen from $70 to $80 per barrel" gives meaning to the data and so is information to someone who tracks oil prices.

Knowledge
Knowledge is a combination of information, experience and insight that may benefit the individual or the organization. It is the awareness and understanding of a set of information and of the ways that information can be made useful to support a specific task or reach a decision. "When crude oil prices go up by $10 per barrel, it is likely that petrol prices will rise by 2 paise per litre" is knowledge.

For ex: A set of raw sales figures is data. For the Sales Manager tasked with solving a problem of poor sales in one region, or deciding the future focus of a sales drive, the raw data needs to be processed into a sales report. It is the sales report that provides information. This information, when applied with a detailed analysis, can improve sales in that region.
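The distinction is easy to see in a short program. The following Python sketch (all figures and region names are invented for illustration) turns raw sales figures - data - into a report that carries meaning for a sales manager - information:

# Raw, unprocessed facts: monthly sales figures per region (data).
# All figures and region names here are made up for illustration.
raw_sales = [
    ("North", 120_000), ("South", 45_000),
    ("East", 98_000), ("West", 101_000),
]

# Processing: organize and interpret the raw facts so that they carry
# meaning for the Sales Manager (information).
total = sum(amount for _, amount in raw_sales)
average = total / len(raw_sales)

print(f"Total sales: {total}, average per region: {average:,.0f}")
for region, amount in raw_sales:
    if amount < 0.6 * average:   # flag regions well below the average
        print(f"{region} is underperforming: {amount} vs average {average:,.0f}")

The raw list by itself answers no question; only after processing does it tell the manager which region needs attention.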

DEFINITION OF INFORMATION SYSTEM

An Information System is a system that accepts data resources as input and processes them into information products as output. It supports an organization's business strategies, business processes, and organizational structure and culture to increase the business value of the enterprise in a dynamic business environment. An information system is a set of interrelated components that collect (or retrieve), process, store, and distribute information to support decision making and control in an organization. For ex: ATMs, airline reservation systems, course reservation systems.

BASIC COMPONENTS OF AN INFORMATION SYSTEM

An Information System is defined as a set of people, procedures and resources that collects, transforms and disseminates information, and provides feedback to meet an objective in an organization. All information systems need five basic components to perform various business activities:

i) Hardware comprises the mechanical, magnetic, electronic and electrical components making up a computer system. It refers to objects that you can actually touch, such as disks, disk drives, display screens, keyboards, printers, boards and chips.

ii) Software comprises the instructions that tell a computer what to do and how to do it. It is a general term for the various kinds of programs used to operate computers and related devices: the instructions that cause the computer system to behave in a given way.

System software offers a protective shield to all software applications and provides support to the physical components of the computer. It coordinates all external devices of the computer system, such as the printer, keyboard and display. Ex: Operating system - Windows XP.

Programming software provides tools that assist programmers in writing computer software. Compilers, interpreters, linkers and text editors are some of the basic tools used in programming software. Ex: C compiler.

Application software is any tool that functions and is operated by means of a computer, with the purpose of supporting or improving the software user's work. It may be for a general or a special purpose, and it is also used for commercial purposes. Application software is widely used in the educational, business and medical fields; computer games are its most popular form. Industrial automation, databases, business software, medical software and educational software prove to be of great help in their respective fields. Ex: General purpose - word processor (MS-Word); special purpose - 3D Studio Max.

iii) People - There are many roles for people in information systems. System analysts, programmers, technicians, engineers, network managers, database managers, knowledge workers, production or service workers, data entry operators and many other computer professionals who utilize computer-based information systems are the personnel in an information system.

iv) Standard Operating Procedures are sets of rules or guidelines which an organization establishes for the use of a computer-based information system: procedures that define how employees and managers should deal with certain situations.

v) Data & network resources - Data is a vital resource in an organization and must be managed. Data consists of the facts, figures and values from which information is derived. A database management system is a computer-based tool used to set up a database, make it available within an organization and control the integrity of its data resources. A network is a collection of computers and devices connected to each other, allowing computers to communicate and share resources and information.

Capabilities expected from an information system include transaction processing systems, decision support systems, executive information systems, management information systems, workflow systems and expert systems.

An Information System (IS) is nothing more than a group of interrelated elements that work together to capture, process, maintain and disseminate information. At its most basic, any IS consists of an integrated set of data inputs, processes, storage mechanisms, and outputs.

Inputs include all of the raw data that your institution gathers or generates as a part of its operations, such as a loan amount, client name, account number, interest rate or payment amount.

Processes consist of related tasks that are intended to achieve specific goals and objectives; they are the ongoing activities that capture, manipulate, relate and analyze data in the system. For example, one process might compare actual loan payment amounts against scheduled payments in order to flag, or otherwise highlight, those loans that are in arrears. To achieve the system's objectives, these processes translate data (i.e., the raw facts) into useful and meaningful information.

Storage provides a means for organizing, relating and preserving the system's data. It maintains relationships among items of data (for example, between clients and loans, or clients and savings balances) and stores and safeguards it. In a manual system, storage may be as simple as a set of file folders and a filing cabinet. In a computer-based system, storage generally refers to database management software and a computer hard drive or other similar device.

Outputs are the information that your system generates to describe or summarize your historical activities and to help guide future operations and decision-making. They include financial statements, lists, transaction journals, management reports, online inquiries and various other analyses. In some cases, the format and content of the output is defined by the software (a standard or preformatted report). In others, the user determines the content or format (an ad hoc or custom report).

Information becomes knowledge in the hands of people who can use it creatively to innovate and solve business problems.
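The arrears-flagging process mentioned above can be sketched in a few lines of Python. The loan records, field names and figures here are hypothetical, not the layout of any real system:

# One record per loan: scheduled vs. actual payments to date (illustrative data).
loans = [
    {"loan_id": "L001", "client": "A. Kumar",  "scheduled": 5000, "paid": 5000},
    {"loan_id": "L002", "client": "B. Devi",   "scheduled": 5000, "paid": 3200},
    {"loan_id": "L003", "client": "C. Thomas", "scheduled": 8000, "paid": 7900},
]

def flag_arrears(loans, tolerance=0):
    """Compare actual payments against scheduled payments and
    return the loans that are in arrears (a process turning data
    into information)."""
    return [loan for loan in loans
            if loan["scheduled"] - loan["paid"] > tolerance]

for loan in flag_arrears(loans):
    shortfall = loan["scheduled"] - loan["paid"]
    print(f"{loan['loan_id']} ({loan['client']}) is in arrears by {shortfall}")

The input is the raw payment data, the function is the process, the list of loans is the stored data, and the printed report is the output.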

FRAMEWORK OF AN IS

[Diagram: within the business environment, a system boundary encloses Input/Data, Process and Output/Information, supported by Storage and a Feedback/Control loop.]

A framework is defined as a brief set of ideas for organizing a thought process about a particular type of object or situation. The information system framework identifies the major components and activities that enable the information system to deliver its objective. An information system

uses the resources of people, hardware, software, data and networks to perform input, processing, output, storage and control activities that convert data resources into information products. An IS is nothing more than a group of interrelated elements that work together to capture, process, maintain and disseminate information. At its most basic, any IS consists of an integrated set of data inputs, processes, storage mechanisms, and outputs.

TYPES OF INFORMATION SYSTEM

There are two types of information systems:

Manual Information Systems are systems created and maintained on paper, either totally or in conjunction with one or more simple computer spreadsheets. This is the earliest and most prevalent form of data processing. People receive input data by seeing or hearing them; the data is stored in the brain, where the manipulations are done. The outputs of this type of information processing are oral or written reports or a variety of physical actions. A manual IS generally consists of the following elements:

Staff and consultants - People are the most important element of any IS, and often the largest cost component in the IS budget. In a manual system, the personnel include clerical, accounting and internal auditing personnel, plus consultants and outside service providers such as accountants, external auditors and IT professionals. Because manual systems are labor intensive, they typically require more people to maintain than do computerized systems with similar capabilities. However, the staff generally do not require sophisticated computer skills. If portions of the system are maintained in simple spreadsheets, staff members need to be comfortable using a keyboard and mouse, and basic Windows- or DOS-based software programs, such as Excel and Windows Explorer. Comprehensive training and cross-training are vital for the staff of a manual system, as the system is so dependent upon the people who operate and maintain it. Cross-training allows your IS to continue to function in the absence of one or more people. It also allows your institution to grow somewhat more rapidly, as additional knowledgeable staff are potentially available to accommodate an increased workload.

Facility and environment - The requirements that a manual IS imposes on an institution's physical facility and operating environment typically include: furniture and equipment for staff, including desks, chairs and locking filing cabinets; buildings, plus offsite record storage if available; environmental controls, such as those for fire protection; and electric and communications utilities and other infrastructure. Manual systems are highly labor intensive; as such, training and cross-training are vital to maintaining the system.

Recordkeeping systems - A manual recordkeeping system typically includes the processes and practices that generate, organize and maintain data - specifically, data related to accounting records, historical reports, and legal and administrative documents. Accounting records include such things as source documents (e.g., invoices and bank statements), ledgers, transaction journals and audit records. Historical reports analyze and summarize past activities or represent the status of the institution at various points

in the past, such as financial statements, performance indicator/ratio reports and a broad range of operating and management reports. Legal and administrative documents include loan agreements and payment schedules, business plans, policy manuals, community surveys, loan application forms, training records, bylaws and incorporation documents, visitation reports, and past-due notices. Manual recordkeeping systems maintain accounting records, various historical and operational reports, and legal and administrative documents. Activities related to manual recordkeeping include the following:

Data gathering: The manner in which data is generated and collected for input to the manual system.

Data recording, processing: The procedures for entering data into the system and manipulating it, as necessary, to calculate or otherwise provide needed information.

Data storage, maintenance, record retention: Mechanisms for storing and preserving data, such as locking file cabinets; also record retention and destruction policies.

Data security, protection, disaster recovery: The procedures for safeguarding information and for protecting or restoring it in the event of a disaster, such as a flood or fire, or the loss of key personnel.

Data analysis, reporting, dissemination: The manner in which information is communicated throughout your institution.

Internal controls, information system audits: Those practices intended to ensure the integrity of the data in the manual system.

Business practices and procedures - The business policies and practices reflect the institution's mission and culture, as well as the overall strategy and tactics in its business plan. These practices provide the foundation for the recordkeeping and information systems, as they are the business activities that the systems are designed to manage. They also include certain internal controls, audits and disaster recovery practices intended to safeguard the information assets.

Advantages and Disadvantages of Manual Systems

ADVANTAGES
* Less expensive, initially
* No computer-literacy requirements
* Adaptable
* Places fewer demands on the infrastructure

DISADVANTAGES
* Expensive as the organization grows
* Quality is less assured
* Requires strong internal controls
* Security can be problematic
* Lower productivity
* Limited growth potential
* Lack of institutional memory

Computer-based Information System - An information system that uses hardware and software to perform its information processing activities. It consists of people, procedures, data, programs and computers. It serves as a data storage and retrieval device, has data processing capabilities, and acts as a communication device by giving the required outputs. Computer-based information systems are highly automated systems that are created and maintained on either a standalone computer or a networked computer system. A computer-based IS consists of a number of integrated elements, including:

Staff, consultants and vendors - As with manual systems, people are the most important component of a computerized IS. The personnel involved in establishing and maintaining such a system include:

Clerical, accounting and internal audit staff, including existing personnel and any additional people deemed appropriate for the new system. For existing staff, a new system may involve new or altered responsibilities and working relationships. Cross-training among the staff is important in a computerized system to ensure that the system continues to function in the absence of one or more people.

Programmers, system and hardware support/maintenance persons, and user support staff (if you maintain an in-house IT function). If you outsource your IT work, you should assign at least one staff person the responsibility for maintaining a library of your hardware and software documentation and other computer-related records, such as preventive maintenance schedules and repair history. The same person can act as your liaison with IT consultants to coordinate support requests and other necessary work.

Consultants and outside service providers, including IT consultants and external auditors. It is essential that you find consultants you are comfortable working with, and that those consultants are willing and able to communicate using language that you can understand. If a consultant is not responsive to your requests, is not willing to keep you informed, or implies that any part of the project is too technical for you to understand, it is a cause for concern: the chance of successfully concluding the project is reduced. Consider replacing the consultant as soon as is practical. Hire consultants that management is comfortable with on a personal level, and that communicate effectively with them and the other staff members.

Computer hardware - The hardware element of a computer-based IS typically consists of the following categories of equipment:

Input devices, such as keyboards, mice, scanners, bar code readers, smart cards and voice recognition equipment.

Output devices, including printers, monitors (computer screens) and backup drives.

Processing devices, such as desktop computers (CPUs) and servers.

Telecommunications devices, such as network hardware (for multi-user and multi-location systems) and modems.

System software - The basic set of computer programs, usually supplied by the hardware manufacturer, that control and coordinate the workings of all of your computer hardware devices. System software also manages the interaction between your computer hardware and software applications, and creates portions of the user interface. It manages the various hardware devices and interfaces with applications software. It also includes communications software, language translators and utility programs that manage such routine tasks as backups and data compression.

Application software - Applications are user-oriented software programs that automate business processes. The programs are referred to as applications (or, sometimes, modules) because each automates a specific business function. General ledger (GL)/accounting, lending, savings, accounts payable and inventory programs, word processors and spreadsheets are generally considered to be software applications that automate business processes. If an application is developed specifically for use by your institution, based on your standard practices and operations, it is a custom application and, most likely, owned. If, on the other hand, the application was acquired from a software vendor that markets it commercially to other institutions, it is an off-the-shelf application (also referred to as turnkey or commercially available).

Integrated applications generally share access to information in the database, such as a client file or chart of accounts. This eliminates redundant data storage. For ex: savings and lending applications require basic client information to identify the specific client that is depositing or withdrawing savings, or borrowing or repaying loaned funds. If these applications are integrated, they can share the same client file; if they are not, each will maintain its own client information. If you purchase two or more standalone applications, it is theoretically possible to develop software programs to link them, thereby approximating the operation of an integrated system. However, this approach might not be practical if the underlying data and processing structures of the applications differ significantly. These links also can be expensive to create and maintain.

Applications are developed, or written, in high-level programming languages such as C, COBOL or Visual Basic. An application's programming language is the coding scheme that instructs the system to perform desired actions. Taken together, these instructions are referred to as source code. Many software vendors do not provide customers with source code for their products. Some software vendors provide access to limited portions of the source code, while others provide source code for an additional fee. If you do not have source code, you cannot make changes or implement your own enhancements.

Information database - The system's database comprises the fields, records and files that contain your organization's current and historical data:

Field - the numbers and characters that, together, represent one business fact; for ex: a client name, a loan number or a loan disbursement date.

Record - a collection of related fields; for example, a client record includes all of the data related to one client (e.g., name, client number, gender, age, address).

File - a collection of related records; for example, a client file includes all of the records for all of your clients.

Depending on the type of database (e.g., hierarchical or relational), different terms may be used; however, the general field-record-file concepts are still relevant. A technical discussion of database concepts is beyond the scope of this guide. The database also establishes and maintains the relationships between all of the items of data. Additional software programs, collectively referred to as the DBMS (database management system), manage the interface between the database and the software applications.
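A short Python sketch can make the field-record-file hierarchy, and the idea of integrated applications sharing one client file, concrete. All names and figures below are invented for illustration:

from dataclasses import dataclass

@dataclass
class ClientRecord:
    """A record: a collection of related fields for one client."""
    client_number: int   # field
    name: str            # field
    gender: str          # field
    age: int             # field

# A file: a collection of related records, here one per client.
client_file = [
    ClientRecord(1, "R. Begum", "F", 34),
    ClientRecord(2, "S. Nair",  "M", 41),
]

def lookup(client_number):
    """Both the savings and the lending application use this one
    shared client file, instead of each keeping its own copy."""
    return next((c for c in client_file
                 if c.client_number == client_number), None)

# The savings application identifies the depositing client ...
print("Deposit by:", lookup(1).name)
# ... and the lending application identifies the borrowing client.
print("Loan repayment by:", lookup(2).name)

If the two applications were not integrated, each would keep its own copy of the client data, with the duplication and inconsistency risks described above.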

Physical facility and environment - The demands on your institution's physical facility and environment are more complex for a computer-based system than for a manual system. They include:

i. Furniture and equipment, such as chairs, desks, filing cabinets, keyboard trays and wrist rests.
ii. Buildings, plus an offsite data storage facility, if possible.
iii. An IT library or other central location for system and software manuals and other reference materials.
iv. Environmental controls, such as those for temperature, humidity and fire protection. Temperature and humidity requirements for your computer hardware are detailed in the documentation supplied by the manufacturer.
v. Electric and communications utilities and other infrastructure. Computer systems generally also require some form of generator or uninterruptible power supply (UPS).

Business practices and procedures - The business policies and practices are an important part of your computerized IS. They reflect the organization's mission and culture, as well as the overall strategy and tactics in its business plan. These practices provide the foundation for the IS, as they are the business activities that the system is designed to manage. If there is a change in the system, the related policies and procedures are likely to change somewhat as well. A more sophisticated system

accommodates more sophisticated practices. The business practices also include certain internal controls, audits and disaster recovery practices intended to safeguard your information assets. The physical facility and the business policies and practices are a vital part of, and provide the foundation for, your information system.

Advantages and Disadvantages of Computer-Based Systems

ADVANTAGES
* Ensures the quality of information
* Less expensive to operate for medium-to-large, sophisticated operations
* Enhances productivity
* Provides more sophisticated capabilities
* Facilitates growth
* Develops institutional memory
* Utilizes staff more efficiently
* Enhances data security
* Automates important security controls
* Increases productivity and reduces the potential for error and timing issues

DISADVANTAGES
* More expensive to establish
* Places greater demands on the infrastructure
* Requires computer-literate staff
* Can generate voluminous data
* Potentially inflexible in core processes

FUNCTIONS OF AN INFORMATION SYSTEM

A BUSINESS PERSPECTIVE ON INFORMATION SYSTEMS

An information system:
- Represents an organizational and management solution, based on information technology, to a challenge posed by the environment
- Is an important instrument for creating value for the organization
- Transforms information through the various stages of the business information value chain, adding value at each stage

Information systems literacy includes behavioral knowledge about organizations and the individuals using information systems, as well as technical knowledge about computers. Computer literacy, by contrast, denotes knowledge about information technology, focusing on understanding how computer-based technologies work.

MANAGEMENT CHALLENGES
- Design competitive and effective systems
- Understand the system requirements of a global business environment
- Create an information architecture that supports the organization's goals
- Determine the business value of information systems
- Design systems that people can control, understand and use in a socially and ethically responsible manner

COMPETITIVE BUSINESS ENVIRONMENT

Four powerful worldwide changes have altered the business environment:

Emergence of the Global Economy
- Management and control in a global marketplace
- Competition in world markets
- Global work groups
- Global delivery systems

Transformation of Industrial Economies
- Knowledge- and information-based economies
- Productivity
- New products and services
- Knowledge: a central productive and strategic asset
- Time-based competition
- Shorter product life
- Turbulent environment
- Limited employee knowledge base

Transformation of the Business Enterprise
- Flattening
- Decentralization
- Flexibility
- Location independence
- Low transaction and coordination costs
- Empowerment
- Collaborative work and teamwork

Emergence of the Digital Firm
- Digitally enabled relationships with customers, suppliers, and employees
- Core business processes accomplished via digital networks
- Digital management of key corporate assets
- Rapid sensing of, and responding to, environmental changes

DEFINITION OF MIS

MIS is the study of information systems focusing on their use in business and management. Management Information Systems (MIS) is a field of study that offers great excitement, vast opportunities for lifelong learning, and a chance to be at the leading edge of the tremendous changes in our world.

A Management Information System is used to transform data into useful information as needed to support managerial decision-making with structured decisions, which are those based on predictable patterns of activity. A Management Information System is a network of communication channels and information processing centers that collects information from its sources of origin, (i) storing, updating and processing it, and (ii) supplying the processed information to the various users managing the organization.

According to McLeod (1986), there are four major tasks of a typical MIS:
- Data gathering
- Data entry
- Data transformation and
- Information utilization.
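Read as a pipeline, McLeod's four tasks might look like the following Python sketch; the transaction lines are invented for illustration:

# 1. Data gathering: raw transactions arrive from their sources of origin.
gathered = ["2024-01-05,sale,1200", "2024-01-06,sale,800", "2024-01-06,refund,200"]

# 2. Data entry: parse and store the raw facts in a structured form.
entered = []
for line in gathered:
    date, kind, amount = line.split(",")
    entered.append({"date": date, "kind": kind, "amount": int(amount)})

# 3. Data transformation: process the stored data into information.
net = sum(t["amount"] if t["kind"] == "sale" else -t["amount"] for t in entered)

# 4. Information utilization: supply the result to the managers who use it.
print(f"Net sales for the period: {net}")   # -> 1800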

CONTEMPORARY APPROACHES TO MIS

The contemporary approaches found in MIS are:

Technical approach - A combination of computer science, management science and operations research, with a practical orientation towards building systems and applications. The technical approach to information systems emphasizes mathematically based models for studying information systems, as well as the physical technology and formal capabilities of these systems.

Behavioural approach - Concerned with the behavioural issues, drawn from sociology, economics and psychology, that arise in the development and long-term maintenance of information systems. Issues such as strategic business integration, design, implementation, utilization and management cannot be explored usefully with the models used in the technical approach.

Sociotechnical approach - In this perspective, the performance of a system is optimized when both the technology and the organization mutually adjust to one another until a satisfactory fit is obtained.

OBJECTIVES OF MIS

The basic objectives of MIS are:
- To provide the requisite information support for managerial functions within the organization
- To make available the right information at the right place at the right time at the lowest cost
- To ensure that wrong and unwanted information is not generated and that data overload is avoided.

SCOPE OF MIS

The scope of MIS is:
(i) To provide managerial end users with information products that support much of their day-to-day decision-making needs.
(ii) To provide a variety of reports and displays to management.
(iii) To provide information regarding products in advance to the managers.
(iv) To retrieve information about internal operations from databases that have been updated by TPS.
(v) To obtain data about the business environment from external sources and to serve the management.

CHARACTERISTICS OF MIS

Jerome Kanter has listed the following characteristics of MIS:
- MIS is management oriented
- Management directed
- Integrated system (5 Ms)
- Avoids redundancy in data storage
- Common data flow
- Heavy planning element
- Sub-system concept
- Common database
- Flexibility and ease of use and
- Computerization.

1. MIS is management oriented: The design of MIS takes care of the managers who meet the information requirement. The development of the system starts after deciding the management needs and keeping in view the overall objectives of the management.

2. Management directed: Since MIS requires heavy planning and investment, management is deeply involved in the design, implementation and maintenance of the system.

3. Integrated system: The five Ms - men, money, materials, machines and methods - are the basic resources of management. Information is recognized as an important factor, and its effective use contributes to the success of the management. MIS is the catalyst and nerve centre of an organization. It has a number of subsystems; in order to make these subsystems effective, it is necessary that they be viewed as an integrated system, so that the result is balanced. MIS binds the databases of all subsystems of the business system and, through information interchange, integrates the organization.

4. Avoids redundancy in data storage: Since MIS is an integrated system, it avoids unnecessary duplication and redundancy in data gathering and storage.

5. Common data flow: To achieve the objective of integration and to avoid duplication and redundancy in data gathering, storage and retrieval, data capturing is usually confined to original sources and is done only once. Common data flow tries to utilize minimum data processing effort and strives to minimize the number of output documents and reports. This type of integration can avoid duplication, simplify operations and produce an effective MIS. However, separate files should be opened where they are significant to one application, alongside the use of common data flow.

6. Heavy planning element: The design and implementation of MIS require detailed and meticulous planning of such activities as acquisition and deployment of hardware, software and humanware, data processing operations, information presentation and feedback.

7. Subsystem concept: MIS makes provision for breaking into various subsystems based on activity as well as the functions of the organization, so that effective implementation of each subsystem is possible one at a time.

8. Common database: It acts as a master that holds the functional subsystems together. It achieves this aim by allowing several functional subsystems access to different master files of data. The differing data requirements of different levels of management also support the need for more than one database: unique databases and a common database.

9. Flexibility and ease of use: MIS is designed to be flexible enough to accommodate new requirements. The system is easy to operate, so that not much computer skill is required on the part of the user to access the database for information or to carry out special analyses of data.

10. Computerization: MIS can be computerized because of its nature as a comprehensive system. This provides speed in creating and accessing files, accuracy, consistency in data processing, reduction in clerical work and avoidance of human errors.

IMPORTANCE OF MIS

MIS affects all areas of business:
- Manufacturing
- Accounting & Finance
- Human resources
- Marketing
- Top management and
- Performance evaluations and expectations.

EVOLUTION OF MIS

Factors which influenced the evolution and fast growth of MIS:
- Management theory and techniques
- Management accounting and its applications
- Changes in production and distribution methods
- Development of management science
- Introduction of the computer into business data processing.

The evolution of the various computer-based information systems is as follows:

1950 to 1960 - EDP (Electronic Data Processing, including transaction processing, record keeping, accounting and other applications)
1960 to 1970 - MIS (Management reports of prespecified information to support decision-making)
1970 to 1980 - DSS (Decision Support Systems: interactive ad hoc support of the managerial decision-making process)
1980s and 1990s - Strategic End User Support:
- End User Computing Systems: direct computing support for end user productivity and work group collaboration
- Executive Information Systems (EIS): vital information for top management
- Expert Systems (ES): knowledge-based expert advice for end users
- Strategic Information Systems (SIS): strategic products and services for competitive advantage
2000s - Enterprise and Global Internetworking (end user, enterprise, and interorganizational computing, communications and collaboration, including global operations and management on the internet, intranets, extranets and other enterprise and global networks)

Evolution of Management Information Systems

Information systems are as old as recorded human history. The earliest known use of an information system was in a Sumerian temple in the third millennium BC: the Sumerians used clay tablets for recording receipts and issues of grains to individuals out of the temple grain store. An information system generates information using data. If the information system generates information useful for managers in planning and control, the whole system is called a Management Information System. Management information is reported on an exceptional basis for managerial decision-making or action.

The evolution of MIS and its fast growth in the last few decades can be attributed to the following factors:
- Growth of management theory and techniques.
- Growth of management accounting and its applications in business.
- Changes in production and distribution methods, and the consequent changes in organizational structure.
- Development of management science.
- Introduction of the computer in business data processing, and the developments in information technology.

Another way to study the evolution of MIS is to look at the various application subsystems forming a part of it. Nolan's stage theory is one that looks at the growth stages of MIS in firms. In 1968, Gary Dickson proposed a model of information system development based on the organization structure and its information needs. He categorized application systems into:
- Clerical systems
- Information systems
- Decision support systems
- Programmed systems.
These application systems constitute the MIS.

Evolution of Various Computer-Based Information Systems:

1. Electronic Data Processing - EDP systems, including transaction processing, record keeping, accounting and other EDP applications (1950s to 1960s)
2. Management Reporting - Management information systems (MIS): management reports of prespecified information to support decision making (1960s to 1970s)
3. Decision Support - Decision support systems (DSS): interactive ad hoc support of the managerial decision-making process (1970s to 1980s)
4. Strategic End User Support (SEUS) - End user computing systems: direct computing support for end user productivity and work group collaboration; Executive information systems: vital information for top management; Expert systems: knowledge-based expert advice for end users; Strategic information systems: strategic products and services for competitive advantage (1980s to 1990s)
5. Enterprise and Global Internetworking - Internetworked information systems (1990s to 2000s)

HOW TO ANALYSE A BIS (BUSINESS INFORMATION SYSTEM) PROBLEM

IS problems in the business world represent a combination of management, organization and technology issues. A six-step process for analyzing a business problem involving information systems is as follows:
- Identify the problem
- Analyze and select the best solution to the problem from among the alternatives
- Analyze the value provided to the firm by this solution
- Identify the technologies to be used to generate the solution
- Identify the changes to organizational processes that will be required by the solution
- Identify the right management policy that will be required to implement the solution.

WHY INFORMATION SYSTEMS?

ORGANIZATION

The key elements of an organization are its:

Structure - An organization coordinates work through a structured hierarchy in which people are arranged in a pyramid of rising authority and responsibility.

Standard Operating Procedures (SOPs) - SOPs are formal rules for accomplishing tasks that have been developed to cope with expected situations.

People - Organizations require many kinds of people and skills. In addition to managers, knowledge workers (engineers, architects, etc.) design products or services and create new knowledge, and data workers (secretaries, clerks, etc.) process the organization's paperwork. Production or service workers (machinists, packers, etc.) actually produce the organization's products or services.

Politics - Different levels and specialties in an organization create different interests and points of view, which lead to conflict. Conflict is the basis for organizational politics.

Culture - Each organization has a unique culture, a fundamental set of assumptions, values and ways of doing things, that has been accepted by most of its members.

The major business functions, or specialized tasks performed by business organizations, consist of sales and marketing, manufacturing, finance, accounting, and human resources.

MANAGEMENT

Management's job is to make sense out of the many situations faced by organizations, make decisions, and formulate action plans to solve organizational problems. Managers perceive business challenges in the environment; they set the organizational strategy and coordinate the work. It is important to note that managerial roles and decisions vary at different levels of the organization. Senior managers make long-range strategic decisions about what products and services to produce. Middle managers carry out the programs and plans of senior management. Operational managers are responsible for monitoring the firm's daily activities.

TECHNOLOGY

Information technology is one of many tools managers use to cope with change. Computer hardware is the physical equipment used for input, processing, and output activities in an information system. Computer software consists of the detailed programmed instructions that control and coordinate the computer hardware components in an information system. Storage technology includes both the physical media for storing data, such as magnetic tape or optical disk, and the software governing the organization of data on these physical media.

Communication technology, consisting of both physical devices and software, links the various pieces of hardware and transfers data from one physical location to another. A network links two or more computers to share data or resources, such as a printer. The information technology (IT) infrastructure consists of the computer hardware, software, data and storage technology, and networks providing a portfolio of shared information technology resources for the organization. The IT infrastructure provides the foundation, or platform, on which the firm can build its specific information systems. Information architecture is the particular form that information technology takes in an organization to achieve selected goals or functions. It is a design for the firm's key business application systems and the specific ways in which they are used by the organization.

DIGITAL FIRM


Intensive use of information technology in business firms, coupled with equally significant organizational redesign, has created the conditions for a new phenomenon in industrial society: the fully digital firm. A digital firm is one where nearly all of the organization's significant business relationships with customers, suppliers, and employees are digitally enabled and mediated.

EMERGING DIGITAL FIRM

Digital firms sense and respond to their business environments far more rapidly than traditional firms, giving them more flexibility to survive in turbulent situations. By digitally enabling and streamlining their work, digital firms have the potential to achieve unprecedented levels of profitability and competitiveness. There are four major systems that help the digital firm, namely:

Supply Chain Management Systems automate the relationship between a firm and its suppliers in order to optimize the planning, sourcing, manufacturing, and delivery of products and services.

Customer Relationship Management Systems attempt to develop a coherent, integrated view of all the relationships a firm maintains with its customers.

Enterprise Systems create an integrated enterprise-wide information system to coordinate the key internal processes of the firm, integrating data from manufacturing and distribution, finance, sales, and human resources.

Knowledge Management Systems support the creation, capture, storage, and dissemination of firm expertise and knowledge.

The emerging digital firm rests on:

Electronic commerce - The process of buying and selling goods and services electronically, involving transactions that use the internet, networks, and other digital technologies.

Electronic business - The use of internet and digital technology to execute all the business processes in the enterprise. It includes e-commerce as well as processes for the internal management of the firm and for coordination with suppliers and other business partners.

Digital markets - Information systems that link buyers and sellers to exchange information, products, services and payments.

FIVE KEY MANAGEMENT ISSUES OF THE DIGITAL FIRM

The Strategic Business Challenge: How can businesses use information technology to become competitive, effective, and digitally enabled?

The Globalization Challenge: How can firms understand the business and system requirements of a global economic environment?

The Information Architecture and Infrastructure Challenge: How can organizations develop an information architecture and information technology infrastructure that can support their goals when business conditions and technologies are changing so rapidly? (Information architecture is the particular form that IT takes in an organization to achieve selected goals or functions.)

The IS Investment Challenge: How can organizations determine the business value of information systems?

The Responsibility and Control Challenge: How can organizations ensure that their information systems are used in an ethically and socially responsible manner?

MIS - AN EVOLVING CONCEPT

When the concept of MIS was first introduced, many proponents thought a single, highly integrated system would bring together processing for all organizational subsystems. This was demonstrated to be too complex to implement. The current concept is a federation of subsystems, developed and implemented as needed, but conforming to the overall plan, standards and procedures for the MIS. MIS as a concept continues to evolve. Related concepts that have contributed to it are:
- Decision Support Systems (DSS)
- Information Resources Management (IRM)
- End-user computing.

FUNCTIONS OF MIS

An MIS is used to collect data, store and process data, and present information to managers. MIS is a combination of computers and procedures for providing information that managers use in making decisions. The functions of MIS are:
- Collect data
- Store and process data
- Present information to managers

INFORMATION SYSTEM LEVELS

Information systems meet not only operational needs, but also three levels of management needs. Both vertical and horizontal integration exist among all four information levels: the operational level, lower management, middle management and top management.

HORIZONTAL INTEGRATION

Horizontal integration may occur within or between major systems. For ex: a personnel/payroll system shared by the finance and HR functions, based on the employee-related data elements common to both personnel and payroll.

VERTICAL INTEGRATION


Vertical integration of an information system within production occurs as follows:
- Functional: machine assignment
- Supervisory: machine scheduling
- Tactical: make-or-buy decisions
- Strategic: new product design

FEEDBACK AND CONTROL

Two elements are essential to the design of any management system. Control is the process of comparing an actual output with a desired output for the purpose of improving the performance of a system. Feedback is the action taken to bring the difference between an actual output and a desired output within an acceptable range.
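Using the definitions above (control compares actual with desired output; feedback acts on the difference), a minimal control loop can be sketched in Python. The target, tolerance and correction rule are assumptions made for illustration:

desired_output = 100    # the standard the system should meet
actual_output = 82      # measured output of the process
tolerance = 5           # acceptable range around the standard

# Control: compare the actual output with the desired output.
difference = desired_output - actual_output

# Feedback: act on the difference to bring it within the acceptable range.
if abs(difference) > tolerance:
    adjustment = difference          # naive corrective action
    actual_output += adjustment
    print(f"Corrected output by {adjustment}; now {actual_output}")
else:
    print("Output within acceptable range; no action needed")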

OPERATING ELEMENTS OF AN INFORMATION SYSTEM

The operating elements of an IS are:

Physical components, which include:
- Hardware - machines and media
- Software - system and application software
- Database - collection of data
- Standard Operating Procedures - manual or instruction booklet
- Operating personnel - programmers, system analysts, computer operators, etc.

Processing functions, which include:
- Processing transactions
- Maintaining master files
- Producing reports
- Processing inquiries
- Processing interactive support applications

Outputs for users - The user of an MIS provides inputs and receives outputs of the following types:
- Transaction documents
- Preplanned reports
- Preplanned inquiry responses
- Ad hoc reports and inquiry responses
- User-machine dialog.

DATA RESOURCE MANAGEMENT

Data resource management (DRM) is a managerial activity that applies IS technology and management tools to the task of managing an organization's data resources. Its major components are:

Database Administration - a DRM function that includes responsibility for developing and maintaining the organization's data, designing and monitoring the performance of databases, and enforcing security.

Data Administration - a DRM function that involves the establishment and enforcement of policies and procedures for managing data.

Data Planning - a corporate planning and analysis function that focuses on DRM. It helps in developing the overall information policy and data architecture for the firm's data resources.

Data Dictionary - a software module and database containing descriptions and definitions of the organization's databases: their structure, data elements, inter-relationships and other characteristics.

DATABASE PROCESSING

A database is a self-describing collection of integrated records. It contains a directory or dictionary of its contents. The records are integrated because a database contains multiple files, and the records within those files are processed in relationship to one another. Database processing is the use of a database for data processing activities such as maintenance, information retrieval or report generation. It is the execution of a systematic sequence of operations performed upon data to transform it into information. A data processing system processes transactions and produces reports; it represents the automation of fundamental, routine processing to support operations.
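The self-describing property is visible in any relational DBMS. Here is a sketch using Python's built-in sqlite3 module (the table and column names are invented): the database's own catalog, sqlite_master, lists its contents, and a join processes records from two files in relationship to one another:

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("CREATE TABLE clients (client_number INTEGER, name TEXT)")
conn.execute("CREATE TABLE loans (loan_id TEXT, client_number INTEGER, amount REAL)")

# The database describes itself: its catalog lists its own contents.
for (table,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print("Table:", table)

# Database processing: a systematic sequence of operations on the data.
conn.execute("INSERT INTO clients VALUES (1, 'R. Begum')")
conn.execute("INSERT INTO loans VALUES ('L001', 1, 5000.0)")
for row in conn.execute("""SELECT c.name, l.amount FROM loans l
                           JOIN clients c ON c.client_number = l.client_number"""):
    print(row)   # records from two files, related to one another
conn.close()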

INFORMATION

Information is organized data that has been arranged for better comprehension, understanding and/or retrieval. What is one person's information can become another person's data. It is the translation of a given set of data or items into a set of quantitative or qualitative symbols. Information reduces uncertainty and triggers action. Information is data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in current or prospective actions or decisions.

Types of Information

Information may be classified, on the basis of the purpose for which it is utilized, into three main categories, namely:

Strategic information is required by managers at the strategic level of management for the formulation of organizational strategies. It relates to the long-term planning policies of the organization as a whole. For ex: information pertaining to new products, new technologies, competitors, etc.

Tactical information is used in short-term planning and is of use at the management control level. This type of information is generally based on data arising from the current activities of the organization. For ex: sales analyses and forecasts, production resource requirements, annual financial statements, etc.

Operational information applies to short periods, which may vary from an hour to a few days. It is required for taking immediate action and usually deals with current activity data. For ex: current stocks-in-hand, work-in-progress levels, outstanding orders from customers, etc.

Attributes of Information

Quality of information refers to its fitness for use, or its reliability. Some of the attributes of information which influence its quality are:

Timeliness - Information must reach the recipients within the prescribed timeframe.

Accuracy - Information must be free from mistakes and errors, be clear, and accurately reflect the meaning of the data on which it is based.

Relevance - Information is relevant only if it answers specifically, for the recipient, the questions what, why, where, when, who and how.

Adequacy - Information must be sufficient in quantity for the decision-making processes it is required to support.

Completeness - Information provided to a manager must be complete and should meet all his needs. Incomplete information may result in wrong decisions, which may prove costly to the organization.

Explicitness - Information received by a manager should be presented in such a way that he does not waste any time processing the report; rather, he should be able to extract the required information directly.

Exception-based reporting - Reporting only exceptions saves the precious time of top management and enables the management to devote more time to the pursuit of alternatives for the growth of the organization.

Utility - Utilities of information, which may facilitate or retard its use, are:
- Form utility - Information should closely match the requirements of the user.
- Time utility - Information, if available when needed, has a greater value.
- Place utility - The value of information is greater if it can be accessed or delivered easily.
- Possession utility - The person who has the information influences its value by controlling its dissemination to others in the organization.

Dimensions of Information

Information may be understood to have the following three dimensions, namely:

(i) Economic dimension refers to the cost of information and its benefits. Generating information costs money, and measuring the costs and benefits of information is difficult because of its intangible characteristics. The costs of information include: the cost of acquiring data, the cost of maintaining data, the cost of generating information and the cost of communicating information. The cost is related to the response time required to generate the information and communicate it. The value of information is the value of the change in decision behavior caused by the information; the change in behavior due to the new information is measured to determine the benefits from its use. To arrive at the value of new information, the cost incurred to get this information is deducted from the benefits (a worked sketch follows this list).

(ii) Business dimension - Different types of information are required by managers at different levels of the management hierarchy. The information needs of managers at the strategic planning level are altogether different from those of operational control managers, because managers at different levels are required to perform different functions in an organization.

(iii) Technical dimension refers to the technical aspects of the database. The various aspects of the database, including its capacity, response time, security, validity and data interrelationships, are required for the design of information.
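A worked sketch of the economic dimension's cost-benefit arithmetic, in Python; every figure below is invented for illustration:

# Benefit: the value of the change in decision behavior caused by
# the new information (e.g., better stocking decisions).
benefit_of_changed_decisions = 250_000   # hypothetical annual benefit

cost_of_information = (
      40_000    # cost of acquiring data
    + 25_000    # cost of maintaining data
    + 30_000    # cost of generating information
    + 15_000    # cost of communicating information
)

# Value of new information = benefits minus the cost incurred to get it.
value_of_information = benefit_of_changed_decisions - cost_of_information
print(f"Net value of the new information: {value_of_information}")  # -> 140000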

The underlying philosophy behind Information Resource Management (IRM) is to design, inventory and control all of the resources required to produce information. When standardized and controlled, these resources can be shared and re-used throughout the corporation, not just by a single user or application. There are three classes of information resources:

BUSINESS RESOURCES - Enterprises, Business Functions, Positions (Jobs), Human/Machine Resources, Skills, Business Objectives, Projects, and Information Requirements.


SYSTEM RESOURCES - Systems, Sub-Systems (business processes), Administrative Procedures (manual procedures and office automation related), Computer Procedures, Programs, Operational Steps, Modules, and Subroutines.

DATA RESOURCES - Data Elements, Storage Records, Files (computer and manual), Views, Objects, Inputs, Outputs, Panels, Maps, Call Parameters, and Data Bases.

These three classes of information resources provide the rationale for the three complementary methodologies within "PRIDE":

ENTERPRISE ENGINEERING METHODOLOGY (EEM) - for defining the mission and goals of the business and for developing an enterprise information strategy synchronized with the business.

INFORMATION SYSTEMS ENGINEERING METHODOLOGY (ISEM) - for designing and building enterprise-wide information systems (business processes crossing organizational boundaries). Software engineering is considered a subset of ISEM.

DATA BASE ENGINEERING METHODOLOGY (DBEM) - for designing and developing the corporate data base, both logically and physically.

Each methodology consists of a series of defined phases, activities and operations. Laced throughout the methodologies are defined deliverables and review points to substantiate completeness and to provide an effective dialog between management and developers. The methodologies promote design correctness and the production of a quality product. One of the important by-products of cataloging and cross-referencing information resources is a model of the enterprise, including how it is organized and how it operates. Other benefits include:

- All information resources are controllable, permitting the design of integrated systems and the performance of an "impact analysis" of a proposed resource change.
- Simplified searching of information resources for reuse.
- Redundancy of resource definition is eliminated.
- Complete and current documentation of all information resources, in an organized and meaningful way.
- Communication within the organization is improved, since developers and users use standard and common definitions for information resources, all expressed in standard business terminology.

The techniques of Information Resource Management are derived from fields that have long been associated with information systems. These can be listed as follows:

- Database design and development, derived from computer science.

- Classification of data and information retrieval, derived from library and information science.
- Document life cycle, derived from records management.

- Information systems and technology audits, derived from other audit disciplines (finance, communication, energy, etc.) and from organizational psychology.
- Cost-benefit analysis and valuation of the information resource, derived from finance and business management.

Information resource management has become a popular way to emphasize a major change in the management and mission of the information systems function in many organizations. IRM is a management concept that views data, information, and computer resources (computer hardware, software, and personnel) as valuable organizational resources that should be efficiently, economically, and effectively managed for the benefit of the entire organization. In many organizations, IRM may be viewed as having five major dimensions: (1) strategic management, (2) resource management, (3) functional management, (4) technology management,

(5) distributed management.

Strategic Management - Information technology must be managed so that it contributes to the firm's strategic objectives and competitive advantages, not just to operational efficiency or decision making.

Functional Management - Information technology and information systems can be managed by functional organizational structures and managerial techniques commonly used throughout other business units.

Resource Management: Data and information, hardware and software, telecommunications networks, and IS personnel are vital organizational resources that must be managed like other business assets.

Technology Management: All technologies that process, store, and communicate data and information throughout the enterprise should be managed as integrated systems of organizational resources.

Distributed Management: Managing the use of information technology and information system resources in business units or workgroups is a key responsibility of their managers, no matter what their function or level in the organization.

INTELLIGENCE: Intelligence (abbreviated int. or intel.) refers to discrete or secret information with currency and relevance, and the abstraction, evaluation, and understanding of such information for its accuracy and value. Sometimes called "active data" or "active intelligence", intelligence typically regards the current plans, decisions, and actions of people, as these may have urgency or may otherwise be considered "valuable" from the point of view of the intelligence-gathering entity. Depending on the national policy, some intelligence agencies engage in clandestine and covert activities beyond espionage such as political subversion, sabotage and assassination. Other agencies strictly limit themselves to analysis, or collection and analysis; some governments have other organizations for covert action.

Military intelligence is an element of warfare which covers all aspects of gathering, analyzing, and making use of information, including information about the natural environment, enemy forces, and terrain (Shulsky and Schmitt, 2002). It involves spying, look-outs, high-tech surveillance equipment, and also secret agents. Business intelligence denotes the public or secret information that an organization obtains about its competitors and markets. See also data warehousing.

Intelligence as used here, when done properly, serves a function for organizations similar to that which intelligence (the trait) serves for individual humans and animals.

Well-known national intelligence organizations: India

Research and Analysis Wing (RAW)
Intelligence Bureau (IB)

Research and Analysis Wing (R&AW or RAW) is India's external intelligence agency. It was formed in September 1968, after India had faced two consecutive wars, the Sino-Indian war of 1962 and the Indo-Pakistani war of 1965, as it was evident that a credible intelligence gathering setup was lacking. Its primary functions are the collection of external intelligence, counter-terrorism and covert operations.

Intelligence Bureau (India): The Intelligence Bureau is India's internal intelligence agency and reputedly the world's oldest intelligence agency.[1] It was recast as the Central Intelligence Bureau in 1947 under the Ministry of Home Affairs.

Intelligence cycle management: The intelligence cycle is an investigative process used by end users (the commander of a task force or the supervisor of an investigation unit), which allows that user to gather specific

information, understand the possibilities of that information, and the limitations of the intelligence process. Within the context of government, military and business affairs, intelligence (the gathering and analysis of accurate, reliable information) is intended to help decision-makers at every level to make informed decisions.

The intelligence cycle is the continuous process by which:
a) Intelligence priorities are set,
b) Raw information is collected,
c) This information is analyzed,
d) The processed information is disseminated, and
e) The next set of priorities is determined.
Sub-cycles also exist: e.g., an analyst (c) may require more information (b). The related field of counterintelligence is tasked with impeding the intelligence efforts of others.

An intelligence "consumer" might be:
1. An infantry officer who wants to know what is on the other side of the next hill, or
2. A head of government who wants to know the probability that a foreign leader will go to war over a certain point, or
3. A marketing executive who wants to know what his or her competitors are planning.
Intelligence organizations are not, nor can they be expected to be, infallible (intelligence reports are often referred to as "estimates", and often include measures of confidence and reliability), but when properly managed and tasked, they can be among the most valuable tools of management and government.
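The feedback built into the cycle, where an analyst at step (c) may send the process back to collection at step (b) before anything is disseminated, can be sketched as a simple loop. The sketch below is purely illustrative and not part of the original material; the priority names and helper functions are hypothetical.

# A minimal, purely illustrative sketch of the intelligence cycle's feedback:
# analysis (c) may send the cycle back to collection (b) before dissemination (d).
def collect(priorities):
    return [f"raw report on {p}" for p in priorities]

def analyze(raw):
    # Pretend the analyst needs at least two reports before concluding.
    complete = len(raw) >= 2
    return "; ".join(raw), complete

priorities = ["competitor pricing"]        # a) priorities are set
for round_no in range(1, 4):
    raw = collect(priorities)              # b) raw information is collected
    report, complete = analyze(raw)        # c) the information is analyzed
    if not complete:                       # sub-cycle: analyst asks for more
        priorities.append("distribution channels")
        continue
    print("disseminated:", report)         # d) processed info is disseminated
    priorities = ["market entry plans"]    # e) next priorities are determined
    break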

The Nine Types of Intelligence (by Howard Gardner)
If distinct parts of the brain perform distinct functions, then we can expect to find several different types of mental ability (or disability) corresponding to strengths (or weaknesses) in specialized areas. For example, tone deafness is what neuropsychologists would call a selective deficit: a problem with a particular area or skill center in the brain. A person with tone deafness cannot "carry a tune" because he or she literally does not hear melody like other people. When the same skill center is highly developed, the result is an acute sensitivity to melody and sometimes perfect pitch (the ability to identify the pitch of notes played in isolation, without additional cues). A person can be a selective genius (a savant) or a person can have a selective deficit (a missing or defective skill), depending on how well various parts of the brain function. Studies of savants and brain-damaged patients helped to lead researcher Howard Gardner to a theory of multiple kinds of intelligence, described in his book Frames of Mind (1983).

1. Naturalist Intelligence (Nature Smart): Designates the human ability to discriminate among living things (plants, animals) as well as sensitivity to other features of the natural world (clouds, rock configurations). This ability was clearly of value in our evolutionary past as hunters, gatherers, and farmers; it continues to be central in such roles as botanist or chef. It is also speculated that much of our consumer society exploits the naturalist intelligence, which can be mobilized in the discrimination among cars, sneakers, kinds of makeup, and the like.

2. Musical Intelligence (Musical Smart): Musical intelligence is the capacity to discern pitch, rhythm, timbre, and tone. This intelligence enables us to recognize, create, reproduce, and reflect on music, as demonstrated by composers, conductors, musicians, vocalists, and sensitive listeners. Interestingly, there is often an affective connection between music and the emotions; and mathematical and musical intelligences may share common thinking processes. Young adults with this kind of intelligence are usually singing or drumming to themselves. They are usually quite aware of sounds others may miss.

3. Logical-Mathematical Intelligence (Number/Reasoning Smart): Logical-mathematical intelligence is the ability to calculate, quantify, consider propositions and hypotheses, and carry out complete mathematical operations. It enables us to perceive relationships and connections and to use abstract, symbolic thought; sequential reasoning skills; and inductive and deductive thinking patterns. Logical intelligence is usually well developed in mathematicians, scientists, and detectives. Young adults with lots of logical intelligence are interested in patterns, categories, and relationships. They are drawn to arithmetic problems, strategy games and experiments.

4. Existential Intelligence: Sensitivity and capacity to tackle deep questions about human existence, such as the meaning of life, why we die, and how we got here.

5. Interpersonal Intelligence (People Smart): Interpersonal intelligence is the ability to understand and interact effectively with others. It involves effective verbal and nonverbal communication, the ability to note distinctions among others, sensitivity to the moods and temperaments of others, and the ability to entertain multiple perspectives. Teachers, social workers, actors, and politicians all exhibit interpersonal intelligence. Young adults with this kind of intelligence are leaders among their peers, are good at communicating, and seem to understand others' feelings and motives.

6. Bodily-Kinesthetic Intelligence (Body Smart): Bodily-kinesthetic intelligence is the capacity to manipulate objects and use a variety of physical skills. This intelligence also involves a sense of timing and the perfection of skills through mind-body union. Athletes, dancers, surgeons, and craftspeople exhibit well-developed bodily-kinesthetic intelligence.

7. Linguistic Intelligence (Word Smart): Linguistic intelligence is the ability to think in words and to use language to express and appreciate complex meanings. Linguistic intelligence allows us to understand the order and meaning of words and to apply meta-linguistic skills to reflect on our use of language. Linguistic intelligence is the most widely shared human competence and is evident in poets, novelists, journalists, and effective public speakers. Young adults with this kind of intelligence enjoy writing, reading, telling stories or doing crossword puzzles.

8. Intra-personal Intelligence (Self Smart): Intra-personal intelligence is the capacity to understand oneself and one's thoughts and feelings, and to use such knowledge in planning and directing one's life. Intra-personal intelligence involves not only an appreciation of the self, but also of the human condition. It is evident in psychologists, spiritual leaders, and philosophers. These young adults may be shy. They are very aware of their own feelings and are self-motivated.

9. Spatial Intelligence (Picture Smart): Spatial intelligence is the ability to think in three dimensions. Core capacities include mental imagery, spatial reasoning, image manipulation, graphic and artistic skills, and an active imagination. Sailors, pilots, sculptors, painters, and architects all exhibit spatial intelligence. Young adults with this kind of intelligence may be fascinated with mazes or jigsaw puzzles, or spend free time drawing or daydreaming.

BUSINESS INTELLIGENCE
Business intelligence is a general term used to refer to a number of activities a company may undertake to gather information about its market or its competitors. Some areas often included under the blanket heading of business intelligence are: competition analysis, market analysis, and industry analysis. Some people also consider industrial espionage that operates for information-gathering purposes to be a form of business intelligence. Business intelligence tools are a type of application software designed to report, analyze and present data. The tools generally read data that have been previously stored, often, though not necessarily, in a data warehouse or data mart.

Types of business intelligence tools
The key general categories of business intelligence tools are:

Spreadsheets
Reporting and querying software: tools that extract, sort, summarize, and present selected data
OLAP

Online analytical processing, or OLAP (pronounced /ˈoʊlæp/), is an approach to quickly answering multi-dimensional analytical queries.[1] OLAP is part of the broader category of business intelligence, which also encompasses relational reporting and data mining.[2] The typical applications of OLAP are in business reporting for sales, marketing, management reporting,

business process management (BPM)[3], budgeting and forecasting, financial reporting and similar areas. The term OLAP was created as a slight modification of the traditional database term OLTP (Online Transaction Processing).[4] Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad-hoc queries with a rapid execution time. They borrow aspects of navigational databases and hierarchical databases that are faster than relational databases.[5] The output of an OLAP query is typically displayed in a matrix (or pivot) format. The dimensions form the rows and columns of the matrix; the measures form the values.
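To make the matrix (pivot) output concrete, here is a minimal, hypothetical sketch of an OLAP-style pivot using the pandas library; the dataset and column names are invented for illustration, and a real OLAP server would answer such queries over a multidimensional cube rather than an in-memory table.

# A minimal sketch of an OLAP-style pivot: the dimensions (region, quarter)
# form the rows and columns of the matrix; the measure (revenue) forms the values.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "revenue": [120, 135, 90, 60, 110],
})

cube = sales.pivot_table(index="region", columns="quarter",
                         values="revenue", aggfunc="sum")
print(cube)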

Digital Dashboards
In management information systems, a dashboard is an executive information system user interface that (similar to an automobile's dashboard) is designed to be easy to read. For example, a product might obtain information from the local operating system in a computer, from one or more applications that may be running, and from one or more remote sites on the Web, and present it as though it all came from the same source.
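The idea of presenting data from several places as though it came from one source can be sketched as follows; the source functions below are hypothetical stand-ins, not a real dashboard API.

# A minimal sketch of a digital dashboard aggregating several (hypothetical)
# sources into one easy-to-read view.
def cpu_load():          # would come from the local operating system
    return 0.42

def open_orders():       # would come from a running application
    return 17

def fx_rate():           # would come from a remote site on the web
    return 83.2

def render_dashboard():
    widgets = {"CPU load": cpu_load(), "Open orders": open_orders(),
               "USD/INR": fx_rate()}
    for name, value in widgets.items():   # presented as though from one source
        print(f"{name:12} {value}")

render_dashboard()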

Data mining
Data mining is the process of extracting patterns from data. It is becoming an increasingly important tool for transforming large volumes of data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.

Data mining can be used to uncover patterns in data but is often carried out only on samples of data. The mining process will be ineffective if the samples are not a good representation of the larger body of data. Data mining cannot reveal patterns that may be present in the larger body of data if those patterns are not present in the sample being "mined". Inability to find patterns may become a cause for some disputes between customers and service providers. Data mining is therefore not foolproof, but it may be useful if sufficiently representative data samples are collected. The discovery of a particular pattern in a particular set of data does not necessarily mean that the pattern holds elsewhere in the larger data from which that sample was drawn. An important part of the process is the verification and validation of patterns on other samples of data.
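A toy illustration (not from the original text, and with invented figures) of why the sample must represent the larger body of data:

# A pattern present in the full data can vanish in a badly chosen sample.
import random

random.seed(1)
# Full body of data: 70% of northern customers buy the product, 30% of southern.
full = [("North", random.random() < 0.7) for _ in range(1000)] + \
       [("South", random.random() < 0.3) for _ in range(1000)]

def buy_rate(rows, region):
    hits = [bought for r, bought in rows if r == region]
    return sum(hits) / len(hits)

sample_good = random.sample(full, 400)                           # representative
sample_bad = [row for row in full if row[0] == "South"][:400]    # skewed

print("full data, North buy rate:  ", round(buy_rate(full, "North"), 2))
print("good sample, North buy rate:", round(buy_rate(sample_good, "North"), 2))
# The skewed sample contains no northern rows at all, so the North/South
# pattern simply cannot be "mined" from it.
print("bad sample has North rows:  ", any(r == "North" for r, _ in sample_bad))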

Process mining
Process mining is a process management technique that allows for the analysis of business processes based on event logs. The basic idea is to extract knowledge from event logs recorded by an information system. Process mining aims to improve process understanding by providing techniques and tools for discovering process, control, data, organizational, and social structures from event logs.

Process mining techniques are often used when no formal description of the process can be obtained by other means, or when the quality of an existing documentation is questionable. For example, the audit trails of a workflow management system, the transaction logs of an enterprise

resource planning system, and the electronic patient records in a hospital can be used to discover models describing processes, organizations, and products. Moreover, such event logs can also be compared with some a priori model to see whether the observed reality conforms to a prescriptive or descriptive model. Contemporary management trends such as BAM (Business Activity Monitoring), BOM (Business Operations Management) and BPI (Business Process Intelligence) illustrate the interest in supporting the diagnosis functionality in the context of Business Process Management technology (e.g., Workflow Management Systems, but also other process-aware information systems).
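One of the simplest building blocks of process discovery is the "directly-follows" relation extracted from an event log. The sketch below is a hypothetical illustration of that idea, not a real process mining tool; the log and activity names are invented.

# Build the directly-follows relation per case from a tiny event log.
from collections import defaultdict

# Hypothetical event log: (case id, activity), already ordered by time.
log = [(1, "register"), (1, "check"), (1, "approve"),
       (2, "register"), (2, "check"), (2, "reject"),
       (3, "register"), (3, "approve")]

traces = defaultdict(list)
for case, activity in log:
    traces[case].append(activity)

follows = defaultdict(int)
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        follows[(a, b)] += 1          # count each directly-follows pair

for (a, b), n in sorted(follows.items()):
    print(f"{a} -> {b}: {n}")         # the edges of a discovered process graph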

Business performance management

Local information systems
The term Local Information System (LIS) has emerged over the last five years, primarily in the UK public sector. To date it is not widely used elsewhere, although other terms like Community Information Systems apply to solutions, primarily in North America, that have a great deal of overlap. Another widely used and largely synonymous term is Data Observatory. Data Observatory is the more widely used term internationally, particularly within the area of public health, where sites that include this type of statistical reporting application are often termed health observatories.

Except for spreadsheets, these tools are sold as standalone tools, suites of tools, components of ERP systems, or as components of software targeted to a specific industry. The tools are sometimes packaged into data warehouse appliances.

Business Intelligence System
Does your business intelligence system provide the answers you need? Like most enterprises, your company has probably invested heavily in a business intelligence system. Intended to make information a tool for competitive gain, these ubiquitous applications offer many benefits, and a few limitations. As long as organization and delivery of information is your primary goal, the business intelligence system will likely satisfy your needs. It's when it comes to reporting that many solutions fall flat. But reporting is the real key to success in exploiting information for advantage. When reporting tools can't get information to the right people at the right time, decisions are either delayed or made without confidence. To be effective, your business intelligence system must contain an analytic component that allows you to explore and interpret data quickly. For thousands of enterprise users in a variety of industries and levels of responsibility, Spotfire provides indispensable analytic applications.

COMPUTER SYSTEMS ANALYST
Analyze science, engineering, business, and all other data processing problems for application to electronic data processing systems. Analyze user requirements, procedures, and problems to automate or improve existing systems, and review computer system capabilities, workflow, and scheduling limitations. May analyze or recommend commercially available software. May supervise computer programmers.

OR

What do computer systems analysts do?
A systems analyst is responsible for researching, planning, coordinating and recommending software and system choices to meet an organization's business requirements. The systems analyst plays a vital role in the systems development process. A successful systems analyst must acquire four skills: analytical, technical, managerial, and interpersonal. Analytical skills enable systems analysts to understand the organization and its functions, which helps them to identify opportunities and to analyze and solve problems. Technical skills help systems analysts understand the potential and the limitations of information technology. The systems analyst must be able to work with various programming languages, operating systems, and computer hardware platforms. Management skills help systems analysts manage projects, resources, risk, and change. Interpersonal skills help systems analysts work with end users as well as with analysts, programmers, and other systems professionals.

Systems analysts may act as liaisons between vendors and the organization they represent. They may be responsible for developing cost analyses, design considerations, and implementation time-lines. They may also be responsible for feasibility studies of a computer system before making recommendations to senior management.

BASIC FUNCTION: Provides analysis, design, configuration, testing, implementation, documentation and staff training for software that includes or supports operating systems, file and application servers, databases and network environments as it applies to Academic Information Systems. Essential job functions also include all tasks assigned to the Analyst/Programmer classification.

Basically, a systems analyst performs the following tasks:
* Interact with the customers to know their requirements
* Interact with designers to convey the possible interface of the software
* Interact with and guide the coders/developers to keep track of system development
* Perform system testing with sample/live data with the help of testers
* Implement the new system

* Prepare high-quality documentation

Responsibilities:
Provide technical expertise and recommendations in assessing new IT software projects and initiatives to support and enhance our existing Microsoft-based systems.
Make recommendations on custom applications, which include a number of MS-Access data capture systems for Stewardship and other databases which need to be moved into a central SQL repository.
Identify opportunities that can improve efficiency of business processes.
Investigate and resolve application functionality related issues and provide first level support and troubleshooting of our Financial Edge and Raiser's Edge systems.
Coordinate application development for multiple projects.
Assist in troubleshooting software application issues.
Assist in managing an outsource relationship for 3rd party application development and programming consultants.
Assist the network administrator with application installation and testing.
Troubleshoot technical issues and identify modifications needed in existing applications to meet changing user requirements.
Analyze data contained in the corporate database, identify data integrity issues with existing and proposed systems, and implement solutions.
Provide assistance and advice to business users in the effective use of applications and information technology.
Provide minor programming for some in-house IT projects.
Provide SQL administration in live and test environments.
Write technical procedures and documentation for the applications, including operations, user guides, etc.
Produce technical documentation for new and existing applications.
Verify database and data integrity.
Participate in weekly meetings with the IT network team to discuss progress and issues to be resolved, and report progress on a weekly basis to the CIO.
Participate on IT project steering committees and be involved in the design phase of any new IT software development projects.

Assist in the creation of the system design and functional specifications for all new development projects.
Serve as a liaison and facilitator between all business units to assist in addressing and resolving IT software issues.

Qualifications:
Extensive knowledge of data processing, hardware platforms, and enterprise software applications.
Technical experience with systems networking, databases, Web development, and user support.
Good background in database design in Microsoft SQL and Access.
Background in Microsoft .NET, Visual Basic, Excel, Word, Outlook and HTML.
Good working knowledge of Microsoft Office products, Microsoft Visio, and Microsoft Project.
Strong project management skills with an effective results focus within an information systems environment.
Strong analytical and problem solving skills.

UNIT-2

SYSTEM ANALYSIS AND DESIGN

Systems Development Life Cycle
Traditionally, system development consisted of a programmer writing code to solve a problem or automate a procedure. Nowadays, systems are so big and complex that teams of architects, analysts, programmers, testers and users must work together to create the millions of lines of custom-written code that drive our enterprises.

DEFINITION
System Development Life Cycle (SDLC) is the overall process of developing information systems through a multistep process, from investigation of initial requirements through analysis, design, implementation and maintenance. A system development cycle is a systematic and orderly approach to solving system problems. The Systems Development Life Cycle is a project management technique that divides complex projects into smaller, more easily managed segments or phases. Segmenting projects allows managers to verify the successful completion of project phases before allocating resources to subsequent phases.
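The phase-gate idea just described, verifying each phase before committing resources to the next, can be sketched as a short loop. This is an illustration under assumed phase names, not a prescribed implementation.

# A minimal sketch of phase gating: each SDLC phase must be verified complete
# before resources are allocated to the next phase.
PHASES = ["investigation", "analysis", "design", "implementation", "maintenance"]

def run_sdlc(verify):
    """verify(phase) -> True when management signs off that phase's deliverables."""
    for phase in PHASES:
        print(f"entering {phase}")
        if not verify(phase):
            print(f"{phase} not approved; halting before the next phase")
            return False
    return True

# Example: everything up to design is approved, implementation is not.
run_sdlc(lambda phase: phase != "implementation")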

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems. In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system: the software development process.

Overview
Systems Development Life Cycle (SDLC) is any logical process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a high quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.[2] Computer systems have become more complex and often (especially with the advent of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of systems development life cycle (SDLC) models have been created: "waterfall"; "fountain"; "spiral"; "build and fix"; "rapid prototyping"; "incremental"; and "synchronize and stabilize". SDLC models can be described along a spectrum of agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on light-weight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as Rational Unified Process and Dynamic Systems Development Method, focus on limited project scopes and expanding or improving products by multiple iterations. Sequential or big-design-up-front (BDUF) models, such as Waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results.

Some agile and iterative proponents confuse the term SDLC with sequential or "more traditional" processes; however, SDLC is an umbrella term for all methodologies for the design, implementation, and release of software.[3][4] In project management a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".

History
The systems development lifecycle (SDLC) is a type of methodology used to describe the process for building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle. The systems development life cycle, according to Elliott & Strachan & Radford (2004), "originated in the 1960s to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".[6] Several systems development frameworks have been partly based on SDLC, such as the Structured Systems Analysis and Design Method (SSADM) produced for the UK government Office of Government Commerce in the 1980s.

PARTICIPANTS IN SDLC
Project manager
Systems analysts & designers
Database analysts & designers
Users
Programmers
Database Administrators (DBAs)
Networking experts and other technical experts.

Responsibilities of the Project Manager: Assemble the project team. Build detailed project plans. Monitor people and plan. Work with other management. Ultimately held responsible for the success of the system development project.

Responsibilities of Systems Analysts & Designers: Focus on business needs. Bridge business and technology. System functions and data. Analyst - what should be done? Designer - how should it be done? Greater technology focus.

Responsibilities of Database Analysts and Designers: Focus on business needs. Bridge business and technology. Primary focus on data requirements. Analyst - what data is needed? Designer - how should it be stored?

Responsibilities of Users: Ultimate users of the new system. Provide requirements and business needs. Review documentation. Test and accept the new system. Train other users. May represent actual users.

Responsibilities of Programmers: Design programs (detailed design). Write programs. Test programs. Write SQL for database access.

Responsibilities of Database Administrators: Ultimately responsible for databases, current and future. Provide data and modeling expertise. Provide DBMS expertise. Monitor and tune databases.

Responsibilities of Other Technical Experts: Provide expertise in specified areas:

networking, operating systems, hardware, development languages, and development methodologies and tools.

INFORMATION SYSTEM PLANNING

(Figure: typical reasons for initiating a systems development project, such as problems with the existing system and the desire to exploit new information technology.)
Establishing Objectives for Systems Development

Performance objectives:
Quality or usefulness of the output.
Quality or usefulness of the format of the output.
The speed at which the output is generated.

Cost objectives:
Development costs.
Costs related to the uniqueness of the system application.
Fixed investments in hardware and related equipment.
Ongoing operating costs of the system.


TRADITIONAL LIFE CYCLE MODEL
Software Development Life Cycle or SDLC is a model of a detailed plan on how to create, develop, implement and eventually fold the software. It is a complete plan outlining how the software will be born, raised and eventually retired from its function.


Systems investigation

Problem definition: A proper understanding and definition of the problem is essential to discover the cause of the problem and to plan a directed investigation. The existing system is evaluated and deficiencies are identified. This can be done by interviewing users of the system and consulting with support personnel.

A feasibility study is a preliminary study for an evaluation of whether it is worthwhile to proceed with a project.
Technical feasibility: Can the hardware, software, and other system components be acquired or developed to solve the problem?
Operational feasibility: Can the project be put into action or operation?
Schedule feasibility: Can the project be completed in a reasonable amount of time?
Economic feasibility: Does the project make financial sense?
Legal, political and contractual feasibility: Does the project comply with the laws, rules and regulations pertaining to the land? How will the key stakeholders view the proposed system? Have the contractual ramifications of constructing the system been assessed?
Organizational feasibility: Does the proposed information system support the objectives of the organization's strategic plan?

Methods of preliminary investigation (i.e., data collection) include structured and unstructured interviews.
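As a hypothetical illustration of the economic feasibility question above, a simple payback-period test might look like this; all figures are invented for the example, and real studies would also weigh discounting, intangibles, and risk.

# Does the project make financial sense? A simple payback-period check.
development_cost = 250_000          # one-time cost to build the system
annual_operating_cost = 20_000      # ongoing cost of running it
annual_benefit = 95_000             # savings / extra revenue per year

net_annual_benefit = annual_benefit - annual_operating_cost
payback_years = development_cost / net_annual_benefit
print(f"payback period: {payback_years:.1f} years")
# A project might be judged economically feasible if payback falls within
# the organization's required horizon (say, five years).
print("economically feasible:", payback_years <= 5)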

Systems analysis is a detailed study of the various operations of a business activity along with its boundaries. The new system requirements are defined. In particular, the deficiencies in the existing system must be addressed with specific proposals for improvement. It describes WHAT the system should do to meet the requirements of the users. It involves a detailed study of:
The information needs of the organization and its end users.
Existing information systems (their activities, resources and products).
The expected information system (in terms of the capabilities of the IS required to meet the information needs of the users).

Systems design refers to the technical specification that will be applied in constructing the system. The proposed system is designed. Plans are laid out concerning the physical construction, hardware, operating systems, programming, communications, and security issues. The new system is developed. The new components and programs must be obtained and installed. Users of the system must be trained in its use, and all aspects of performance must be tested. If necessary, adjustments must be made at this stage. It describes HOW the system will accomplish the needs of the users. It focuses on three activities:
- User interface
- Data design and
- Process design.

Systems implementation involves hardware and software acquisition, site preparation, user training and installation of the system. The system is put into use. This can be done in various ways. The new system can be phased in, according to application or location, and the old system gradually replaced. In some cases, it may be more cost-effective to shut down the old system and implement the new system all at once.

Systems maintenance and review involves the monitoring, evaluating and modifying of a system to make desirable or necessary improvements. Software needs to be maintained not because some of its modules or programs wear out and need to be replaced, but because there are often some residual errors remaining in the system which have to be removed as soon as they are discovered. This is an ongoing process, until the system stabilizes.


SOFTWARE DEVELOPMENT LIFE CYCLE
The software development life cycle is basically a process which is adopted and followed during the development of software. A software life cycle model depicts the significant phases or activities of a software project from conception until the product is retired. It specifies the relationships between project phases, including transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables. Typically, a life cycle model addresses the following phases of a software project: system requirements phase, system feasibility phase, system analysis phase, system design phase, system coding phase, system integration and testing phase, system documentation phase, system implementation phase, and system maintenance and support phase.

(i) System requirements phase: begins when an opportunity to add, improve, or correct a system is identified and formally requested through the presentation of a business case. The business case should, at a minimum, describe the proposal's purpose, identify the expected benefits, and explain how the proposed system supports one of the organization's business strategies. It should also identify alternative solutions and collect details on as many informational, functional, and network requirements as possible.

(ii) System feasibility phase: After analyzing the system requirements, the next step is to analyze the software requirements; in this sense the feasibility study is also called software requirement analysis. In this phase the development team communicates with customers, analyzes their requirements, and analyzes the system, making it possible to produce a report on the identified problem areas. From a detailed analysis of these areas, a document or report is prepared containing details such as the project plan or schedule, the estimated cost of developing and executing the system, and target dates for each phase of delivery of the developed system. This phase is the base of the software development process, since further steps taken in the software development life cycle are based on the analysis made here, so the analysis must be done carefully. The feasibility support documentation should be compiled and submitted for senior management or board study. The feasibility study document should provide an overview of the proposed project and identify expected costs and benefits in terms of economic, technical, and operational feasibility. The document should also describe alternative solutions and include a recommendation for approval or rejection. The document should be reviewed and signed off on by all affected parties.

(iii) System analysis phase: is the most critical step in completing development, acquisition, and maintenance projects. Careful planning, particularly in the early stages of a project, is necessary to coordinate activities and manage project risks effectively. The depth and formality of project plans should be commensurate with the characteristics and risks of a given project. Project plans refine the information gathered during the initiation phase by further identifying

the specific activities and resources required to complete a project. A critical part of a project manager's job is to coordinate discussions between user, audit, security, design, development, and network personnel to identify and document as many functional, security, and network requirements as possible. Primary items organizations should address in formal project plans include: system overview, roles and responsibilities, communication, defined deliverables, control requirements, risk management, standards, documentation, scheduling, budget, and testing.

(iv) System design phase: involves converting the informational, functional, and network requirements identified during the initiation and planning phases into unified design specifications that developers use to script programs during the development phase. Program designs are constructed in various ways. Using a top-down approach, designers first identify and link major program components and interfaces, then expand design layouts as they identify and link smaller subsystems and connections. Using a bottom-up approach, designers first identify and link minor program components and interfaces, then expand design layouts as they identify and link larger systems and connections. Contemporary design techniques often use prototyping tools that build mock-up designs of items such as application screens, database layouts, and system architectures. End users, designers, developers, database managers, and network administrators should review and refine the prototyped designs in an iterative process until they agree on an acceptable design. Audit, security, and quality assurance personnel should be involved in the review and approval process.

(v) System coding phase: involves converting design specifications into executable programs. Effective development standards include requirements that programmers and other project participants discuss design specifications before programming begins. The procedures help ensure programmers clearly understand program designs and functional requirements. Programmers use various techniques to develop computer programs. The large transaction-oriented programs associated with financial institutions have traditionally been developed using procedural programming techniques. Procedural programming involves the line-by-line scripting of logical instructions that are combined to form a program. Advancements in programming techniques include the concept of "object-oriented programming". Object-oriented programming centers on the development of reusable program routines (modules) and the classification of data types (numbers, letters, dollars, etc.) and data structures (records, files, tables, etc.).

(vi) System testing and integration phase: Software which is not tested is likely to be of poor quality, so in this phase the developed system is tested and reports are prepared about bugs or errors in the system. There are different levels and methods of testing, such as unit testing and system testing. Based on the need, the

testing methods are chosen and reports are prepared about bugs. After this process the system goes back to the development phase for correction of errors and is tested again. This process continues until the system is found to be error free. To ease the testing process, debuggers or testing tools are also available. The system is then integrated and tested as a whole. Primary tests include:
Acceptance Testing: End users perform acceptance tests to assess the overall functionality and interoperability of an application.
End-to-End Testing: End users and system technicians perform end-to-end tests to assess the interoperability of an application and other system components such as databases, hardware, software, or communication devices.
Functional Testing: End users perform functional tests to assess the operability of a program against predefined requirements. Functional tests include black-box tests, which assess the operational functionality of a feature against predefined expectations, and white-box tests, which assess the functionality of a feature's code.
Integration Testing: End users and system technicians perform integration tests to assess the interfaces of integrated software components.
Parallel Testing: End users perform parallel tests to compare the output of a new application against a similar, often the original, application.
System Testing: Technicians perform system tests to assess the functionality of an entire system.
Unit Testing: Programmers perform unit tests to assess the functionality of small modules of code.
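As a minimal illustration of the unit testing described above, a programmer might check one small module against predefined expectations like this; the module and figures are hypothetical examples, not from the original text.

# A minimal unit test: one small module of code checked against expectations.
import unittest

def monthly_interest(balance: float, annual_rate: float) -> float:
    """Module under test: one month of simple interest."""
    return balance * annual_rate / 12

class MonthlyInterestTest(unittest.TestCase):
    def test_typical_balance(self):
        self.assertAlmostEqual(monthly_interest(1200.0, 0.10), 10.0)

    def test_zero_balance(self):
        self.assertEqual(monthly_interest(0.0, 0.10), 0.0)

if __name__ == "__main__":
    unittest.main()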
(vii) System documentation phase: Organizations should maintain detailed documentation for each application and application system in production. Thorough documentation enhances an organization's ability to understand functional, security, and control features and improves its ability to use and maintain the software. The documentation should contain detailed application descriptions, programming documentation, and operating instructions. System documentation should include:
System Descriptions: narrative explanations of operating environments and the interrelated input, processing, and output functions of integrated application systems.
System Documentation: system flowcharts and models that identify the source and type of input information, processing and control actions (automated and manual), and the nature and location of output information.
System File Layouts: descriptions of collections of related records generated by individual processing applications. For example, personnel may need system file layouts to describe interim files, such as sorted deposit transaction files, in order to further define master file processing requirements.

(viii) System implementation phase: involves installing approved applications into production environments. Primary tasks include announcing the implementation schedule, training end users, and installing the product. Additionally, organizations should input and verify data, configure and test system and security parameters, and conduct post-implementation reviews. Management should circulate implementation schedules to all affected parties and should notify users of any implementation responsibilities.

(ix) System maintenance and support phase: involves making changes to hardware, software, and documentation to support the system's operational effectiveness. It includes making changes to improve a system's performance, correct problems, enhance security, or address user requirements. To ensure modifications do not disrupt operations or degrade a system's performance or security, organizations should establish appropriate change management standards and procedures. Change management (sometimes referred to as configuration management) involves establishing baseline versions of products, services, and procedures and ensuring all changes are approved, documented, and disseminated. Change controls should address all aspects of an organization's technology environment, including software programs, hardware and software configurations, operational standards and procedures, and project management activities. Management should establish change controls that address major, routine, and emergency software modifications and software patches. Quality assurance, security, audit, regulatory compliance, network, and end-user personnel should be appropriately included in change management processes. A risk and security review should be done whenever a system modification is implemented, to ensure controls remain in place.

The above software development phases are all vital for a system to be developed with quality and thus to achieve customer satisfaction, which is the main objective of any software development process.

Systems development phases
The Systems Development Life Cycle (SDLC) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation; these are explained in the section below. There are several Systems Development Life Cycle models in existence. The oldest model, that was originally regarded as "the Systems Development Life Cycle", is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages generally follow the same basic steps, but many different waterfall methodologies give the steps different names, and the number of steps seems to vary between 4 and 7. There is no definitively correct Systems Development Life Cycle model, but the steps can be characterized and divided in several ways.

The SDLC can be divided into ten phases during which defined IT work products are created or modified. The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters. Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.

Initiation/planning
The aim is to generate a high-level view of the intended project and determine the goals of the project. The feasibility study is sometimes used to present the project to upper management in an attempt to gain funding. Projects are typically evaluated in three areas of feasibility: economical, operational or organizational, and technical. Furthermore, it is also used as a reference to keep the project on track and to evaluate the progress of the MIS team.[8] The MIS is also a complement of those phases. This phase is also called the analysis phase.

Requirements gathering and analysis

The goal of systems analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking down the system into different pieces, drawing diagrams to analyze the situation, analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined. Requirements gathering sometimes requires individuals or teams from both the client and the service provider side in order to obtain detailed and accurate requirements.

Design

Strengths and weaknesses
Few people in the modern computing world would use a strict waterfall model for their Systems Development Life Cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like Agile computing, but it is still a term widely in use in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves more to a structured environment. The disadvantage of using the SDLC methodology appears when there is a need for iterative development (e.g., web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing SDLC from a strength or weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed. A comparison of the strengths and weaknesses of SDLC:

Strengths and Weaknesses of SDLC [9]

Strengths:
Control.
Monitors large projects.
Detailed steps.
Evaluate costs and completion targets.
Documentation.
Well defined user input.
Ease of maintenance.
Development and design standards.
Tolerates changes in MIS staffing.

Weaknesses:
Increased development time.
Increased development cost.
Systems must be defined up front.
Rigidity.
Hard to estimate costs; project overruns.
User input is sometimes limited.

An alternative to the SDLC is Rapid Application Development, which combines prototyping, Joint Application Development and the implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.

It should not be assumed that just because the waterfall model is the oldest SDLC model it is the most efficient one. At one time the model was beneficial mostly to the world of automating activities that were assigned to clerks and accountants. However, the world of technological evolution is demanding that systems have greater functionality that would assist help desk technicians/administrators or information technology specialists/analysts.

SDLC MODELS

Software development projects are complex. To deal with these complexities, many developers adhere to a core set of development principles. These principles define the field of software engineering. A major component of this field is the lifecycle model. The lifecycle model describes the steps to follow when developing software, from the initial concept stage to the release, maintenance, and subsequent upgrading of the software. Many different lifecycle models currently exist. Each has advantages and disadvantages in terms of time-to-release, quality, and risk management. Consider how you decide what requirements and specifications the project must meet and how you deal with changes to them. Also consider when you need to meet these requirements and what happens if you do not meet a deadline.

System development lifecycle models are a set of traditional methodologies that were created to develop big and complex systems that demanded teams of architects, analysts, programmers, testers and users work together to create the millions of lines of custom-written code that drove the enterprises. The Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. The lifecycle model is a foundation for the entire development process. Good decisions can improve the quality of the software you develop and decrease the time it takes to develop it.

Various SDLC methodologies have been developed to guide the processes involved, including:
Waterfall model
V-shaped model
Spiral model
Build and Fix
Synchronize and stabilize
JAD (Joint Application Development)
RAD (Rapid Application Development) - Prototyping model
Incremental model.

WATERFALL MODEL
The waterfall model is the most well-known model in software development, also known as the traditional software development lifecycle. The waterfall or linear sequential model illustrates a sequenced, systematic approach, which starts with analysis and progresses through each stage to testing and maintenance/completion. The stages in the development model can be characterized and divided up in different ways, including the following:
Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.
Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.
Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
Implementation: The real code is written here.

Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
Maintenance: What happens during the rest of the software's life: changes, correction, additions, and moves to a different computing platform and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.

Advantages:
Simple and easy to use.
Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.

Disadvantages:
Adjusting scope during the life cycle can kill a project.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Poor model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Poor model where requirements are at a moderate to high risk of changing.

The waterfall model in software development is best suited to environments with stable product definition, for example, building a well-defined maintenance release of an existing product or porting an existing product to a new platform.

V-SHAPED MODEL
The V-model is an internationally recognized development standard for IT systems which uniformly and bindingly lays down what has to be done [procedure], how the tasks are to be performed [methods], and what is to be used to carry this out [tools]. The conventional V-model represents the development process in the form of a V shape. The right side of the V represents the testing, where the system is validated against the specifications defined on the left side. The meeting point of the V represents the actual development.

The V-shaped life cycle is also a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more than in the waterfall model, though. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation. Requirements begin the life cycle model just as in the waterfall model. Before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified in requirements gathering. The high-level design phase focuses on system architecture and design. An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together. The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well. The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V, where the test plans developed earlier are now put to use.

Advantages:
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success than the waterfall model, due to the development of test plans early in the life cycle.
Works well for small projects where requirements are easily understood.

Disadvantages:
Very rigid, like the waterfall model.
Little flexibility; adjusting scope is difficult and expensive.
Software is developed during the implementation phase, so no early prototypes of the software are produced.
The model doesn't provide a clear path for problems found during testing phases.

This model is used for systems in which reliability is very important, e.g., systems developed to monitor the state of patients, or software used in radiation therapy machines.

SPIRAL MODEL (Boehm's model)
This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favoured for large, expensive, and complicated projects. In this model the software is developed in a series of incremental releases, with the early stages being either paper models or prototypes. Later iterations become increasingly more complete versions of the product. Depending on the variant, it may have 3-6 task regions (framework activities); our case will consider a 6-task-region model. These regions are:
The customer communication task: to establish effective communication between developer and customer.
The planning task: to define resources, timelines and other project-related information.
The risk analysis task: to assess both technical and management risks.
The engineering task: to build one or more representations of the application.
The construction and release task: to construct, test, install and provide user support (e.g., documentation and training).
The customer evaluation task: to obtain customer feedback based on the evaluation of the software representation created during the engineering stage and implemented during the install stage.

The evolutionary process begins at the centre position and moves in a clockwise direction. Each traversal of the spiral typically results in a deliverable. For example, the first and second spiral traversals may result in the production of a product specification and a prototype, respectively. Subsequent traversals may then produce more sophisticated versions of the software. An important distinction between the spiral model and other software models is the explicit consideration of risk. There are no fixed phases such as specification or design phases in the model, and it encompasses other process models. For example, prototyping may be used in one spiral to resolve requirement uncertainties and hence reduce risks. This may then be followed by a conventional waterfall development. Note that each passage through the planning stage results in an adjustment to the project plan (e.g., cost and schedule are adjusted based on the feedback from the customer; the project manager may adjust the number of iterations required to complete the software). Each of the regions is populated by a set of work tasks, called a task set, that are adapted to the characteristics of the project to be undertaken. For small projects the number of tasks and their formality is low. Conversely, for large projects the reverse is true.

In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Advantages
- A high amount of risk analysis.
- Good for large and mission-critical projects.
- Software is produced early in the software life cycle.

Disadvantages
- Can be a costly model to use.
- Risk analysis requires highly specific expertise.
- The project's success is highly dependent on the risk analysis phase.
- Does not work well for smaller projects.

This model should be considered for projects where risks are high, requirements must be refined and user needs are very important.

BUILD AND FIX
The Build-and-Fix model is a trial-and-error approach: a simple approach of product construction without specification or any attempt at design. This model is adequate for simple software programs (100 to 200 lines of code); however, it is unacceptable for large and complex systems. Techniques used in the initial years of software development gave rise to the term Build-and-Fix model. In fact, the model resulted in a number of project failures, because the product was not constructed using proper specification and design; instead, the product was reworked a number of times to satisfy the client.

(Build-and-Fix cycle: build the first version, then modify it until the customer is satisfied, then release it to the users.)

Advantages
- No time spent on "overhead" such as planning, documentation, quality assurance, standards enforcement or other non-coding activities.
- Requires little experience.
Disadvantages
- Dangerous: there is no means of assessing quality or identifying risks.
- Fundamental flaws in the approach do not show up quickly, often requiring work to be thrown out.
This model is of historical importance now.

SYNCHRONIZE AND STABILIZE
The synchronize-and-stabilize model scales up a loosely structured, small-team ("hacker") style of product development. The features of this model are:
- Many small teams (3 to 8 developers per team) work in parallel.
- Changes are synchronized frequently so components will work together: developers check in their code by a particular time, so that a new build (complete recompile) is done by the end of the day or the next morning; a defect that "breaks" the build must be fixed immediately.
- Features are evolved incrementally, with occasional innovations: start with a "vision statement"; select features and establish their priority with user input; developers are free to innovate or adapt to unforeseen competitive opportunities or threats; testing is continual during development.
- The product is stabilized at 3 or 4 milestone junctures in the project lifetime: thorough internal and external testing (beta sites); almost all detected errors are fixed; a "zero-bug" release at the last milestone.
The process is also called a "milestone", "daily build", "nightly build", or "zero-defect" process. The overall strategy is to quickly introduce products that are "good enough" to capture a mass market, and then improve the product, selling multiple product versions and upgrades.
RAD (Rapid Application Development) or Rapid Prototyping or Evolutionary model
A prototype is a working model of an information system application. The prototype does not contain all the features or perform all the necessary functions of the final system; rather, it includes sufficient elements to enable individuals to use the proposed system, determine what they like and don't like, and identify features to be added or changed. Rapid Application Development (RAD) is an incremental software development process model that emphasises a very short development cycle, typically 60-90 days. The RAD approach encompasses the following phases:
a. Business Modelling - The information flow among business functions is modeled in a way that answers the following questions: 1. What information drives the business process? 2. What information is generated? 3. Who generates it? 4. Where does the information go? 5. Who processes it?
b. Data Modeling - The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics of

each object are identified, and the relationships between these objects are defined.
c. Process Modeling - The data objects defined in the data-modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting or retrieving a data object.
d. Application Generation - RAD assumes the use of RAD tools such as VB, VC++, Delphi, etc., rather than creating software using conventional third-generation programming languages. RAD works to reuse existing program components or create reusable components. In all cases, automated tools are used to facilitate construction of the software.
e. Testing and Turnover - Since the RAD process emphasizes reuse, many of the program components have already been tested. This minimizes the testing and development time.
Prototyping is an effective tool for demonstrating how a design meets a set of requirements. You can build a prototype, adjust the requirements, and revise the prototype several times until you have a clear picture of the overall objectives. In addition to clarifying the requirements, a prototype also defines many areas of the design simultaneously.
Advantages
- Customers can see steady progress.
- The approach is useful when requirements are changing rapidly, when the customer is reluctant to commit to a set of requirements, or when no one fully understands the application area.
Disadvantages
- It is impossible to know at the outset of the project how long it will take.
- There is no way to know the number of iterations that will be required.
This model can be employed on most types of acquisitions; however, it is typically employed on medium- to high-risk systems. It is more applicable to new systems than to upgrading existing software. The developing and using organizations must be flexible and willing to work with evolving prototypes.
INCREMENTAL MODEL
In this model, maintenance is no longer a separate stage: "maintenance" phases become subsequent cycles through the waterfall sequence. This means that risk is spread, and each cycle produces a usable system, although the first may be a pre-production prototype. The product is designed, implemented, integrated and tested as a series of incremental builds, using stepwise refinement and iterative enhancement (prototyping serves as a means of refinement and enhancement of the specification).

Incremental release is better than waterfall, but it requires strict configuration management, so that each build is incorporated into the existing structure without destroying what has already been built, and so that continuous change of requirements is avoided. Multiple development cycles take place here, making the life cycle a multi-waterfall cycle. Cycles are divided up into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases. A working version of the software is produced during the first iteration, so you have working software early in the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.
Advantages
- Generates working software quickly and early in the software life cycle.
- More flexible; less costly to change scope and requirements.
- Easier to test and debug during a smaller iteration.
- Easier to manage risk, because risky pieces are identified and handled during their own iteration.
- Each iteration is an easily managed milestone.
Disadvantages
- Each phase of an iteration is rigid, and phases do not overlap each other.
- Problems may arise pertaining to system architecture, because not all requirements are gathered up front for the entire software life cycle.
This model is good for projects where requirements are known at the beginning but which require functionality early in the project, or which can benefit from the feedback of earlier cycles. It is best used on low- to medium-risk programs.
JAD (Joint Application Development)
The Joint Application Development (JAD) methodology involves the client or end user in the design and development of an application. This is accomplished through a series of collaborative workshops called JAD sessions. Two employees of IBM, Chuck Morris and Tony Crawford, developed the JAD methodology in the late 1970s and began teaching the approach in the 1980s. In contrast to the waterfall approach, JAD is thought to lead to shorter development times and greater client satisfaction, both of which stem from the constant involvement of the client

throughout the development process. With the traditional approach to systems development, by contrast, the developer investigates the system requirements and develops an application, with client input consisting of a series of interviews. The five phases of JAD are:
- JAD project definition;
- research on user requirements;
- preparation for the JAD session;
- conducting and facilitating the JAD session itself; and
- preparing and obtaining approval of the final document that incorporates all decisions made.
JAD is a useful process for gathering cross-functional information and different opinions effectively. Its usage keeps expanding, and thus its definition keeps changing. Although different people might have different understandings and applications of JAD, the essence of JAD is the facilitated session. The basic components of JAD sessions are recognized and agreed upon by JAD practitioners, who also provide guidelines for conducting JAD sessions; properly following these guidelines can increase the success of JAD sessions. Automated JAD, especially used in conjunction with Group Support Systems, looks very promising, although some experts remain skeptical.
The SDLC is a systems approach to problem solving and is made up of several phases. A developer may or may not apply an SDLC but, based on my experience, it is highly advantageous to use one. We once built a system without planning much about how we would execute, start and finish it. The result was a complete catastrophe! We suffered a "groping in the dark" syndrome, regretted ever working with one another, and incurred too many expenses and too much time delay. Learning from that lesson, it is highly recommended to properly plan the flow of activities in building a system, using the SDLC as one of the tools in management.
COMPUTER-AIDED SOFTWARE ENGINEERING
Computer-Aided Software Engineering (CASE), in the field of software engineering, is the scientific application of a set of tools and methods to software, resulting in high-quality, defect-free, and maintainable software products. It also refers to methods for the development of information systems together with automated tools that can be used in the software development process. The term "computer-aided software engineering" (CASE) can refer to the software used for the automated development of systems software, i.e., computer code. CASE functions include analysis, design, and programming. CASE tools automate methods for designing, documenting, and producing structured computer code in the desired programming language. Two key ideas of CASE are:
- the use of computer assistance in software development and/or software maintenance processes, and
- an engineering approach to software development and/or maintenance.
Some typical CASE tools are configuration management tools, data modeling tools, model transformation tools, refactoring tools, source code generation tools, and the Unified Modeling Language.
Use of Computer-Aided Software Engineering (CASE) Tools

- CASE tools automate tasks required in a system development effort and enforce adherence to the SDLC.
- Upper-CASE tools focus on activities associated with the early stages of systems development.
- Lower-CASE tools focus on the later, implementation stage of systems development.
- Integrated-CASE (I-CASE) tools provide links between upper- and lower-CASE packages, allowing lower-CASE packages to generate program code from designs produced with upper-CASE packages.

CASE Features: diagrams, documentation, data dictionary, team coordination, prototyping, code generation, reverse engineering.
CASE Types: full development integration (I-CASE); analysis and design (upper CASE); implementation and maintenance (lower CASE).
CASE Repository: the data dictionary holds data element definitions and descriptions and ensures consistency. The repository is much more: a database with linkages for all system development products and activities, providing integration even across different CASE tools.
CASE Tools: Visio 2000 (Microsoft), Visible Analyst (Visible Systems), ER/Studio (Embarcadero), ERwin (Computer Associates), Oracle Designer (Oracle), PowerDesigner (Sybase).
STRUCTURED METHODOLOGIES
Structured methodologies have been used to document, analyze, and design information systems. Structured refers to the fact that the techniques are step by step, with each step building on the previous one.

Structured methodologies are top-down, progressing from the highest, most abstract level to the lowest level of detail, that is, from the general to the specific. Structured development methods are process-oriented, focusing primarily on modeling the processes, or actions, that capture, store, manipulate, and distribute data as the data flow through a system. These methods separate data from processes: a separate programming procedure must be written every time someone wants to take an action on a particular piece of data, and the procedures act on data that the program passes to them. The structured methodologies are:
- Structured Systems Analysis
- Structured Systems Design
- Structured Programming.
Characteristics of structured methods: they are used for requirements specification and systems design; they structure a project into small, well-defined activities; they specify the sequence and interaction of these activities; they use diagrammatic and other modeling techniques; they give a precise (structured) definition; and they are understandable by both users (clients) and developers.
Logical System Specification (Business Systems Design): the broad specification from systems analysis; technical solutions to the requirements are evaluated; a detailed logical (non-technical) design is developed which shows clearly how the new system will operate within the business; narrative and system models are used.
Physical Design: the logical design is converted to a physical (technical) one; file specifications or database definitions; program specifications; screen and report specifications.

SSLC (Structured System Life Cycle), also known as SSADM (Structured Systems Analysis and Design Method)
The three most important techniques that are used in SSADM are:
Logical Data Modeling

This is the process of identifying, modeling and documenting the data requirements of the system being designed. The data are separated into entities

(things about which a business needs to record information) and relationships (the associations between the entities).
Data Flow Modeling

This is the process of identifying, modeling and documenting how data moves around an information system. Data Flow Modeling examines processes (activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow).
Entity Behavior Modeling
This is the process of identifying, modeling and documenting the events that affect each entity and the sequence in which these events occur.

Stages The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following. 1. Analysis of the current system Also known as: feasibility stage. Analyze the current situation at a high level. A Data Flow Diagram (DFD) is used to describe how the current system works and to visualize known problems. The following steps are part of this stage:

- Develop a Business Activity Model. A model of the business activity is built. Business events and business rules are also investigated as an input to the specification of the new automated system.
- Investigate and define requirements. The objective of this step is to identify the problems associated with the current environment that are to be resolved by the new system. It also aims to identify the additional services to be provided by the new system and the users of the new system.
- Investigate current processing. This step investigates the information flow associated with the services currently provided, and describes it in the form of a Data Flow Model. At this point, the Data Flow Model represents the current services with all their deficiencies; no attempt is made to incorporate required improvements or new facilities.
- Investigate current data. This step identifies and describes the structure of the system data, independently of the way the data are currently held and organized. It produces a model of data that supports the current services.
- Derive logical view of current services. The objective of this step is to develop a logical view of the current system that can be used to understand problems with the current system.

2. Outline business specification Also known as: logical system specification stage. This stage consists of 2 parts. The first part is researching the existing environment. In this part, system requirements are identified and the current business environment is modeled. Modeling consists of creating a DFD and LDS (Logical Data Structure) for processes and data structures that are part of the system. In the second part, BSO (Business Systems Options), 6 business options are presented. One of the options is selected and built. The following steps are part of this stage:

Define BSOs. This step is concerned with identifying a number of possible system solutions that meet the defined requirements from which the users can select. Select BSO. This step is concerned with the presentation of the BSOs to users and the selection of the preferred option. The selected option defines the boundary of the system to be developed in the subsequent stages.

3. Detailed business specification Also known as: requirements specification stage. To assist the management to make a sound choice, a number of business system options, each describing the scope and functionalities provided by a particular development/implementation approach, are prepared and presented to them. These options may be supported by technical documentation such as Work Practice Model, LDM (Logical Data Model) and DFD. They also require financial and risk assessments to be prepared, and need to be supported by outline implementation descriptions. The following steps are part of this stage:

- Define required system processing. This step amends the requirements to reflect the selected Business System Option, describes the required system in terms of system data flows, and defines the user roles within the new system.
- Develop required data model. This step is undertaken in parallel with the above step. The LDM of the current environment is extended to support all the processing in the selected business system option.
- Derive system functions. During the parallel definition of data and processing, additional events are identified which cause existing functions to be updated and new functions to be defined. Service level requirements for each function are also identified in this step.
- Develop user job specifications. A Work Practice Model is developed to document the understanding of the user jobs concerned.
- Enhance required data model. Its objective is to improve the quality of the required system LDM by the application of relational data analysis (also known as normalization).
- Develop specification prototypes. These describe selected parts of the required system in an animated form, for demonstration to the users. The purpose is to demonstrate that the requirements have been properly understood and to establish additional requirements concerning the style of the user interface.
- Develop processing specification. This step is principally concerned with defining the detailed update and enquiry processing for the required system.
- Confirm system objectives. During stages 1 and 3, the requirements will have been recorded, as they are identified, in the user requirements. This step represents the final review of the requirements before the completion of the Definition of Requirements stage.

4. Logical data design Also known as: logical system specification stage. In this stage, technically feasible options are chosen. The development/implementation environments are specified based on this choice. The following steps are part of this stage:

- Define TSOs. Up to 6 technical options (specifying the development and implementation environments) are produced.
- Select TSO. The most favorable TSO is selected; it defines the environments for the subsequent stages.

5. Logical process design Also known as: logical system specification stage. In this stage, logical designs and processes are updated. Additionally, the dialogs are specified as well. The following steps are part of this stage:

- Define user dialogues. This step defines the structure of each dialogue required to support the on-line functions and identifies the navigation requirements, both within the dialogue and between dialogues.
- Define update processes. This step completes the specification of the database updating required for each event and defines the error handling for each event.
- Define enquiry processes. This step completes the specification of the database enquiry processing and defines the error handling for each enquiry.

6. Physical design
The objective of this stage is to specify the physical data and process design, using the language and features of the chosen physical environment and incorporating installation standards. The following activities are part of this stage:

- Prepare for physical design: learn the rules of the implementation environment, review the precise requirements for logical-to-physical mapping, and plan the approach.
- Complete the specification of functions.
- Incrementally and repeatedly develop the data and process designs.
Alternatively, the method can be summarized as follows:

SSADM application development projects are divided into five modules that are further broken down into a hierarchy of stages, steps and tasks: 1. Feasibility Study -- the business area is analyzed to determine whether a system can cost effectively support the business requirements. 2. Requirements Analysis -- the requirements of the system to be developed are identified and the current business environment is modeled in terms of the processes carried out and the data structures involved. 3. Requirements Specification -- detailed functional and non-functional requirements are identified and new techniques are introduced to define the required processing and data structures.

4. Logical System Specification -- technical system options are produced, and the logical design of update and enquiry processing and of system dialogues is carried out. 5. Physical Design -- a physical database design and a set of program specifications are created using the logical system specification and the technical system specification.

TOOLS FOR SYSTEM ANALYSIS AND DESIGN: Data flow diagram

Data Flow Diagram example.
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design). On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process. A DFD provides no information about the timing or ordering of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD). Data-flow diagrams were invented by Larry Constantine, the original developer of structured design,[2] based on Martin and Estrin's "data-flow graph" model of computation.

Data-flow diagrams (DFDs) are one of the three essential perspectives of the structured-systems analysis and design method SSADM. The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. With a data-flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's data-flow diagrams can be drawn up and compared with the new system's data-flow diagrams in order to implement a more efficient system. Data-flow diagrams can be used to provide the end user with a physical idea of where the data they input ultimately has an effect upon the structure of the whole system, from order to dispatch to report. How any system is developed can be determined through a data-flow diagram. In the course of developing a set of levelled data-flow diagrams, the analyst/designer is forced to address how the system may be decomposed into component sub-systems, and to identify the transaction data in the data model. There are different notations for drawing data-flow diagrams, defining different visual representations for processes, data stores, data flows, and external entities.[3]
Developing a data-flow diagram
Top-down approach
1. The system designer makes "a context level DFD" or Level 0, which shows the "interaction" (data flows) between "the system" (represented by one process) and "the system environment" (represented by terminators).
2. The system is "decomposed in lower-level DFD (Level 1)" into a set of "processes, data stores, and the data flows between these processes and data stores".
3. Each process is then decomposed into an "even-lower-level diagram containing its subprocesses".
4. This approach "then continues on the subsequent subprocesses", until a necessary and sufficient level of detail is reached, which is called the primitive process (aka "chewable in one bite").
A DFD is also a virtually designable diagram that technically or diagrammatically describes the inflow and outflow of data or information provided by the external entities.
Event partitioning approach
Event partitioning was described by Edward Yourdon in Just Enough Structured Analysis.[4]
A context-level data flow diagram created using Select SSADM: this level shows the overall context of the system and its operating environment, and shows the whole system as just one process. It does not usually show data stores, unless they are "owned" by external systems, e.g. are accessed by but not maintained by this system; however, these are often shown as external entities.[5]
Level 1 (high-level diagram)
A Level 1 data flow diagram for the same system.

This level (Level 1) shows all processes at the first level of numbering, data stores, external entities and the data flows between them. The purpose of this level is to show the major high-level processes of the system and their interrelation. A process model will have one, and only one, Level-1 diagram. A Level-1 diagram must be balanced with its parent context-level diagram, i.e. there must be the same external entities and the same data flows; these can be broken down to more detail in Level 1, e.g. the "inquiry" data flow could be split into "inquiry request" and "inquiry results" and still be valid.[5]
Level 2 (low-level diagram)
A Level 2 data flow diagram showing the "Process Enquiry" process for the same system.
This level is a decomposition of a process shown in a Level-1 diagram; as such, there should be a Level-2 diagram for each and every process shown in a Level-1 diagram.

Object-modeling technique
The object-modeling technique (OMT) is an object modeling language for software modeling and designing. It was developed circa 1991 by Rumbaugh, Blaha, Premerlani, Eddy and Lorensen as a method to develop object-oriented systems and to support object-oriented programming; it was developed as an approach to software development. The entire OMT software development process has four phases: analysis, system design, object design, and implementation of the software. Most of the modeling is performed in the analysis phase. The purposes of modeling, according to Rumbaugh (1991), are:

testing physical entities before building them (simulation), communication with customers, visualization (alternative presentation of information), and reduction of complexity.

OMT has proposed three main types of models:

- Object model: the object model represents the static and most stable phenomena in the modeled domain.[3] Main concepts are classes and associations, with attributes and operations. Aggregation and generalization (with multiple inheritance) are predefined relationships.[2]
- Dynamic model: the dynamic model represents a state/transition view on the model. Main concepts are states, transitions between states, and events to trigger transitions. Actions can be modeled as occurring within states. Generalization and aggregation (concurrency) are predefined relationships.[2]
- Functional model: the functional model handles the process perspective of the model, corresponding roughly to data flow diagrams. Main concepts are process, data store, data flow, and actors.[2]

OMT is a predecessor of the Unified Modeling Language (UML); many OMT modeling elements are common to UML. The recommended method incorporates the following activities (Rumbaugh et al., 1991:261ff):
1. Develop a Problem Statement.
2. Build an Object Model: 1. Identify object classes. 2. Develop a data dictionary for classes, attributes, and associations. 3. Add associations between classes. 4. Add attributes for objects and links. 5. Organize and simplify object classes using inheritance. 6. Test access paths using scenarios and iterate the above steps as necessary. 7. Group classes into modules, based on close coupling and related function.
3. Build a Dynamic Model: 1. Prepare scenarios of typical interaction sequences. 2. Identify events between objects and prepare an event trace for each scenario. 3. Prepare an event flow diagram for the system. 4. Develop a state diagram for each class that has important dynamic behavior. 5. Check for consistency and completeness of events shared among the state diagrams.
4. Build a Functional Model: 1. Identify input and output values. 2. Use data flow diagrams as needed to show functional dependencies. 3. Describe what each function does. 4. Identify constraints. 5. Specify optimization criteria.
5. Verify, iterate, and refine the three models: 1. Add the most important operations to the object model. 2. Verify that classes, associations, attributes and operations are consistent and complete, and check them against the problem statement. 3. Iterate the steps to complete the analysis.
A remark concerning the method is that it refers exclusively to activities using concepts from the modeling language, i.e., classes, attributes, etc. Hence, the focus is on the representation of enterprise models.
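As a loose illustration of the object-model concepts named above (classes, attributes, operations, an association, and generalization), here is a minimal Python sketch; the class names and data are illustrative assumptions, not part of OMT itself.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    order_no: int            # attribute
    amount: float

@dataclass
class Customer:
    name: str
    orders: List[Order] = field(default_factory=list)  # association (one-to-many)

    def place_order(self, order: Order) -> None:       # operation
        self.orders.append(order)

    def total_spent(self) -> float:                    # operation
        return sum(o.amount for o in self.orders)

@dataclass
class CreditCustomer(Customer):                        # generalization (inheritance)
    credit_limit: float = 0.0

c = CreditCustomer(name="Asha", credit_limit=5000.0)
c.place_order(Order(order_no=1, amount=1200.0))
print(c.total_spent())  # 1200.0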

STRUCTURED SYSTEMS ANALYSIS
Analysis is the study of a problem prior to taking some action. It is the detailed study of a business activity, area or application, usually leading to the specification of a new system along with its boundaries. Structured analysis is the use of graphical documentation tools to produce a new kind of functional specification: a structured specification. The objective is to determine exactly what must be done to solve the problem. System analysis involves a detailed study of:

- the information needs of the organization and its end users;
- existing information systems (their activities, resources and products); and
- the expected information system (the capabilities of the IS required to meet the information needs of users).
System analysis means identifying, understanding and examining the system in order to achieve the predetermined goals/objectives of the system. System analysis is carried out with the following objectives: to know how a system currently operates, and to identify the users' requirements for the proposed system.
Structured Analysis tools
Structured analysis is a set of techniques and graphical tools that allow the analyst to develop a new kind of system specification that is easily understandable to the user. Analysts work primarily with their wits, pencil and paper. Structured analysis tools help the system analyst to document the specification of a system to be built. The main tools used for this purpose are:
- Data Flow Diagrams (DFD)
- Data Dictionary
- Process Specifications (Structured English, decision trees, decision tables, action diagrams)
- Entity-Relationship diagrams (E-R diagrams) and
- State Transition diagrams.
DATA FLOW DIAGRAM (DFD)
A DFD is a structured, diagrammatic technique for showing the functions performed by a system and the data flowing into, out of, and within it. A Data Flow Diagram (DFD) is a graphical representation of the "flow" of data through an information system. A data flow diagram can also be used for the visualization of data processing (structured design). It is common practice for a designer to draw a context-level DFD first, which shows the interaction between the system and outside entities. This context-level DFD is then "exploded" to show more detail of the system being modeled. In analyzing a business, several sets of DFDs are drawn. Initial DFDs might model the existing system (flaws and all), while later DFDs may model a solution to the problem being analyzed. For these solution DFDs, both a physical and a logical DFD are drawn.
Physical DFDs represent physical files and transactions. They are often used to describe or analyze the current system and show the actual implementation in the organization: how things occur, or how physical processes work, rather than system processes. They use names and terms from the usage world: things that can occur in a physical flow, such as people, departments, and physical objects such as packages.
Logical or conceptual DFDs can be used to represent business functions or processes. They show what happens rather than how it happens.

They allow clear perception of the system without implementation details. It is easier to perform analysis without the limitations of physical devices, but logical flows are harder to produce. The four views relate as follows:
- What the system does: the Current Logical DFD
- How it does it: the Current Physical DFD
- What it should do: the Required Logical DFD
- How it should do it: the Required Physical DFD

The 'Context Diagram' is an overall, simplified view of the target system, which contains only one process box and the primary inputs and outputs.

DFD Principles The general principle in Data Flow Diagramming is that a system can be decomposed into subsystems, and subsystems can be decomposed into lower level subsystems, and so on. Each subsystem represents a process or activity in which data is processed. At the lowest level, processes can no longer be decomposed. Each 'process' (and from now on, by 'process' we mean subsystem and activity) in a DFD has the characteristics of a system. Just as a system must have input and output (if it is not dead), so a process must have input and output. Data enters the system from the environment; data flows between processes within the system; and data is produced as output from the system. Symbols for drawing DFD

General rules for DFDs External Entities It is normal for all the information represented within a system to have been obtained from, and/or to be passed onto, an external source or recipient. These external entities may be duplicated on a diagram, to avoid crossing data flow lines. Where they are duplicated a stripe is drawn across the left hand corner, like this. The addition of a lowercase letter to each entity on the diagram is a good way to uniquely identify them. Processes When naming processes, avoid glossing over them, without really understanding their role. Indications that this has been done are the use of vague terms in the descriptive title area - like 'process' or 'update'. The most important thing to remember is that the description must be meaningful to whoever will be using the diagram. Data Flows Double headed arrows can be used (to show two-way flows) on all but bottom level diagrams. Furthermore, in common with most of the other symbols used, a data flow at a particular level of a diagram may be decomposed to multiple data flows at lower levels. Data Stores Each data store should be given a reference letter, followed by an arbitrary number. These reference letters are allocated as follows: 'D' - indicates a permanent computer file 'M' - indicates a manual file 'T' - indicates a transient store, one that is deleted after processing. In order to avoid complex flows, the same data store may be drawn several times on a diagram. Multiple instances of the same data store are indicated by a double vertical bar on their left hand edge. The procedure for producing a Data Flow Diagram is to: 1. Identify and list external entities providing inputs/receiving outputs from system; 2. Identify and list inputs from/outputs to external entities; 3. Create a context diagram with system at center and external entities sending and receiving data flows; 4. Identify the business functions included within the system boundary;

5. Identify the data connections between business functions;
6. Confirm through personal contact that sent data is received and vice-versa;
7. Trace and record what happens to each of the data flows entering the system (data movement, data storage, data transformation/processing);
8. Attempt to connect any diagram segments into a rough draft;
9. Verify all data flows have a source and destination;
10. Verify data coming out of a data store goes in;
11. Redraw to simplify; ponder and question the result;
12. Review with the "informed";
13. Explode and repeat the above steps as needed.

Example

DATA DICTIONARY A Data Dictionary is a repository of data about data. A Data Dictionary is an organized collection of logical definitions and representations of all data elements that occur in a system. The data dictionary is used to define the system clearly and precisely. A data dictionary is (like

any other dictionary) a compilation of all the terms, words, data stores, data flows, etc. used in the system. Each term is defined clearly and all the terms are listed alphabetically. Within the context of a DBMS, a data dictionary is a read-only set of tables and views; the data dictionary is a database in its own right. Amongst other things, a data dictionary holds the following information:
- precise definitions of data elements
- usernames, roles and privileges
- schema objects
- integrity constraints
- stored procedures and triggers
- general database structure
- audit information
- space allocations.
One benefit of a well-prepared data dictionary is consistency between data items across different tables. For ex: several tables may hold telephone numbers; using a data dictionary, the format of this telephone number field will be consistent. When an organization builds an enterprise-wide data dictionary, it may include both semantic and representational definitions for data elements. The semantic components focus on creating precise meanings of data elements. Representation definitions include how data elements are stored in a computer structure, such as an integer, string or date format. Data dictionaries are one step along a pathway of creating precise semantic definitions for an organization.
Contents of a Data Dictionary:
- data element descriptions
- data structure descriptions
- data flow descriptions
- data store descriptions
- process descriptions
- entity descriptions
- glossary entry descriptions

Initially, data dictionaries are sometimes simply a collection of database columns and definitions of the meanings and types the columns contain. Data dictionaries are more precise than glossaries (terms and definitions) because they frequently have one or more representations of how data is structured. Data dictionaries are usually separate from data models, since data models usually include complex relationships between data elements.
Example
Consider a data item yrs_in_co. The data dictionary notation for this item would be:
Name: yrs_in_co
Alias: yrs_wrked
Where used/how used: input to Employee_details process
Description: number of years for which the employee has been working in the organization
Additional information: maximum value 25 (since the company was set up 25 years back).
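An entry of this kind can also be held in machine-readable form, which is how CASE repositories use it. The Python sketch below is a minimal illustration only; the field names are assumptions, not a standard notation.

# Illustrative machine-readable data dictionary entry for yrs_in_co.
data_dictionary = {
    "yrs_in_co": {
        "alias": "yrs_wrked",
        "where_used": "input to Employee_details process",
        "description": "number of years the employee has worked in the organization",
        "type": "integer",
        "max_value": 25,  # company was set up 25 years back
    },
}

def validate(item: str, value: int) -> bool:
    """Check a value against the dictionary definition of a data element."""
    entry = data_dictionary[item]
    return isinstance(value, int) and 0 <= value <= entry["max_value"]

print(validate("yrs_in_co", 12))  # True
print(validate("yrs_in_co", 30))  # False, exceeds the defined maximum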

PROCESS SPECIFICATIONS
The Process Specification, or miniature specification (mini-spec), describes the exact events that take place inside a process. Its context is not unique to "business activity" but can be applied to any organizational activity. It is the description of what is happening inside each bottom-level, primitive bubble in a dataflow diagram. The goals of producing process specifications are to:
- reduce process ambiguity;
- obtain a precise description of what is accomplished; and
- validate the system design, including data flow diagrams and the data dictionary.
The process specification defines what must be done in order to transform inputs into outputs. It is a detailed set of instructions outlining a business procedure that each elementary-level business activity is expected to carry out. Process specifications are commonly included as an integral component of a requirements document in systems development. The various tools that can be used to produce a process specification are:
(i) Structured English: is used during the analysis stage of a project to identify business processes. It is the use of the English language with the syntax of structured programming. It aims at getting the benefits of both programming logic and natural language. Structured English consists of the following elements:
- operation statements written as English phrases executed from the top down;
- conditional blocks indicated by keywords such as IF, THEN, and ELSE;
- repetition blocks indicated by keywords such as DO, WHILE, and UNTIL.
Use the following guidelines when writing Structured English:
- statements should be clear and unambiguous;
- use one line per logical element;
- all logic should be expressed in operational, conditional, and repetition blocks;
- logical blocks should be indented to show relationships;
- keywords should be capitalized.
Examples of common keywords: START, BEGIN, END, STOP, DO, WHILE, DO WHILE, FOR, UNTIL, DO UNTIL, REPEAT, END WHILE, END UNTIL, END REPEAT, IF, IF THEN, ELSE, IF ELSE, END IF, THEN, ELSE THEN, ELSE IF, SO, CASE, EQUAL, LT, LE, GT, GE, NOT, TRUE, FALSE, AND, OR, XOR, GET, WRITE, PUT, UPDATE, CLOSE, OPEN, CREATE, DELETE, EXIT, FILE, READ, EOF, EOT.
Example
A bank will grant a loan under the following conditions: if a customer has an account with the bank and has no loan outstanding, the loan will be granted. If a customer has an account with the bank but some amount is outstanding from previous loans, the loan will be granted only if special approval is obtained. Reject all loan applications in all other cases.
IF customer has a Bank Account THEN
    IF Customer has no dues from previous account THEN
        Allow loan facility
    ELSE
        IF Management Approval is obtained THEN
            Allow loan facility
        ELSE
            Reject
        ENDIF
    ENDIF
ELSE
    Reject
ENDIF
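Because Structured English borrows the syntax of structured programming, it translates almost mechanically into code. A minimal Python rendering of the loan policy above follows; the input names (has_account, has_dues, has_approval) are illustrative.

def loan_decision(has_account: bool, has_dues: bool, has_approval: bool) -> str:
    # Mirrors the nested IF/ELSE blocks of the Structured English spec.
    if has_account:
        if not has_dues:
            return "Allow loan facility"
        elif has_approval:
            return "Allow loan facility"
        else:
            return "Reject"
    else:
        return "Reject"

print(loan_decision(True, False, False))   # Allow loan facility
print(loan_decision(True, True, True))     # Allow loan facility (special approval)
print(loan_decision(False, False, False))  # Reject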

(ii) Decision tree: A decision tree is a diagram that represents conditions and actions sequentially, and thus shows which condition to consider first, which second, and so on. Decision trees are constructed in order to help with making decisions. A decision tree is a special form of tree structure; another use of trees is as a descriptive means for calculating conditional probabilities. A decision tree is a tree in which each branch node represents a choice between a number of alternatives, and each leaf node represents a classification or decision.
Example
Let's draw a decision tree to help a financial institution decide whether a person should be offered a loan.
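As a sketch of how such a tree can be represented and evaluated in code, here is one plausible encoding in Python. The attributes and outcomes are illustrative assumptions based on the loan example, not the exact tree in the figure: branch nodes map an attribute to per-answer subtrees, and leaves are decisions.

loan_tree = {
    "attribute": "has_account",
    "yes": {
        "attribute": "has_dues",
        "no": "offer loan",
        "yes": {
            "attribute": "has_approval",
            "yes": "offer loan",
            "no": "reject",
        },
    },
    "no": "reject",
}

def decide(tree, answers):
    """Walk the tree, condition by condition, until a leaf decision is reached."""
    while isinstance(tree, dict):
        tree = tree[answers[tree["attribute"]]]
    return tree

print(decide(loan_tree, {"has_account": "yes", "has_dues": "no"}))  # offer loan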

(iii) Decision table: A decision table is a table consisting of rows and columns and showing the various actions to be taken for different combinations of conditions. Decision tables provide a way to examine, describe, and document decisions using a table. They are used to:
- describe the conditions;
- identify possible decision alternatives;

- indicate which actions should be performed; and
- describe the actions.
Decision tables help analysts ensure completeness and accuracy. Four main problems can occur in developing decision tables: incompleteness, impossible situations, contradictions, and redundancy. Decision tables are typically divided into four quadrants, as shown below:

Conditions | Condition alternatives
Actions    | Action entries

Each condition corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. Many decision tables include in their condition alternatives the "don't care" symbol, a hyphen. Using don't-cares can simplify decision tables, especially when a given condition has little influence on the actions to be performed. In some cases, entire conditions thought to be important initially are found to be irrelevant, because they do not influence which actions are performed. There are two types of decision table: limited entry and extended entry. In a LIMITED ENTRY decision table the conditions are expressed as simple YES/NO questions, whereas in an EXTENDED ENTRY table the conditions have more than two possible states. A decision table helps to simplify and organize logic.
Example
The following example has eight simple rules because the matrix provides each of the three conditions with two possible values. Therefore the total number of rules in this example is (2*2*2) = 8. It is an example of a limited entry table.
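The eight-rule table itself appears in the figure; as a sketch of the general mechanism, the rules of a limited entry table can be encoded as a mapping from condition alternatives (Y/N tuples) to action entries. The three conditions and the actions below reuse the loan example and are illustrative assumptions, not the figure's actual content.

from itertools import product

# Conditions, in order: (has_account, has_dues, has_approval)
ACTIONS = {
    ("Y", "N", "Y"): "grant loan",
    ("Y", "N", "N"): "grant loan",
    ("Y", "Y", "Y"): "grant loan",
    ("Y", "Y", "N"): "reject",
    # Every rule with has_account == "N" rejects, whatever the other conditions:
    **{("N", dues, appr): "reject" for dues, appr in product("YN", repeat=2)},
}

def act(has_account: str, has_dues: str, has_approval: str) -> str:
    return ACTIONS[(has_account, has_dues, has_approval)]

assert len(ACTIONS) == 8   # 2 * 2 * 2 rules: the table is complete
print(act("Y", "N", "N"))  # grant loan

Checking len(ACTIONS) against the full 2*2*2 rule count is one way to catch the "incompleteness" problem mentioned above.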

(iv) Action diagram: Action diagrams show a program overview as well as detailed program logic. They are pseudocode with square brackets of different kinds added in the left margin to emphasize the control structure. Action diagrams have five important characteristics. They are:

- relatively easy to learn and teach;
- suitable for automation;
- supported by an action diagram editor, and particularly suitable for use with 4GLs;
- easily customized to reflect the syntax of different languages;
- a tree-like structure.

Example
The following figure shows the action diagram for checking students' test results. The same tool can be used at different levels: a single technique extends from a general overview down to the program coding level.

(v) Pre/Post condition: Pre/post conditions are a convenient way of describing the function that must be carried out by a process, without saying very much at all about the algorithm or procedure that will be used. This is a particularly useful approach when:
- the user has a tendency to express the policy carried out by a bubble in terms of a particular, idiosyncratic algorithm that he or she has been using for decades;
- the systems analyst is reasonably sure that there are many different algorithms that could be used;
- the systems analyst wants to let the programmer explore several such algorithms, but does not want to get involved in such details himself, and especially does not want to engage in arguments with the user about the relative merits of such algorithms.

There are two main parts of the specification:
Preconditions describe all the things (if any) that must be true before the process begins operating. It is sometimes convenient to think of the process as a sleeping princess, and the preconditions represent the magic kiss that will awaken the process and set it to work. Alternatively, you can think of the preconditions as a guarantee from the user: "I guarantee that when this process is activated, the following things will be true." Typically, the preconditions will describe the following:
- What inputs must be available. These inputs will arrive via a flow connected to the process, as shown on the dataflow diagram.
- What relationships must exist between inputs or within inputs. For ex: order details and shipping details with the same account number.
- What relationships must exist between inputs and data stores. For ex: the precondition might say, "There is a customer order with a customer-account-number matching a customer-account-number in the CUSTOMERS store."
- What relationships must exist between different stores or within a single store. For ex: "There is an order in the ORDERS store whose customer-account-number matches the customer-account-number in the CUSTOMERS store," or "There is an order within the ORDERS store with a shipping-date equal to the current date."
Postconditions describe what must be true when the process has finished doing its job. This can also be thought of as a guarantee: "I guarantee that when the process is finished, the following will be true." Postconditions typically describe the following:
- The outputs that will be generated or produced by the process. For ex: "An invoice will be produced."
- The relationships that will exist between output values and the original input values. For ex: "The invoice-total will be calculated as the sum of unit-item-prices plus shipping-charges."
- The relationships that will exist between output values and values in one or more stores. For ex: "The on-hand-balance in the INVENTORY store will be increased by amount-received, and the new on-hand-balance will be produced as output from this process."
- The changes that will have been made to stores. For ex: "The order will be appended to the ORDERS store," or "The customer record will be deleted from the CUSTOMERS store."
As with all forms of process specification, one should let one's own judgment and the user's reactions be the guide; if the user finds it difficult to read the precondition/postcondition specification, choose another format.
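As a minimal sketch, the invoice example above can be expressed in Python with assertions standing in for the pre- and postconditions; the function and field names are illustrative assumptions, and the process body between the assertions could use any algorithm.

def produce_invoice(unit_item_prices, shipping_charges, customers, account_no):
    # Preconditions: inputs are available, and the order's account number
    # matches a customer in the CUSTOMERS store.
    assert unit_item_prices, "at least one priced item must be input"
    assert account_no in customers, "order must match a known customer"

    invoice_total = sum(unit_item_prices) + shipping_charges

    # Postcondition: the guaranteed relationship between output and inputs.
    assert invoice_total == sum(unit_item_prices) + shipping_charges
    return {"account": account_no, "total": invoice_total}

customers = {"C-101": "Asha"}
print(produce_invoice([250.0, 99.5], 40.0, customers, "C-101"))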

ENTITY-RELATIONSHIP DIAGRAMS (E-R DIAGRAM)
An entity-relationship (ER) diagram is a specialized graphic that illustrates the interrelationships between entities in a database. An Entity Relationship Diagram (ERD) is a snapshot of data structures: ERDs show the entities in a database and the relationships between the tables within that database. It is essential to have one of these if you want to create a good database design. The patterns help focus on how the database actually works, with all of the interactions and data flows. Data models are tools used in analysis to describe the data requirements and assumptions in the system from a top-down perspective. They also set the stage for the design of databases later on in the SDLC. There are three basic elements in ER models:
- Entities are the "things" about which we seek information.
- Attributes are the data we collect about the entities.
- Relationships provide the structure needed to draw information from multiple entities.
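As a rough code analogue of these three elements, the Python sketch below (with illustrative entity names) shows entities as classes, attributes as fields, and a one-to-many relationship held as a foreign key.

from dataclasses import dataclass

@dataclass
class Customer:            # entity
    cust_id: int           # attribute (acts as the primary key)
    name: str              # attribute

@dataclass
class Order:               # entity
    order_id: int
    cust_id: int           # relationship: each order belongs to one customer

customers = {1: Customer(1, "Asha")}
orders = [Order(10, 1), Order(11, 1)]

# Drawing information from multiple entities via the relationship:
for o in orders:
    print(o.order_id, customers[o.cust_id].name)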

Basic E-R notation
Example

STATE TRANSITION DIAGRAMS (STDs)

An STD is a way of describing the time-dependent behaviour of a system. The basic consistency rule is: "A system's behaviour in any state must be the same no matter by which path the state is arrived at." If the entire system is represented with a one-bubble dataflow diagram, a state-transition diagram is used to show the sequence of activities within the system. The state-transition diagram is a powerful modeling tool for describing the required behavior of real-time systems, as well as the human interface portion of many on-line systems.
States:


A state is an observable mode of behaviour of the system. At any time a particular STD can only be in one state, but a system's behaviour may be described by more than one state-transition diagram.
Transition conditions: internal events, or events external to the system.
Transition actions: actions in response to the triggering events, such as one-shot actions, synchronizing between different STDs, and producing control outputs.
Drawing STDs:
- Identify the observable states of the system.
- Select the states with normal behaviour.
- Specify the conditions that mark a transition.
- Specify the actions that produce the observable behaviour in the destination state for each transition.
- If the system is complex, partition the diagram into several STDs.
Example
An example state-transition diagram for the automated teller machine now found in most banks.
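A fragment of such an ATM machine can be sketched in Python as a transition table mapping (current state, event) to the next state. The states and events below are illustrative assumptions, not the exact diagram; note that the machine is in exactly one state at any time, as required above.

TRANSITIONS = {
    ("IDLE", "card_inserted"):          "WAITING_FOR_PIN",
    ("WAITING_FOR_PIN", "valid_pin"):   "MENU",
    ("WAITING_FOR_PIN", "invalid_pin"): "IDLE",   # transition action: eject card
    ("MENU", "withdraw_chosen"):        "DISPENSING",
    ("DISPENSING", "cash_taken"):       "IDLE",
}

def step(state: str, event: str) -> str:
    """Apply one transition; events with no defined transition leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ["card_inserted", "valid_pin", "withdraw_chosen", "cash_taken"]:
    state = step(state, event)
    print(event, "->", state)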

STRUCTURED SYSTEMS DESIGN
Systems design is the evaluation of alternative solutions and the specification of a detailed computer-based solution. It is also called physical design. Whereas systems analysis primarily

focused on the logical, implementation-independent aspects of a system (the requirements), systems design deals with the physical or implementation-dependent aspects of a system (the system's technical specifications). It specifies HOW the system will accomplish the objective. It stresses the following three activities:
- User interface design focuses on designing the interactions between end users and computer systems;
- Data design focuses on the design of the logical structure of the database and files to be used by the proposed information system;
- Process design focuses on the design of the software resources, i.e., the programs and procedures needed by the proposed information system.
A system is designed with the following objectives:
- Practicality: the design should be use-oriented.
- Flexibility: the system must be responsive to change.
- Efficiency: perform jobs within the specified time (considering throughput time, response time and run time).
- Security: relates to hardware reliability, physical security of data, and detection and prevention of fraud and abuse of data.
Structured design is a process-oriented technique for breaking up a large program into a hierarchy of modules that result in a computer program that is easier to implement and maintain (change). Synonyms are top-down program design and structured programming. A program is designed as a top-down hierarchy of modules. A module is a group of instructions: a paragraph, block, subprogram, or subroutine. The top-down structure of these modules is developed according to various design rules and guidelines. Structured design is considered a process technique because its emphasis is on the PROCESS building blocks in our information system; specifically, software processes. Structured design seeks to factor a program into a top-down hierarchy of modules that have the following properties:
- Modules should be highly cohesive; that is, each module should accomplish one and only one function. Theoretically this makes the modules reusable in future programs.
- Modules should be loosely coupled; in other words, modules should be minimally dependent on one another. This minimizes the effect that future changes in one module will have on other modules.
Structured design was developed by Ed Yourdon and Larry Constantine. This technique deals with the size and complexity of a program by breaking up the program into a hierarchy of modules that result in a computer program that is easier to implement and maintain. System design is carried out at two levels, namely:
- Conceptual level (conceptual design, external design or general design): in the conceptual design, the feasibility of meeting the management objectives for the MIS is assessed and a broad-brush picture of the system is painted. It is a prerequisite for a detailed design.
- Physical level (physical design, internal design or detailed design): the performance requirements specified by the conceptual design become inputs to the detailed design phase, in which they are further refined, detailed and finalized to be called the system specifications.
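A minimal sketch of the cohesion and coupling properties described above, using illustrative payroll functions in Python: each module does exactly one thing (high cohesion), and the modules exchange only plain values rather than sharing internal data (loose coupling).

def gross_pay(hours: float, rate: float) -> float:
    """Cohesive: computes gross pay and nothing else."""
    return hours * rate

def tax_for(gross: float, tax_rate: float = 0.1) -> float:
    """Cohesive: computes tax on a given gross amount."""
    return gross * tax_rate

def compute_net_pay(hours: float, rate: float) -> float:
    """Coordinates the two modules; coupling is limited to passed values."""
    g = gross_pay(hours, rate)
    return g - tax_for(g)

print(compute_net_pay(160, 50.0))  # 7200.0

Because tax_for depends only on its arguments, a future change to how gross pay is computed cannot break it; that is the practical payoff of loose coupling.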

Structured Design tools
Structured design is the art of designing the components of a system, and the interrelationships between those components, in the best possible way. It is the process of deciding how the components are to be interconnected to solve some well-specified problem. The tools of structured design are:
- Structure charts
- Flow charts
- Nassi-Shneiderman diagrams
- Transform analysis
- Transaction analysis
STRUCTURE CHARTS
A Structure Chart (SC), in software engineering and organizational theory, is a chart which shows the breakdown of a system into its lowest manageable levels. It is used to show the hierarchical arrangement of the modules in a structured program. Each rectangular box represents a module; the name of the module is written inside the box. An arrow joins two modules that have an invocation relationship. A structure chart is a top-down modular design tool, constructed of squares representing the different modules in the system and lines that connect them. The lines represent the connection and/or ownership between activities and sub-activities, as they are used in organization charts. As a design tool, structure charts aid the programmer in dividing and conquering a large software problem, that is, recursively breaking a problem down into parts that are small enough to be understood by a human brain. This process is called top-down design, or functional decomposition. A structure chart depicts the size and complexity of the system, the number of readily identifiable functions and modules within each function, and whether each identifiable function is a manageable entity or should be broken down into smaller components. The primary tool used in structured design is the structure chart. Structure charts are used to graphically depict a modular design of a program: specifically, they show how the program has been partitioned into smaller, more manageable modules, the hierarchy and organization of those modules, and the communication interfaces between modules. Structure charts, however, do not show the internal procedures performed by a module or the internal data used by a module.
Types of Modules
AFFERENT MODULE - Accepts data from subordinate modules and passes this data to superordinates.
EFFERENT MODULE - Accepts data from a superordinate and passes it to subordinate modules.
TRANSFORM MODULE - Exists solely to transform data into some other form. Takes data from a superordinate, transforms it, and passes the transformed data back to the superordinate.

COORDINATE MODULE - Takes data from one or more subordinates and passes it to other subordinate modules.

Example
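The example figure is not reproduced here; as a stand-in, this Python sketch (module names are hypothetical) suggests how the four module types cooperate in a top-down hierarchy:

def read_record():
    # AFFERENT: obtains data and passes it upward.
    return "  widget order  "  # stands in for reading from a file or user

def format_record(rec):
    # TRANSFORM: takes data from its superordinate and returns it transformed.
    return rec.strip().upper()

def write_record(rec):
    # EFFERENT: accepts data from its superordinate and passes it outward.
    print(rec)

def main():
    # COORDINATE: routes data between its subordinate modules.
    write_record(format_record(read_record()))

main()  # prints: WIDGET ORDER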

FLOW CHART
A flow chart, or flow diagram, is a graphical representation of a process or system that details the sequence of steps required to create output. A typical flow chart uses a set of basic symbols to represent various functions, and shows the sequence and interconnection of functions with lines and arrows. Flow charts are easy-to-understand diagrams showing how the steps in a process fit together. This makes them useful tools for communicating how processes work, and for clearly documenting how a particular job is done. A flow chart can therefore be used to:
define and analyze processes;
build a step-by-step picture of the process for analysis, discussion, or communication; and
define, standardize or find areas for improvement in a process.
There are four general types of flow charts, namely:
Document flowcharts, showing the flow of documents through a system
Data flowcharts, showing the flows of data in a system
System flowcharts, showing controls at a physical or resource level
Program flowcharts, showing the controls in a program within a system.

Common flowchart symbols, by symbol name (alternate shape names in parentheses) and description:

Terminator (Terminal Point, Oval): Shows the start and stop points in a process. When used as a Start symbol, a terminator depicts the trigger action that sets the process flow into motion.
Process: Shows a process or action step. This is the most common symbol in both process flowcharts and business process maps.
Predefined Process (Subroutine): A marker for another process step, or series of process flow steps, that is formally defined elsewhere. This shape commonly depicts sub-processes (or subroutines in programming flowcharts). If the sub-process is considered "known" but not actually defined in a process procedure, work instruction, or some other process flowchart or documentation, it is best not to use this symbol, since it implies a formally defined process.
Alternate Process: As the shape name suggests, used when the process flow step is an alternate to the normal process step. Flow lines into an alternate process flow step are typically dashed.
Decision: Indicates a question or branch in the process flow. Typically used when there are two options (Yes/No, Go/No-Go, etc.).
Data (I/O): Indicates inputs to and outputs from a process. As such, the shape is more often referred to as an I/O shape than a Data shape.
Document: Pretty self-explanatory - shapes any process flow step that produces a document.
Multi-Document: Same as Document, except with multiple documents. This shape is not as commonly used as the Document shape, even when multiple documents are implied.
Preparation: As the name states, any process step that is a preparation step, such as a set-up operation.
Display: Indicates a process flow step where information is displayed to a person (e.g., PC user, machine operator).
Manual Input: Shows process flow steps where the operator/user is prompted for information that must be manually input into a system.
Manual Operation: Shows which process steps are not automated. In data processing flowcharts, this shape indicates a looping operation along with a loop limit symbol (which is not supported by Microsoft Office, but a Manual Operation symbol rotated 180 degrees will do the trick).
Card: The companion to the punched tape shape. This shape is seldom used.
Punched Tape: If you're very good at stretching all the life out of a machine, you may still have use for the Punched Tape symbol - used for input into old computers and CNC machines.
Connector (Inspection): In process flowcharts, this symbol is typically small and is used as a connector to show a jump from one point in the process flow to another. Connectors are usually labeled with capital letters (A, B, AA) to show matching jump points. They are handy for avoiding flow lines that cross other shapes and flow lines, and for jumping to and from a sub-process defined in a separate area from the main flowchart. In business process maps, this symbol is full-sized and shows an inspection point in the process flow.

Advantages of using flowcharts
Communication: Flowcharts are a better way of communicating the logic of a system to all concerned.
Effective analysis: With the help of a flowchart, a problem can be analysed more effectively.
Proper documentation: Program flowcharts serve as good program documentation, which is needed for various purposes.
Efficient coding: Flowcharts act as a guide or blueprint during the systems analysis and program development phases.
Proper debugging: Flowcharts help in the debugging process.
Efficient program maintenance: Maintenance of an operating program becomes easier with the help of its flowchart; it helps the programmer to concentrate effort on the part that needs changing.

Limitations of using flowcharts
Complex logic: Sometimes the program logic is quite complicated; in that case the flowchart becomes complex and clumsy.
Alterations and modifications: If alterations are required, the flowchart may need to be redrawn completely.
Reproduction: As flowchart symbols cannot be typed, reproduction of a flowchart becomes a problem.

Loss of essentials: The essentials of what is done can easily be lost in the technical details of how it is done.

Example: a flow chart for calculating n! (n factorial).
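The flow chart itself is not reproduced here, but the same logic can be sketched in Python; the comments map each statement to the flowchart symbol it would occupy:

def factorial(n):
    result = 1
    i = 1              # process: initialize counter
    while i <= n:      # decision: "i <= n?"
        result *= i    # process: multiply running product
        i += 1         # process: increment counter
    return result      # terminator: stop and output result

print(factorial(5))    # 120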

NASSI-SHNEIDERMAN DIAGRAMS
Nassi-Shneiderman (NS) diagrams (developed by Ike Nassi and Ben Shneiderman) illustrate algorithms and program functions in the manner of a flowchart. The main purpose of a Nassi-Shneiderman diagram is to create a logical structure (a blueprint) for the program. These diagrams are generally more organized, more structured, and more comprehensible than the typical flowchart; for that reason, they are sometimes preferred as a tool for creating process specifications. However, they do still require a non-trivial amount of graphics, and it is not clear that the graphics add that much value.

Nassi-Shneiderman Notations

Process: A process describes a program function as pseudocode. You can stack processes on top of each other to illustrate a sequence.

Parallel Process: Processes that are executed at the same time are shown inside a trapezium, created by drawing two diagonal lines in the upper and lower borders of the box.

Loop: Use loop notations when processes are repeated until a certain condition is met.

Decision: The selection symbol is a rectangle divided into three parts by diagonal lines. Write the condition or decision in the uppermost triangle and place the two possible outcomes on either side of the decision. The two outcomes don't need to be the same size.

Case statement: List multiple cases next to each other in a table format.

TRANSFORM ANALYSIS (OR) TRANSFORMATION ANALYSIS

Transform analysis is a design method appropriate for systems whose primary task is to receive a flow of similar input transactions and turn them into an output flow of data, after having taken some action on each of the input transactions. It is a design strategy for deriving initial structural designs that are good with respect to modularity. The primary purpose of the transform analysis strategy is to identify:
the primary processing functions of the system
the high-level inputs to those functions
the high-level outputs of those functions.
This strategy requires a dataflow model of the problem to be created first using Structured Analysis. Transform analysis is the development of a structure chart based on a DFD that describes the input-process-output data flow.
Afferent data flow: incoming data flow
Efferent data flow: outgoing data flow
Central transform: the set of DFD processes that are located between the input and output processes.

Example

TRANSACTION ANALYSIS Transaction analysis is another design strategy for deriving structure charts from dataflow diagrams. This strategy is useful when a DFD diagram contains processes that split an input flow into several discrete output flows. Transaction analysis can be helpful in PART of the design process.


A TRANSACTION is defined (loosely) to be any element of data, control, signal, event, or change of state that causes, triggers, or initiates some action or sequence of actions. This strategy simply recognizes that data flow graphs of this form can be mapped into a particular modular structure. A transaction center (a module) must be able to:
get (obtain or respond to) transactions in raw form
analyze each transaction to determine its type
dispatch each type of transaction and complete the processing of each transaction type.

Steps in Transaction analysis
STEP 1: Identify the sources of transactions. These may be apparent from the inputs to the DFD, or the designer may recognize afferent, transform, or efferent modules that generate transactions (detected during transform analysis).
STEP 2: Specify the appropriate transaction-centered organization; more than one structure is possible.
STEP 3: Identify the transactions and their defining actions. Define carefully the processing that must take place for each transaction.
STEP 4: Note potential situations in which modules can be combined. Try to detect when an intermediate-level module can be created from a functionally cohesive group of low-level modules; perhaps a module can be called by different transactions.
STEP 5: For each transaction, or cohesive collection of transactions, specify a transaction module to completely process it. Avoid the temptation to group the processing of several transactions into one module if the resulting module would have low cohesion.
STEP 6: For each action in a transaction, specify an action module subordinate to the appropriate transaction module(s). This is a factoring step (factoring is the process of breaking functional components into subcomponents; it should be repeated until all bubbles in the DFD are represented in the structure chart).
STEP 7: For each detailed step in an action module, specify an appropriate detail module subordinate to any action module that needs it. This continues the factoring step; several levels of detail modules are possible for large systems.

Transform analysis will be the guiding influence on the designer for most systems; there are numerous situations in which additional strategies such as transaction analysis can be used to supplement, and occasionally even replace, it. A great deal of the usefulness of transaction analysis depends on how a transaction is defined.
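A minimal Python sketch of a transaction center (the transaction types and handler names are hypothetical) showing the three duties listed above: get a raw transaction, analyze its type, and dispatch it for processing.

def process_order(data):
    print("order:", data)      # transaction module for one type

def process_payment(data):
    print("payment:", data)    # transaction module for another type

DISPATCH = {"ORDER": process_order, "PAYMENT": process_payment}

def transaction_center(raw):
    kind, _, data = raw.partition(":")  # analyze the transaction type
    handler = DISPATCH.get(kind)
    if handler is None:
        raise ValueError("unknown transaction type: " + kind)
    handler(data)                       # dispatch to the matching module

transaction_center("ORDER:42 widgets")  # prints: order: 42 widgets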

Example

STRUCTURED PROGRAMMING
Structured programming (sometimes known as modular programming) is a subset of procedural programming that enforces a logical structure on the program being written, to make it more efficient and easier to understand and modify. Structured programming frequently employs a top-down design model, in which developers map out the overall program structure into separate subsections. A defined function or set of similar functions is coded in a separate module or submodule, which means that code can be loaded into memory more efficiently and that modules can be reused in other programs.
Almost any language can use structured programming techniques to avoid the common pitfalls of unstructured languages. Unstructured programming must rely upon the discipline of the developer to avoid structural problems, and as a consequence may result in poorly organized programs. Most modern procedural languages include features that encourage structured programming. A structured program may be written out using pseudocode prior to being translated into whatever programming language the program is to be written in. This pseudocode forms part of the program specification and is readable by anyone who understands structured programming, regardless of whether or not they know the specific language in which the program has been written.

Sequence

Structured programming provides a number of constructs that are used to define the sequence in which the program statements are to be executed.

Consecutive

Statements within a structured program are normally executed in the same sequence as they are listed within the source code. If a code fragment consists of three statements following one another, then statement one will execute first, statement two second, and statement three last. To change from this straight consecutive execution sequence requires the use of one of the other structured programming constructs described below.

Block
Statements may be blocked together. A block of statements may be substituted wherever a single statement is allowed. The symbol or keyword used to indicate the start and end of each block differs depending on the programming language used.

Subroutine

A subroutine is a code segment that has been separated from the preceding and following code. A subroutine usually consists of a series of statements that perform a particular task. The task performed is usually identified by the name given to the subroutine. Once a subroutine has been defined it can then be called from one or more places within the program. This allows a program to perform the same task a number of times without having to repeat the same code. A single call statement replaces (stands in for) all of the statements contained within the subroutine. Parameters can be passed to a subroutine that will supply the data required to perform the task and perhaps to return values for use by the subsequent processing. A subroutine can either be compiled with (internal to) the calling program or separately (external).

Function
A function is similar to a subroutine except that a function always returns a value to the calling program. A function is usually called implicitly, by embedding the function call into another statement in place of the returned value, rather than by a separate call statement. A function works in the same way as a subroutine except in the way that it is called. A function can be compiled internally or externally. Some programming languages also provide functions built into the language itself.
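In Python both constructs are written with def, so the difference is one of usage; this hypothetical sketch contrasts a subroutine invoked by a separate call statement with a function embedded in an expression in place of its returned value:

def print_greeting(name):
    # Subroutine-style: performs a task, returns no useful value.
    print("Hello,", name)

def area(width, height):
    # Function: returns a value to the caller.
    return width * height

print_greeting("Ada")                      # explicit call statement
print("Total:", area(3, 4) + area(2, 5))   # calls embedded in an expression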

Loops
Loops allow the same statement to be executed a number of times in succession. There are three different loop constructs that can be used, depending on whether the number of repetitions is known and also (where the number of repetitions is not known and is dependent on a condition) whether the loop is allowed to be bypassed if the termination condition is met before the loop is first executed.

For
A for loop allows a statement to be executed a specified number of times. The for loop begins with a loop control variable assigned a specific initial value. This control variable is then incremented (or decremented) by a specified amount each time around the loop until a specified terminating value is reached, at which time the statement following the loop is executed.

While

A while loop allows a statement to be executed until a given condition is met. If the condition is met prior to executing the loop, then the loop will not be executed. As soon as the condition is met, execution continues with the statement following the loop.

Until

An until loop also allows a statement to be executed until a given condition is met but the condition will not be tested until after the loop has been executed once. Once the condition is met the statement following the loop will be executed.
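A short Python sketch of the three loop constructs; Python has no built-in until loop, so the test-at-the-bottom behaviour is emulated with break:

for i in range(3):       # FOR: executes a specified number of times
    print("for pass", i)

n = 3
while n > 0:             # WHILE: condition tested before each pass,
    print("while", n)    # so the body may execute zero times
    n -= 1

m = 0
while True:              # UNTIL: body executes at least once;
    print("until", m)    # the condition is tested after the body
    m += 1
    if m >= 3:
        break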

DATABASE MANAGEMENT SYSTEM
A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of the databases of an organization and its end users, with the computer as the platform. It allows organizations to place control of organization-wide database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as a database. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, a user can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization of a database and to access and use the information within the database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency control, and restoring databases.

Overview
A DBMS is a set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types. The DBMS accepts requests for data from an application program and instructs the operating system to transfer the appropriate data. The queries and responses must be submitted and received according to a format that conforms to one or more applicable protocols. When a DBMS is used, information systems can be changed much more easily as the organization's

information requirements change. New categories of data can be added to the database without disruption to the existing system.
Database servers are computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. Sometimes DBMSs are built around a private multitasking kernel with built-in networking support, although nowadays these functions are left to the operating system.

1970s: Relational DBMS
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility, which was becoming increasingly useful. In 1970, he wrote a number of papers outlining a new approach to database construction, culminating in the groundbreaking A Relational Model of Data for Large Shared Data Banks. In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records, as in CODASYL, Codd's idea was to use a "table" of fixed-length records. A linked-list system would be very inefficient when storing "sparse" databases, where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables, with optional elements being moved out of the main table to where they would take up room only if needed.
In the relational model, related records are linked together with a "key". For instance, a common use of a database system is to track information about users: their names, login information, and various addresses and phone numbers. In the navigational approach, all of these data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
Linking the information back together is the key to this system. In the relational model, some bit of information is used as a "key", uniquely defining a particular record. When information is being collected about a user, information stored in the optional (or related) tables is found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user are recorded with the login name as their key. This "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.

DBMS building blocks
A DBMS includes four main parts: modeling language, data structure, database query language, and transaction mechanisms:

Components of DBMS

DBMS Engine: accepts logical requests from the various other DBMS subsystems, converts them into their physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.
Data Definition Subsystem: helps the user create and maintain the data dictionary and define the structure of the files in a database.
Data Manipulation Subsystem: helps the user add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database. It allows the user to specify logical information requirements.
Application Generation Subsystem: contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction. It facilitates easy-to-use data entry screens, programming languages, and interfaces.
Data Administration Subsystem: helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

(iii) Network Model The network model (defined by the CODASYL specification) organizes data using two fundamental constructs, called records and sets. Records contain fields (which may be organized hierarchically, as in the programming language COBOL). Sets (not to be confused with mathematical sets) define one-to-many relationships between records: one owner, many members. A record may be an owner in any number of sets, and a member in any number of sets. The network model is a variation on the hierarchical model, to the extent that it is built on the concept of multiple branches (lower-level structures) emanating from one or more nodes (higher-level structures), while the model differs from the hierarchical model in that branches can be connected to multiple nodes. The network model is able to represent redundancy in data more efficiently than in the hierarchical model.

(iv) Relational Model
The relational model was introduced by E. F. Codd in 1970 as a way to make database management systems more independent of any particular application. It is a mathematical model defined in terms of predicate logic and set theory. The products that are generally referred to as relational databases in fact implement a model that is only an approximation to the mathematical model defined by Codd.
Three key terms are used extensively in relational database models: relations, attributes, and domains. A relation is a table with columns and rows. The named columns of the relation are called attributes, and the domain is the set of values the attributes are allowed to take. The basic data structure of the relational model is the table, where information about a particular entity (say, an employee) is represented in columns and rows (also called tuples). Thus, the "relation" in "relational database" refers to the various tables in the database; a relation is a set of tuples. The columns enumerate the various attributes of the entity (the employee's name, address or phone number, for example), and a row is an actual instance of the entity (a specific employee) that is represented by the relation. As a result, each tuple of the employee table represents various attributes of a single employee.
All relations (and, thus, tables) in a relational database have to adhere to some basic rules to qualify as relations. First, the ordering of columns is immaterial in a table. Second, there can't be identical tuples or rows in a table. And third, each tuple will contain a single value for each of its attributes.
A relational database contains multiple tables, each similar to the one in the "flat" database model. One of the strengths of the relational model is that, in principle, any value occurring in two different records (belonging to the same table or to different tables) implies a relationship among those two records. Yet, in order to enforce explicit integrity constraints, relationships between records in tables can also be defined explicitly, by identifying or non-identifying parent-child relationships characterized by assigning cardinality (1:1, (0)1:M, M:M).
Tables can also have a designated single attribute or a set of attributes that can act as a "key", which can be used to uniquely identify each tuple in the table. A key that can be used to uniquely identify a row in a table is called a primary key. Keys are commonly used to join or combine data from two or more tables. For example, an Employee table may contain a column named Location which contains a value that matches the key of a Location table. Keys are also critical in the creation of indexes, which facilitate fast retrieval of data from large tables. Any column can be a key, or multiple columns can be grouped together into a compound key. It is not necessary to define all the keys in advance; a column can be used as a key even if it was not originally intended to be one.
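As an illustration of keys and joins, here is a minimal sketch using Python's built-in sqlite3 module; it follows the Employee/Location example above, though the exact column names are assumptions:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE location (location_id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE employee (emp_id INTEGER PRIMARY KEY,
                           name TEXT,
                           location_id INTEGER REFERENCES location);
    INSERT INTO location VALUES (1, 'Chennai');
    INSERT INTO employee VALUES (100, 'Kumar', 1);
""")
rows = con.execute("""
    SELECT e.name, l.city
    FROM employee e JOIN location l ON e.location_id = l.location_id
""").fetchall()
print(rows)  # [('Kumar', 'Chennai')]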

RDBMS: Relational Database Management System
A relational database management system (RDBMS) is a database management system (DBMS) that is based on the relational model as introduced by E. F. Codd. Most popular commercial and open source databases currently in use are based on the relational model. A short definition of an RDBMS may be: a DBMS in which data is stored in the form of tables and the relationships among the data are also stored in the form of tables.

Normalization
A relational database aims to achieve normalization of data. Normalization helps to reduce redundancy and update anomalies. For normalization of data there are normal forms such as domain/key normal form, first normal form, second normal form, third normal form, Boyce-Codd normal form, fourth normal form, and fifth normal form.

Historical usage of the term
E. F. Codd introduced the term in his seminal paper published in 1970. In this paper and later papers he defined what he meant by relational. One well-known definition of what constitutes a relational database system is Codd's 12 rules. However, many of the early implementations of the relational model did not conform to all of Codd's rules, so the term gradually came to describe a broader class of database systems. At a minimum, these systems:

presented the data to the user as relations (a presentation in tabular form, i.e. as a collection of tables with each table consisting of a set of rows and columns, can satisfy this property); and
provided relational operators to manipulate the data in tabular form.

The first systems that were relatively faithful implementations of the relational model were from the University of Michigan (Micro DBMS, 1969) and from the IBM UK Scientific Centre at Peterlee (1970-72, with a follow-on in 1973-79). The first system sold as an RDBMS appeared in 1978. The most popular definition of an RDBMS is a product that presents a view of data as a collection of rows and columns, even if it is not based strictly upon relational theory. By this definition, RDBMS products typically implement some but not all of Codd's 12 rules.
A second, theory-based school of thought argues that if a database does not implement all of Codd's rules (or the current understanding of the relational model, as expressed by Christopher J. Date, Hugh Darwen and others), it is not relational. This view, shared by many theorists and other strict adherents to Codd's principles, would disqualify most DBMSs as not relational. For clarification, they often refer to some RDBMSs as Truly-Relational Database Management Systems (TRDBMS), naming others Pseudo-Relational Database Management Systems (PRDBMS). As of 2009, commercial relational DBMSs employ SQL as their query language. Alternative query languages have been proposed and implemented, notably the pre-1996 implementation of

Berkeley Ingres QUEL. With the standardization of SQL, both commercial and open source DBMSs have adopted some degree of standards compliance.

(v) Object-Oriented Model
In recent years, the object-oriented paradigm has been applied to database technology, creating a new programming model known as object databases. These databases attempt to bring the database world and the application programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example, as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce the key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.
A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application programming end, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not have the ability to find objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.

Structured Query Language (SQL)
SQL is a programming language for querying and modifying data and managing databases. SQL was standardized first by ANSI and later by ISO. Most database management systems implement a majority of one of these standards and add their proprietary extensions. SQL allows the retrieval, insertion, updating, and deletion of data. A database management system also includes management and administrative functions. Common criticisms of SQL include a perceived lack of cross-platform portability between vendors, inappropriate handling of missing data, and unnecessarily complex and occasionally ambiguous language grammar and semantics.
SQL (Structured Query Language) is the set of commands used to create and manipulate the structures in a relational database, and it is the most powerful way of retrieving data from the database. SQL supports different types of operations; based on this, its commands can be broadly classified into the following types (a short example follows this list):
Data Definition Language
Data Manipulation Language
Transaction Control Language
Session Control Language
System Control Language
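A minimal sketch of the first three command categories using Python's built-in sqlite3 module (the product table is hypothetical; session and system control commands are vendor-specific and omitted):

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# DDL (Data Definition Language): define database structures.
cur.execute("CREATE TABLE product (prod_id INTEGER PRIMARY KEY, cost REAL)")

# DML (Data Manipulation Language): insert, update, delete, retrieve rows.
cur.execute("INSERT INTO product VALUES (1, 9.99)")
cur.execute("UPDATE product SET cost = 8.99 WHERE prod_id = 1")
print(cur.execute("SELECT * FROM product").fetchall())  # [(1, 8.99)]

# TCL (Transaction Control Language): make the changes permanent.
con.commit()  # the Python-level equivalent of SQL COMMIT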

NORMALIZATION
Normalization is a process or set of guidelines used to optimally design a database to reduce redundant data. It is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics (insertion, update, and deletion anomalies) that could lead to a loss of data integrity. Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both of these are worthy goals, as they reduce the amount of space a database consumes and ensure that data is logically stored. Normalization is a technique that is used when designing and redesigning a database.

The Raw Database
A database that is not normalized may include data that is contained in one or more different tables for no apparent reason. This could be bad for security, disk space usage, speed of queries, efficiency of database updates, and, maybe most importantly, data integrity. A database before normalization is one that has not been broken down logically into smaller, more manageable tables. Fig. 4 represents the raw database.

Logical Database Design
Any database should be designed with the end user in mind. Logical database design, also referred to as the logical model, is the process of arranging data into logical, organized groups of objects that can easily be maintained. The logical design of a database should reduce data repetition or go so far as to completely eliminate it. Naming conventions used in a database should also be standard and logical.

What Are the End User's Needs?
The needs of the end user should be one of the top considerations when designing a database. Remember that the end user is the person who ultimately uses the database. There should be ease of use through the user's front-end tool (a client program that allows a user access to a database), but this, along with optimal performance, cannot be achieved if the user's needs are not taken into consideration. Some user-related design considerations include the following:
What data should be stored in the database?
How will the user access the database?
What privileges does the user require?
How should the data be grouped in the database?
What data is the most commonly accessed?
How is all data related in the database?
What measures should be taken to ensure accurate data?

Fig. 4 Raw database

The Normal Forms
The database community has developed a series of guidelines for ensuring that databases are normalized. These are referred to as normal forms and are numbered from one (the lowest form of normalization, referred to as first normal form or 1NF) through five (fifth normal form or 5NF). In practical applications, only 1NF, 2NF, and 3NF, along with the occasional 4NF, are seen; fifth normal form is very rarely seen. The stages of normalization progress from the least restrictive (first normal form) through the most restrictive (fifth normal form); generally, most database designers do not attempt to implement anything higher than third normal form (3NF) or Boyce-Codd normal form. It is important to point out that these are guidelines and guidelines only; occasionally it becomes necessary to stray from them to meet practical business requirements.

First Normal Form (1NF)
A relation is said to be in first normal form (1NF) if and only if each attribute of the relation is atomic. More simply, to be in 1NF, each column must contain only a single value and each row must contain the same columns. First normal form sets the very basic rules for an organized database:
Eliminate duplicative columns from the same table.
Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).
The objective of the first normal form is to divide the base data into logical units called tables. When each table has been designed, a primary key is assigned to most or all tables. Examine Fig. 4.1, which illustrates how the raw database shown in the previous figure has been redeveloped using the first normal form. You can see that to achieve the first normal form, data had to be broken into logical units of related information, each having a primary key and ensuring that there are no repeated groups in any of the tables. Instead of one large table, there are now smaller, more manageable tables: EMPLOYEE_TBL, CUSTOMER_TBL, and PRODUCTS_TBL. The primary keys are normally the first columns listed in a table, in this case EMP_ID, CUST_ID, and PROD_ID.

Fig. 4.1 1NF
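The figure is not reproduced here; as a stand-in, this hypothetical customer/phone sketch (SQL run through Python's sqlite3 module) shows the 1NF idea of atomic values and no repeating groups:

import sqlite3

# Un-normalized: one 'phones' column crammed with 'phone1, phone2, ...'.
# In 1NF every column holds a single atomic value, so the repeating
# group moves into its own table keyed back to the customer.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer_tbl (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE customer_phone (
        cust_id INTEGER REFERENCES customer_tbl,
        phone   TEXT,
        PRIMARY KEY (cust_id, phone));
""")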

Second Normal Form (2NF)
Second normal form (2NF) further addresses the concept of removing duplicative data:
Meet all the requirements of the first normal form.
Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
Create relationships between these new tables and their predecessors through the use of foreign keys.
The objective of the second normal form is to take data that is only partly dependent on the primary key and move that data into another table. Fig. 4.2 illustrates the second normal form, which is derived from the first normal form by further breaking two tables down into more specific units.
EMPLOYEE_TBL is split into two tables called EMPLOYEE_TBL and EMPLOYEE_PAY_TBL. Personal employee information is dependent on the primary key (EMP_ID), so that information remained in EMPLOYEE_TBL (EMP_ID, LAST_NAME, FIRST_NAME, MIDDLE_NAME, ADDRESS, CITY, STATE, ZIP, PHONE, and PAGER). On the other hand, the information that is only partly dependent on the EMP_ID is used to populate EMPLOYEE_PAY_TBL (EMP_ID, POSITION, POSITION_DESC, DATE_HIRE, PAY_RATE, and DATE_LAST_RAISE). Notice that both tables contain the column EMP_ID. This is the primary key of each table and is used to match corresponding data between the two tables.
CUSTOMER_TBL is split into two tables called CUSTOMER_TBL and ORDERS_TBL. What took place is similar to what occurred in EMPLOYEE_TBL: columns that were partly dependent on the primary key were moved to another table. The order information for a customer is dependent on each CUST_ID, but does not directly depend on the general customer information in the original table.

Fig. 4.2 2NF
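A sketch of the EMPLOYEE_TBL / EMPLOYEE_PAY_TBL split as SQL DDL run through Python's sqlite3 module; the column list is abridged from the text above, and the foreign-key clause is an assumption about how the EMP_ID match would be declared:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee_tbl (
        emp_id     INTEGER PRIMARY KEY,  -- fully determines these columns
        last_name  TEXT,
        first_name TEXT,
        city       TEXT);
    CREATE TABLE employee_pay_tbl (
        emp_id     INTEGER REFERENCES employee_tbl,  -- matching key
        position   TEXT,
        pay_rate   REAL);
""")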

Third Normal Form (3NF)
Third normal form (3NF) goes one large step further:
Meet all the requirements of the second normal form.
Remove columns that are not dependent upon the primary key.
Remember, these normalization guidelines are cumulative: for a database to be in 2NF it must first fulfill all the criteria of 1NF, and to be in 3NF it must fulfill those of 2NF. The third normal form's objective is to remove data in a table that is not dependent on the primary key. In the third normal form, EMPLOYEE_PAY_TBL is split into two tables, one containing the actual employee pay information and the other containing the position descriptions, which really do not need to reside in EMPLOYEE_PAY_TBL. The POSITION_DESC column is totally independent of the primary key, EMP_ID.

Fig. 4.3 3NF

Benefits of Normalization
Normalization provides numerous benefits to a database. Some of the major benefits include the following:
Greater overall database organization
Reduction of redundant data
Data consistency within the database
A much more flexible database design
A better handle on database security
Organization is brought about by the normalization process, making everyone's job easier, from the user who accesses tables to the database administrator (DBA) who is responsible for the overall management of every object in the database. Data redundancy is reduced, which simplifies data structures and conserves disk space. Because duplicate data is minimized, the possibility of inconsistent data is greatly reduced: for example, in one table an individual's name could read STEVE SMITH, whereas the name of the same individual reads STEPHEN R. SMITH in another table. Because the database has been normalized and broken into smaller tables, you are provided with more flexibility in modifying existing structures: it is much easier to modify a small table with little data than to modify one big table that holds all the vital data in the database. Lastly, security is also provided in the sense that the DBA can grant access to limited tables to certain users; security is easier to control when normalization has occurred. Data integrity is the assurance of consistent and accurate data within a database.

Limitations of Normalization
Some of the major limitations of the normalization process include the following:
More complicated SQL is required for multi-table sub-queries and joins.
There is extra work for the DBMS, which can slow down the application.

Denormalizing a Database
Denormalization is the process of taking a normalized database and modifying table structures to allow controlled redundancy for increased database performance. Attempting to improve performance is the only reason ever to denormalize a database. A denormalized database is not the same as a database that has not been normalized. Denormalizing a database is the process of taking the level of normalization within the database down a notch or two. Remember,

normalization can actually slow performance with its frequently occurring table join operations. Denormalization may involve recombining separate tables or creating duplicate data within tables to reduce the number of tables that need to be joined to retrieve the requested data, which results in less I/O and CPU time.

OODBMS (Object Database) Advantages
Using an OODBMS / ODBMS (object database management system, object-oriented data management system) for data storage brings powerful advantages to applications that use complex object models, have high concurrency requirements, and work with large data sets. It is difficult, time-consuming, expensive in development, and expensive at run time to map objects into a relational database, and performance can suffer. Versant's object database solutions (ODBMS) are designed to handle the navigational access, seamless data distribution, and scalability often required by these applications:
Object Database (OODBMS)
FastObjects .NET (OODBMS)

Why OODBMS solutions instead of a traditional RDBMS? Where data handling requirements are simple and suited to rigid row and column structures, an RDBMS might be an appropriate solution. However, for many applications the most challenging aspect today is controlling the inherent complexity of the subject matter itself - the complexity must be tamed, and tamed in a way that enables continual evolution of the application as the environment and needs change. For these applications, an OODBMS is the best answer:

COMPLEX (INTER-)RELATIONSHIPS
If there are a lot of many-to-many relationships, tree structures or network (graph) structures, then Versant's OODBMS solutions will handle those relationships much faster than a relational database.

COMPLEX DATA
For many applications, the most challenging aspect is controlling the inherent complexity of the subject matter itself. Architectures that mix technical needs such as persistence (and SQL) with the domain model are an invitation to disaster. Versant's OODBMS solutions let you develop using objects that need only contain the domain behaviour, freeing you from persistence concerns.

NO MAPPING LAYER
Mapping objects into a relational database is difficult, time-consuming, and expensive both in development and at run time. Versant's OODBMS solutions store objects as objects, and are designed to store many-to-many, tree and network relationships as named bi-directional associations without the need for JOIN tables. Hence, Versant's object database solutions save programming time, and objects can be stored and retrieved faster. Modern O/R mapping

tools may simplify many mapping problems; however, they don't provide the seamless data distribution or the performance of Versant's OODBMS solutions.

FAST AND EASY DEVELOPMENT, ABILITY TO COPE WITH CONTINUOUS EVOLUTION
The complexity of telecommunications infrastructure, transportation networks, simulations, financial instruments and other domains must be tamed, in a way that enables continual evolution of the application as the environment and needs change. Because the persistent objects need only contain the domain behaviour, the application can evolve without entangling the domain model with storage concerns.

DATA PROCESSING
Batch Processing: Transaction data stored until convenient to process as a group. Useful for less time-sensitive actions.

On-line Processing: Transaction data entered directly into system, constantly updating files. Requires direct-access devices.
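A toy Python sketch contrasting the two modes; the account balance and transaction amounts are made up:

balance = 100

batch_queue = [+50, -20, +10]       # stored until convenient to process
for txn in batch_queue:             # batch run, e.g. overnight
    balance += txn
print("after batch run:", balance)  # 140

def post_online(amount):
    # On-line processing: the file is updated the moment data is entered.
    global balance
    balance += amount
    print("after on-line transaction:", balance)

post_online(-40)                    # 100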

OODBMS: Definition


An object-oriented database management system (OODBMS), sometimes shortened to ODBMS (for object database management system), is a database management system (DBMS) that


supports the modelling and creation of data as objects. This includes some kind of support for classes of objects and the inheritance of class properties and methods by subclasses and their objects. There is currently no widely agreed-upon standard for what constitutes an OODBMS, and OODBMS products are considered to be still in their infancy. In the meantime, the object-relational database management system (ORDBMS), the idea that object-oriented database concepts can be superimposed on relational databases, is more commonly encountered in available products. An object-oriented database interface standard is being developed by an industry group, the Object Data Management Group (ODMG). The Object Management Group (OMG) has already standardized an object-oriented data brokering interface between systems in a network.
In their influential paper, The Object-Oriented Database Manifesto, Malcolm Atkinson and others define an OODBMS as follows: an object-oriented database system must satisfy two criteria: it should be a DBMS, and it should be an object-oriented system, i.e., to the extent possible, it should be consistent with the current crop of object-oriented programming languages. The first criterion translates into five features: persistence, secondary storage management, concurrency, recovery and an ad hoc query facility. The second one translates into eight features: complex objects, object identity, encapsulation, types or classes, inheritance, overriding combined with late binding, extensibility and computational completeness.
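As a loose illustration of the persistence feature named above, this Python sketch uses the standard-library shelve module to stand in for an object database: the application's own objects are stored and fetched directly, with no mapping to rows and tables. This is only an analogy, not a real OODBMS:

import shelve

class Customer:
    def __init__(self, name, phone):
        self.name = name
        self.phone = phone

with shelve.open("customers.db") as db:
    db["c100"] = Customer("Kumar", "555-0100")  # persist the object itself

with shelve.open("customers.db") as db:
    print(db["c100"].name)  # the object comes back intact: Kumar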
