
Q) What is a system?

: - System is a word derived from the Greek word "systema", meaning an organized relationship among many components.
Or
We can say a system is an orderly grouping of interdependent components linked together according to a plan to achieve a specific goal. Each component is a part of the total system and has to do its own work for the system to achieve the desired goal. A system may be manual or computerized, but nowadays manual systems have become largely obsolete, and we have been converting them into completely automated, i.e. computerized, systems. To build this type of system we have to apply certain phases under the life cycle of system development (software development).
Similarly, an information system is an arrangement of people, data, processes, information presentation and information technology that interact to support and improve the day-to-day operations of a business, as well as to support the problem-solving and decision-making needs of management.
A system carries the following characteristics:
• Organization: implies structure and order.
• Interaction: refers to the manner in which each component functions with the other units.
• Interdependence: means that one component of the system depends on others.
• Integration: describes how a system is tied together; it is more than just sharing a physical part.
• Central objective: an organization commonly sets one objective, and all components operate together to achieve it. Since it is a common goal for all the units or components, all the users or units have to be aware of it.
We have many systems, like the banking system (without ATMs), airline system, library system, train system and billing system. Nowadays all of these have been converted into computerized systems.
Q) Why is there a need for a system (why use a system)?
There are many reasons for which we need a system. To understand this "why", let's look at some examples.
When you pay your telephone bill your payment is processed by a system. That
system has evolved over many years and continues to evolve in order to meet the changing needs of the
business. When the phone company cashes your check that check is also processed by a system which
itself is evolving. These two systems are composed of manual activities and automated components. They
also exist in the context of many other systems with which they must interface. Each system works so well
individually because it is composed of a rigorous set of tasks which result in well-defined outputs.
Regardless of who is doing the task, the result is essentially the same. As staff turnover occurs the system
provides continuity to the way of doing business. Each system can interface with the other because the
division of activities between the bank and the phone company are well defined, as are the interfaces.
Thus, no matter which bank the check is drawn on, the process is the same; no matter which phone company sends in the check, the process is the same. But nowadays we can rarely rely on a purely manual system.
Possible reasons for change, or for needing a new system, can be:
I. To create a better interface between the user and the system.
II. For better accuracy and speed of processing.
III. High security and backup of data used in the system.
IV. Sharing of data all over the world in very little time, even in real time.
V. New laws that force organizations to do new things, or do old things differently.
VI. Changes in society, such as growing demand for better security over personal data.
VII. A desire to improve competitiveness in the face of reducing profits and market share.
VIII. Changes in technology; e.g. a new operating system may force upgrades to computers.
IX. A need to increase productivity, quality or efficiency.
X. Concern that existing equipment is a health and safety menace.
XI. Changes in work processes, expansion of the business, or changes in business requirements or the environment in which the organization operates may all lead to a reassessment of information system requirements.
XII. The current system may be too inflexible or expensive to maintain, or may reduce the organization's ability to respond quickly enough to customers' demands.

Etc.
Q) What are the objectives of a system?
When we plan to develop, acquire or revise a system we must be absolutely clear on the
objectives of that system. The objectives must be stated in terms of the expected benefits that the business
expects from investing in that system.
The objectives define the expected return on investment.
An SDLC has three primary business objectives:
- Ensure the delivery of high quality systems;
- Provide strong management controls;
- Maximize productivity.
In other words, the SDLC should ensure that we can produce more function, with higher quality, in less time, with fewer resources and in a predictable manner. Additionally, a good system development process has to provide:
1) Control over the project 2) Capability to monitor large projects 3) Detailed steps
4) Evaluation of costs and completion targets 5) Documentation 6) Well-defined user input
7) Ease of maintenance 8) Development and design standards 9) Tolerance of changes in
MIS staffing.
System types: - A system can be of following types:-
1) Formal or informal system
2) Physical or abstract system
3) Open or closed system
4) Manual or automated system
Information system (I.S.) types:
1.) Transaction processing system (TPS):
A TPS collects and stores information about transactions, and controls some aspects of transactions. A
transaction is an event of interest to the organization. e.g. a sale at a store.

A TPS is a basic business system. It:


• is often tied to other systems such as the inventory system which tracks stock supplies and triggers
reordering when stocks get low;
• serves the most elementary day-to-day activities of an organization;
• supports the operational level of the business;
• supplies data for higher-level management decisions (e.g. MIS, EIS);
• is often critical to survival of the organization;
• mostly for predefined, structured tasks;
• can have strategic consequences (e.g. an airline reservation system);
• usually has high volumes of input and output;
• provides data which is summarized into information by systems used by higher levels of
management;
• needs to be fault-tolerant.
On-line transaction processing: A transaction processing mode in which transactions entered on-line are
immediately processed by the CPU.
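To make the day-to-day, high-volume character of a TPS concrete, here is a minimal Python sketch of recording a sale and updating inventory, with a reorder trigger when stock runs low. All names and figures here (`record_sale`, `REORDER_LEVEL`, the stock levels) are illustrative assumptions, not taken from the text.

```python
# Minimal TPS sketch: each sale is recorded as a transaction, inventory is
# updated, and a reorder is triggered when stock falls below a threshold.
REORDER_LEVEL = 10                      # assumed threshold for illustration

inventory = {"pens": 50, "notebooks": 12}
transaction_log = []
reorder_requests = []

def record_sale(item, quantity):
    """Record one transaction and keep inventory consistent."""
    if inventory.get(item, 0) < quantity:
        raise ValueError("insufficient stock")   # reject an invalid transaction
    inventory[item] -= quantity
    transaction_log.append({"item": item, "qty": quantity})
    # Tie-in to the inventory system: trigger reordering when stock gets low.
    if inventory[item] < REORDER_LEVEL and item not in reorder_requests:
        reorder_requests.append(item)

record_sale("notebooks", 5)
print(inventory["notebooks"])   # 7
print(reorder_requests)         # ['notebooks']
```

The point of the sketch is the structure, not the storage: a real TPS would persist the log and inventory in a fault-tolerant database rather than in-memory dictionaries.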
2. Office Information Systems:-
An office information system, or OIS (pronounced oh-eye-ess), is an information system that uses hardware,
software and networks to enhance work flow and facilitate communications among employees. With an office
information system, also described as office automation, employees perform tasks electronically using computers
and other electronic devices, instead of manually. With an office information system, for example, a registration
department might post the class schedule on the Internet and e-mail students when the schedule is updated. In a
manual system, the registration department would photocopy the schedule and mail it to each student’s house.
An office information system supports a range of business office activities such as creating and distributing graphics
and/or documents, sending messages, scheduling, and accounting. All levels of users, from executive management
to nonmanagement employees, utilize and benefit from the features of an OIS.
The software an office information system uses to support these activities includes word processing, spreadsheets,
databases, presentation graphics, e-mail, Web browsers, Web page authoring, personal information management,
and groupware. Office information systems use communications technology such as voice mail, facsimile (fax),
videoconferencing, and electronic data interchange (EDI) for the electronic exchange of text, graphics, audio, and
video. An office information system also uses a variety of hardware, including computers equipped with modems,
video cameras, speakers, and microphones; scanners; and fax machines.
3.) Decision support system (DSS):
Transaction processing and management information systems provide information on a regular basis. Frequently,
however, users need information not provided in these reports to help them make decisions. A sales manager, for
example, might need to determine how high to set yearly sales quotas based on increased sales and lowered product
costs. Decision support systems help provide information to support such decisions.
A decision support system (DSS) is an information system designed to help users reach a decision when a
decision-making situation arises. A variety of DSSs exist to help with a range of decisions.
A decision support system uses data from internal and/or external sources.
Internal sources of data might include sales, manufacturing, inventory, or financial data from an organization’s
database. Data from external sources could include interest rates, population trends, and costs of new housing
construction or raw material pricing. Users of a DSS, often managers, can manipulate the data used in the DSS to
help with decisions.
Some decision support systems include query language, statistical analysis capabilities, spreadsheets, and graphics
that help you extract data and evaluate the results. Some decision support systems also include capabilities that
allow you to create a model of the factors affecting a decision. A simple model for determining the best product
price, for example, would include factors for the expected sales volume at each price level. With the model, you can
ask what-if questions by changing one or more of the factors and viewing the projected results. Many people use
application software packages to perform DSS functions. Using spreadsheet software, for example, you can
complete simple modeling tasks or what-if scenarios.
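The what-if modeling described above can be sketched in a few lines of Python. The prices, volumes, and function names below are invented for illustration; a real DSS model would draw these factors from internal and external data sources.

```python
# DSS-style what-if sketch: a model relating price to expected sales volume.
# Changing a factor (the price) and re-evaluating answers a what-if question.
expected_volume = {10: 1200, 13: 900, 15: 600}   # invented forecast figures

def projected_revenue(price, expected_volume):
    """One simple model output: revenue at a given price level."""
    return price * expected_volume[price]

# "What if we set the price to each candidate value?" Pick the best outcome.
best_price = max(expected_volume, key=lambda p: projected_revenue(p, expected_volume))
print(best_price)   # 10
```

A spreadsheet performs exactly this kind of computation; the code only makes the model-and-factors structure explicit.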
A special type of DSS, called an executive information system (EIS), is designed to support the information needs
of executive management. Information in an EIS is presented in charts and tables that show trends, ratios, and other
managerial statistics. Because executives usually focus on strategic issues, EISs rely on external data sources such
as the Dow Jones News/Retrieval service or the Internet. These external data sources can provide current
information on interest rates, commodity prices, and other leading economic indicators.
To store all the necessary decision-making data, DSSs or EISs often use extremely large databases, called data
warehouses. A data warehouse stores and manages the data required to analyze historical and current business
circumstances.
4.) Management Information Systems:-

While computers were ideal for routine transaction processing, managers soon realized that the computers’
capability of performing rapid calculations and data comparisons could produce meaningful information for
management. Management information systems thus evolved out of transaction processing systems. A
management information system, or MIS (pronounced em-eye-ess), is an information system that generates
accurate, timely and organized information so managers and other users can make decisions, solve problems,
supervise activities, and track progress. Because it generates reports on a regular basis, a management information
system sometimes is called a management reporting system (MRS).
Management information systems often are integrated with transaction processing systems. To process a sales
order, for example, the transaction processing system records the sale, updates the customer’s account balance, and
makes a deduction from inventory. Using this information, the related management information system can produce
reports that recap daily sales activities; list customers with past due account balances; graph slow or fast selling
products; and highlight inventory items that need reordering. A management information system focuses on
generating information that management and other users need to perform their jobs.
An MIS generates three basic types of information: detailed, summary and exception. Detailed information
typically confirms transaction processing activities. A Detailed Order Report is an example of a detail report.
Summary information consolidates data into a format that an individual can review quickly and easily. To help
synopsize information, a summary report typically contains totals, tables, or graphs. An Inventory Summary Report
is an example of a summary report.

Exception information filters data to report information that is outside of a normal condition. These conditions,
called the exception criteria, define the range of what is considered normal activity or status. An example of an
exception report is an Inventory Exception Report that notifies the purchasing
department of items it needs to reorder. Exception reports help managers save time because they do not have to
search through a detailed report for exceptions. Instead, an exception report brings exceptions to the manager’s
attention in an easily identifiable form. Exception reports thus help them focus on situations that require immediate
decisions or actions.
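As a rough illustration of exception reporting, the following Python sketch filters inventory records against the exception criteria (here, items below their reorder level). The field names and figures are assumptions made up for the example.

```python
# MIS exception report sketch: keep only records outside normal conditions.
stock = [
    {"item": "paper",   "on_hand": 5,  "reorder_level": 20},
    {"item": "toner",   "on_hand": 40, "reorder_level": 15},
    {"item": "staples", "on_hand": 2,  "reorder_level": 10},
]

def exception_report(records):
    """Apply the exception criteria: report items below their reorder level."""
    return [r["item"] for r in records if r["on_hand"] < r["reorder_level"]]

print(exception_report(stock))   # ['paper', 'staples']
```

The manager sees only the two exceptional items, not the full detailed report, which is exactly the time-saving the text describes.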

5. Expert Systems:-
An expert system is an information system that captures and stores the knowledge of human experts and then
imitates human reasoning and decision-making processes for those who have less expertise. Expert systems are
composed of two main components: a knowledge base and inference rules. A knowledge base is the combined
subject knowledge and experiences of the human experts. The inference rules are a set of logical judgments
applied to the knowledge base each time a user describes a situation to the expert system.
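A toy sketch of these two components, a knowledge base of facts plus inference rules applied to it, might look like the following Python. Real expert system shells are far richer; all facts and rules here are invented for illustration.

```python
# Toy expert system: a knowledge base (facts) and inference rules.
knowledge_base = {"fever", "cough"}          # facts the user has described

# Each rule: (set of required facts, fact to conclude).
inference_rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def infer(facts, rules):
    """Apply the rules repeatedly until no new facts can be concluded."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for required, conclusion in rules:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(knowledge_base, inference_rules))
# the result contains 'flu_suspected' and 'recommend_rest'
```

Note how the second rule fires only because the first one added a new fact: chaining conclusions like this is the essence of the inference component.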

Although expert systems can help decision-making at any level in an organization, non-management employees are
the primary users, utilizing them to help with job-related decisions. Expert systems have also successfully
resolved such diverse problems as diagnosing illnesses, searching for oil and making soup.
Expert systems are one part of an exciting branch of computer science called artificial intelligence. Artificial
intelligence (AI) is the application of human intelligence to computers. AI technology can sense your actions and,
based on logical assumptions and prior experience, will take the appropriate action to complete the task. AI has a
variety of capabilities, including speech recognition, logical reasoning, and creative responses.
Experts predict that AI eventually will be incorporated into most computer systems and many individual software
applications. Many word processing programs already include speech recognition.

Advantages of expert systems:

• The computer can store far more information than a human.


• The computer does not 'forget', make silly mistakes or get drunk when it is most needed.
• Data can be kept up-to-date.
• The expert system is always available 24 hours a day and will never 'retire'.
• The system can be used at a distance over a network.

Waterfall model:-
The simplest software development life cycle model is the waterfall model, which states that the phases are
organized in a linear order. A project begins with feasibility analysis. On the successful demonstration of the
feasibility analysis, the requirements analysis and project planning begins.

The design starts after the requirements analysis is done, and coding begins after the design is done. Once
the programming is completed, the code is integrated and testing is done. On successful completion of
testing, the system is installed. After this, the regular operation and maintenance of the system take place.
The following figure demonstrates the steps involved in the waterfall life cycle model.

The Waterfall Software Life Cycle Model

With the waterfall model, the activities performed in a software development project are requirements
analysis, project planning, system design, detailed design, coding and unit testing, system integration and
testing. Linear ordering of activities has some important consequences. First, the end of one phase and the
beginning of the next must be clearly identified, so some certification mechanism has to be employed at the end of each
phase. This is usually done by some verification and validation. Validation means confirming that the output of a
phase is consistent with its input (which is the output of the previous phase) and that the output of the phase
is consistent with the overall requirements of the system.
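The linear ordering and per-phase certification can be pictured as a simple loop in which each phase consumes the certified output (baseline) of the previous one. The phase names follow the text; the certification check below is a stand-in for real verification and validation, and the document names are invented.

```python
# Waterfall sketch: phases run in strict order; each phase's output is
# certified against its input before becoming the next phase's input.
phases = [
    "requirements analysis",
    "project planning",
    "system design",
    "detailed design",
    "coding and unit testing",
    "system integration and testing",
]

def run_waterfall(phases):
    baselines = []
    previous_output = "feasibility report"   # the project begins with feasibility analysis
    for phase in phases:
        output = f"{phase} document (from: {previous_output})"
        # Certification gate: the output must be consistent with its input.
        assert previous_output in output
        baselines.append(output)             # certified output becomes a baseline
        previous_output = output
    return baselines

baselines = run_waterfall(phases)
print(len(baselines))   # 6
```

The key property the loop encodes is that nothing downstream starts until the upstream baseline exists, which is precisely what makes late requirement changes expensive in this model.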

A consequence of the need for certification is that each phase must have some defined output that can be
evaluated and certified. Therefore, when the activities of a phase are completed, there should be an output
product of that phase, and the goal of the phase is to produce this product. The outputs of the earlier phases are
often called intermediate products or design documents. For the coding phase, the output is the code. From
this point of view, the output of a software project is not just the final program but also its user
documentation, along with the requirements document, design document, project plan, test plan and test results.

Another implication of the linear ordering of phases is that after each phase is completed and its outputs are
certified, these outputs become the inputs to the next phase and should not be changed or modified.
However, changing requirements cannot be avoided and must be faced. Since changes performed in the
output of one phase affect later phases that might already have been performed, these changes have to be made in
a controlled manner after evaluating the effect of each change on the project. This brings us to the need for
configuration control or configuration management.

The certified output of a phase that is released for the next phase is called a baseline. Configuration
management ensures that any changes to a baseline are made after careful review, keeping in mind the
interests of all parties that are affected by it. There are two basic assumptions for justifying the linear
ordering of phases in the manner proposed by the waterfall model.

For a successful project resulting in a successful product, all phases listed in the waterfall model must be
performed anyway.

Any different ordering of the phases will result in a less successful software product.

Project Output in a Waterfall Model


As we have seen, the output of a project employing the waterfall model is not just the final program along
with documentation to use it. There are a number of intermediate outputs, which must be produced in order
to produce a successful product.

The set of documents that forms the minimum that should be produced in each project are:

• Requirement document
• Project plan
• System design document
• Detailed design document
• Test plan and test report
• Final code
• Software manuals (user manual, installation manual etc.)
• Review reports

Except for the last one, these are all the outputs of the phases. In order to certify an output product of a phase
before the next phase begins, reviews are often held. Reviews are necessary especially for the requirements
and design phases, since other certification means are frequently not available. Reviews are formal meeting
to uncover deficiencies in a product. The review reports are the outcome of these reviews.

Prototyping Software Life Cycle Model
The goal of prototyping based development is to counter the first two limitations of the waterfall model
discussed earlier. The basic idea here is that instead of freezing the requirements before a design or coding
can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed
based on the currently known requirements. Development of the prototype obviously undergoes design,
coding and testing. But each of these phases is not done very formally or thoroughly. By using this
prototype, the client can get an "actual feel" of the system, since the interactions with prototype can enable
the client to better understand the requirements of the desired system.

Prototyping is an attractive idea for complicated and large systems for which there is no manual process or
existing system to help determine the requirements. In such situations, letting the client "play" with the
prototype provides invaluable and intangible inputs which help in determining the requirements for the
system. It is also an effective method to demonstrate the feasibility of a certain approach. This might be
needed for novel systems where it is not clear that constraints can be met or that algorithms can be
developed to implement the requirements. The process model of the prototyping approach is shown in the
figure below.

Prototyping Model

The basic reason prototyping is not more commonly used is the cost involved in this build-it-twice approach.
However, some argue that prototyping need not be very costly and can actually reduce the overall
development cost. Prototypes are usually not complete systems, and many of the details are not built into
the prototype. The goal is to provide a system with overall functionality. In addition, the costs of testing and
writing detailed documents are reduced. These factors help to reduce the cost of developing the prototype.
On the other hand, the experience of developing the prototype will be very useful for developers when
developing the final system. This experience helps to reduce the cost of development of the final system and
results in a more reliable and better designed system.

Advantages of Prototyping
1. Users are actively involved in the development
2. It provides a better system to users, as users have a natural tendency to change their minds in specifying
requirements, and this method of developing systems supports that tendency.
3. Since in this methodology a working model of the system is provided, the users get a better
understanding of the system being developed.
4. Errors can be detected much earlier as the system is made side by side.
5. Quicker user feedback is available leading to better solutions.

The Spiral Life Cycle Model:-


This model was proposed by Boehm. As the name suggests, the activities in this model
can be organized like a spiral. The spiral has many cycles. The radial dimension represents the cumulative
cost incurred in accomplishing the steps done so far, and the angular dimension represents the progress
made in completing each cycle of the spiral. The structure of the spiral model is shown in the figure given
below. Each cycle in the spiral begins with the identification of objectives for that cycle, the different
alternatives possible for achieving those objectives, and the imposed constraints.

The next step in the spiral life cycle model is to evaluate these different alternatives based on the objectives
and constraints. This will also involve identifying uncertainties and risks involved. The next step is to
develop strategies that resolve the uncertainties and risks. This step may involve activities such as
benchmarking, simulation and prototyping. Next, the software is developed by keeping in mind the risks.
Finally the next stage is planned.

The next step is determined by the remaining risks. For example, if performance or user-interface risks are
considered more important than the program development risks, the next step may be evolutionary
development that involves developing a more detailed prototype for resolving the risks. On the other hand, if
the program development risks dominate and previous prototypes have resolved all the user-interface and
performance risks, the next step will follow the basic waterfall approach.

The risk-driven nature of the spiral model allows it to accommodate any mixture of specification-oriented,
prototype-oriented, simulation-oriented or other approaches. An important feature of the model is that
each cycle of the spiral is completed by a review, which covers all the products developed during that cycle,
including plans for the next cycle. The spiral model works for development as well as enhancement projects.

Spiral Model Description


The development spiral consists of four quadrants as shown in the figure above

Quadrant 1: Determine objectives, alternatives, and constraints.

Quadrant 2: Evaluate alternatives; identify and resolve risks.

Quadrant 3: Develop and verify the next-level product.


Quadrant 4: Plan next phases.

Although the spiral, as depicted, is oriented toward software development, the concept is equally applicable
to systems, hardware, and training, for example. To better understand the scope of each spiral development
quadrant, let’s briefly address each one.

Quadrant 1: Determine Objectives, Alternatives, and Constraints

Activities performed in this quadrant include:

1. Establish an understanding of the system or product objectives, namely performance, functionality,
and ability to accommodate change.
2. Investigate implementation alternatives, namely design, reuse, procure, and procure/modify.
3. Investigate constraints imposed on the alternatives, namely technology, cost, schedule, support, and
risk.

Once the system or product's objectives, alternatives, and constraints are understood, Quadrant 2
(Evaluate alternatives; identify and resolve risks) is performed.

Quadrant 2: Evaluate Alternatives; Identify and Resolve Risks

Engineering activities performed in this quadrant select an alternative approach that best satisfies technical,
technology, cost, schedule, support, and risk constraints. The focus here is on risk mitigation. Each
alternative is investigated and prototyped to reduce the risk associated with the development decisions.
Boehm describes these activities as follows:

. . . This may involve prototyping, simulation, benchmarking, reference checking, administering user
questionnaires, analytic modeling, or combinations of these and other risk resolution techniques.

The outcome of the evaluation determines the next course of action. If critical operational and/or technical
issues (COIs/CTIs) such as performance and interoperability (i.e., external and internal) risks remain, more
detailed prototyping may need to be added before progressing to the next quadrant. Dr. Boehm notes that if
the alternative chosen is “operationally useful and robust enough to serve as a low-risk base for future
product evolution, the subsequent risk-driven steps would be the evolving series of evolutionary prototypes
going toward the right (hand side of the graphic) . . . the option of writing specifications would be addressed
but not exercised.” This brings us to Quadrant 3.

Quadrant 3: Develop and Verify the Next-Level Product

If a determination is made that the previous prototyping efforts have resolved the COIs/CTIs, activities to
develop and verify the next-level product are performed. As a result, the basic "waterfall" approach may be
employed, meaning concept of operations, design, development, integration, and test of the next system or
product iteration. If appropriate, incremental development approaches may also be applicable.

Quadrant 4: Plan Next Phases

The spiral development model has one characteristic that is common to all models: the need for advanced
technical planning and multidisciplinary reviews at critical staging or control points. Each cycle of the
model culminates with a technical review that assesses the status, progress, maturity, merits, and risk of
development efforts to date; resolves critical operational and/or technical issues (COIs/CTIs); and reviews
plans and identifies COIs/CTIs to be resolved for the next iteration of the spiral.

Subsequent implementations of the spiral may involve lower level spirals that follow the same quadrant
paths and decision considerations.
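The risk-driven choice of the next step described above can be caricatured in a few lines of Python: if user-interface or performance risks dominate, prototype further; otherwise fall back to a waterfall-style build. The risk names and numeric severities below are invented assumptions for illustration only.

```python
# Spiral-model sketch: the dominant remaining risk determines the next step.
def next_step(risks):
    """risks: dict of risk name -> estimated severity (0..1, invented scale)."""
    dominant = max(risks, key=risks.get)
    if dominant in ("user_interface", "performance"):
        # UI/performance risks dominate: build a more detailed prototype.
        return "develop a more detailed prototype"
    # Program development risks dominate: follow the basic waterfall approach.
    return "follow the basic waterfall approach"

print(next_step({"user_interface": 0.7, "program_development": 0.3}))
# develop a more detailed prototype
print(next_step({"user_interface": 0.1, "program_development": 0.6}))
# follow the basic waterfall approach
```

In a real spiral project this decision comes out of the Quadrant 2 review, informed by prototyping, simulation, and benchmarking rather than a single numeric score.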

System development life cycle (software development life cycle):-
Let's see in detail how and what steps are followed when we develop a computerized system:
First, let's have a look at the diagram below. In total we have eight phases, arranged around a circle:

1) Problem definition (1st phase)
2) System analysis (2nd phase)
3) System design (3rd phase)
4) System development (4th phase)
5) System testing (5th phase)
6) System implementation (6th phase)
7) System evaluation (7th phase)
8) System maintenance (8th phase)

Let's see in detail about all steps.


1) Problem definition: This is the first phase, where the problem has to be identified and selected. As
we know, everything starts with a concept. It could be the concept of someone, or of everyone. However, some
do not start out with a concept but with the questions "What do you want?" and "Really, is there a
problem?" They ask thousands of people in a certain community or age group what they want and
decide to create an answer. But it all goes back to planning and conceptualization. It is also essential for
developers to know that this stage involves a lot of dealing with upper management, so if you are not the owner of the
software development company, you will have to deal with them a lot in this stage.
In this phase the user identifies the need for a new, changed, or improved
system in large organizations. This identification may be a part of the system planning process. Information
requirements of the organization as a whole are examined, and projects to meet these requirements are
proactively identified. The organization's information system requirements may result from requests to deal
with problems in current system procedures, from the desire to perform additional tasks, or from the
realization that information technology could be used to capitalize on an existing opportunity.
We can apply techniques like:
1) Collecting data about the system: measuring things, counting things, surveying or interviewing
workers, management, customers, and corporate partners to discover what these
people know; observing processes in action to see where problems lie and where improvements can be made in
work-flow; researching similar systems elsewhere to see how similar problems have been addressed; testing the
existing system; and studying the workers in the organization and listing the types of information the system needs to
produce.
2) Getting an idea about the context of the problem.
3) Understanding the processes: you need to know how data is transformed into information.
4) Getting a complete idea about the data and its types in detail: data types, data
structures, storage, constraints that must be put on the solution (e.g. the operating system that is used,
hardware power, minimum speed required, etc.), and what strategy will be best to manage the solution.

2) System analysis:-
Systems analysis deals with analysis of sets of interacting entities, the
systems, often prior to their automation as computer systems, and the interactions within those systems.
This field is closely related to operations research. It is also "an explicit formal inquiry carried out to help
someone, referred to as the decision maker, identify a better course of action and make a better decision
than he might have otherwise made. In general, analysis is defined as the procedure by which we break
down an intellectual or substantial whole into parts or components. Synthesis is defined as the opposite
procedure: to combine separate elements or components in order to form a coherent whole".

During this phase, through a number of sub-phases, an analyst works with users to determine what
users expect from the proposed system. The sub-phases usually involve a careful study of the current
system, manual or computerized, that might be replaced or enhanced as part of this project. Next, the
requirements are studied and structured in accordance with their inter-relationships, eliminating any
redundancies. Third, alternative initial designs are generated to match the requirements. Then these
alternatives are compared to determine which alternative best meets the requirements in terms of the cost and
labour needed to commit to the development process.
The systems discussed within systems analysis can be within any field, such as industrial processes,
management, decision-making processes, environmental protection processes, etc.
The development of a computer-based information system often comprises the
use of a systems analyst. When a computer-based information system is developed, systems analysis
would constitute the following steps:

• The development of a feasibility study, involving determining whether a project is economically,
socially, technologically, organisationally, legally and schedule-wise feasible.
• Conducting fact-finding measures, designed to ascertain the requirements of the system's end-
users. These typically span interviews, questionnaires, or visual observations of work on the
existing system.
• Gauging how the end-users would operate the system (in terms of general experience in using
computer hardware/software), what the system would be used for etc.

This analysis is actually done by a "systems analyst"; let us look at systems analysts
in detail.
System analysts:- The work of a systems analyst who designs an information
system is much like that of an architect designing a house. Three groups of people are involved in developing
information systems for an organization: managers, users of the systems, and the computer programmers
who implement the systems. The systems analyst co-ordinates the efforts of all these groups to effectively
develop and operate computer-based information systems.
System analysts develop information systems. For this task,
they must understand the concept of a system. They must be involved in all the phases of the SDLC, i.e. from
preliminary investigation to implementation. The success of development depends on the skills and dedication
of the systems analyst.
So a systems analyst is responsible for researching, planning, coordinating and
recommending software and system choices to meet an organization's business requirements. The
systems analyst plays a vital role in the systems development process. A successful systems analyst must
acquire four skills: analytical, technical, managerial, and interpersonal. Analytical skills enable systems
analysts to understand the organization and its functions, which helps him/her to identify opportunities and
to analyze and solve problems. Technical skills help systems analysts understand the potential and the
limitations of information technology. The systems analyst must be able to work with various programming
languages, operating systems, and computer hardware platforms. Management skills help systems
analysts manage projects, resources, risk, and change. Interpersonal skills help systems analysts work
with end users as well as with analysts, programmers, and other systems professionals.
Because they must translate user requests into technical specifications, systems
analysts are the liaisons between vendors and the IT professionals of the organization they
represent. They may be responsible for developing cost analysis, design considerations, and
implementation time-lines. They may also be responsible for feasibility studies of a computer system before
making recommendations to senior management.

Basically, a systems analyst performs the following tasks:


• Interact with the customers to know their requirements
• Interact with designers to convey the possible interface of the software
• Interact with/guide the coders/developers to keep track of system development
• Perform system testing with sample/live data with the help of testers
• Implement the new system
• Prepare high-quality documentation
Many systems analysts have morphed into business analysts. And, the Bureau of Labor
Statistics reports that "Increasingly, employers are seeking individuals who have a master’s degree in
business administration (MBA) with a concentration in information systems."

Why system analysts are needed for a business?


A computerized system enables an organization to provide accurate information and
respond faster to queries, events, etc. If a business needs a computerized information system, a systems
analyst is required for the analysis and design of that system. Information systems evolved from the need to
improve the use of computer resources for the information processing needs of business applications.
The customer defines the business problems to be solved by the computer. Project managers, analysts,
programmers and customers apply information technology to build information systems that solve these problems.
Information technology offers the opportunity to collect and store enormous volumes of data, process
business transactions with great speed and accuracy, and provide timely and relevant information for
correct decision-making by management. This potential cannot be realized without the help of a systems analyst, since
business users may not fully understand the capabilities and limitations of modern technology. Similarly,
programmers and information technologists do not fully understand the business applications they are trying
to computerize or support. So the systems analyst bridges the gap between those who need a computer-based
solution and those who understand information technology.
Skills needed for system analyst:
1) Analytical skill:- Analytical skill is the ability to visualize, articulate, and solve
complex problems and concepts, and make decisions that make sense based on available information.
Such skills include demonstration of the ability to apply logical thinking to gathering and analyzing
information, designing and testing solutions to problems, and formulating plans.
To test for analytical skills one might be asked to look for inconsistencies in an advertisement, put a series
of events in the proper order, or critically read an essay. Usually standardized tests and interviews include
an analytical section that requires the examinee to use logic to pick apart a problem and come up with a
solution.
Although there is no question that analytical skills are essential, other skills are equally important. For
instance, in systems analysis the systems analyst should focus on four sets of analytical skills: systems
thinking, organizational knowledge, problem identification, and problem analyzing and solving.
Analytical skill also covers the way we describe a problem and subsequently work out its solutions.
2) Technical skill: Many aspects of the job of system analysts are technically
oriented. In order to develop computer based IS (information systems), system analyst must understand
information technologies, their potentials and their limitations. A system analyst needs technical skills not
only to perform tasks assigned to him but also to communicate with other people with whom s/he works in
system development. The technical knowledge of SA must be updated from time to time. S/he should be
familiar with technologies such as:
• Micro/mini/mainframe computers and workstations
• Programming languages
• Operating systems, database and file management systems, data
communications standards, system development tools and environments, decision support systems,
etc.
3) Managerial skill:- Management in all business and human organisational activity is the
act of getting people together to accomplish desired goals and objectives. Management comprises
planning, organizing, staffing, leading or directing, and controlling an organization (a group of one or more
people or entities) or effort for the purpose of accomplishing a goal. Resourcing encompasses the
deployment and manipulation of human resources, financial resources, technological resources, and
natural resources.
Management can also refer to the person or people who perform the act(s) of management. Under this
heading fall mainly the following functions: planning, organizing, leading, co-ordinating, controlling, staffing
and motivating.
4) "Interpersonal skills" refers to mental and communicative algorithms applied
during social communications and interaction to reach certain effects or results. The term "interpersonal
skills" is used often in business contexts to refer to the measure of a person's ability to operate within
business organizations through social communication and interactions. Interpersonal skills are how people
relate to one another.
As an illustration, it is generally understood that communicating respect for other people or professionals
within will enable one to reduce conflict and increase participation or assistance in obtaining information or
completing tasks. For instance, to interrupt someone who is currently preoccupied with the task of obtaining
information needed immediately, it is recommended that a professional use a deferential approach with
language such as, "Excuse me, are you busy? I have an urgent matter to discuss with you if you have the
time at the moment." This allows the receiving professional to make their own judgment regarding the
importance of their current task versus entering into a discussion with their colleague. While it is generally
understood that interrupting someone with an "urgent" request will often take priority, allowing the receiver
of the message to judge independently the request and agree to further interaction will likely result in a
higher quality interaction. Following these kinds of heuristics to achieve better professional results generally
results in a professional being ranked as one with 'good interpersonal skills.' Often these evaluations occur
in formal and informal settings.
Having positive interpersonal skills increases the productivity in the organization since the number of
conflicts is reduced. In informal situations, it allows communication to be easy and comfortable. People with
good interpersonal skills can generally control the feelings that emerge in difficult situations and respond
appropriately, instead of being overwhelmed by emotion.
Role of the system analyst:- Among several roles, some important ones are:
1) Change agent: An analyst can be viewed as an agent
of change. A candidate system is established to introduce change and re-orientation in how the
user organization handles information or makes decisions, so it is important that users accept the change. For
example, an analyst prefers the participation of users in design and implementation. The analyst plans,
monitors and implements change in the user domain, and as an agent of change may use different approaches
to introduce it to the user organization.
2) Investigator and monitor:- A systems analyst may
work as an investigator to find out why an existing system failed. The analyst's role is to extract
the problems from existing systems and create information structures that uncover previously unknown
trends that may have a direct impact on the organization. In this role the analyst must also monitor programs in relation
to time, cost and quality.
3) Architect:- The analyst can also play the role of
an architect, acting as liaison between the logical design and the detailed physical design. As
architect, the analyst also creates a detailed physical design on the basis of the end users' requirements.
4) Psychologist:- In system development, systems
are built around people. The analyst plays the role of a psychologist in the way s/he reaches people, interprets
their thoughts, assesses their behaviour, draws conclusions from these interactions and finds facts.
5) Motivator:- Once system acceptance is achieved,
the analyst starts to train users, motivating them towards the new system. This happens during the few
weeks after implementation and during times when turnover results in new people being trained to work with the system.
6) Intermediary:- The analyst tries to appease all
parties involved. Diplomacy in dealing with people can improve acceptance of the system. The goal of the
analyst is to have the support of all users. The analyst represents their way of thinking and tries to achieve their
goals through computerization.

Duties of Analyst:- Following are some important duties of analyst:


1) Defining requirements:- It is a difficult duty of the analyst to
understand the user's problems and requirements. Techniques like interviews, questionnaires,
surveys, data collection, etc. have to be used.
2) Prioritizing requirements by consensus:- An analyst needs to set
priorities among the requirements of various users by holding meetings and arriving at a consensus. For
this the analyst must have good interpersonal skills, convincing power and knowledge of the
requirements of all users, and must set them in proper order.
3) Analysis and evaluation:- The analyst analyses the working of the
current information system in the organization and finds out the extent to which it meets users' needs. On the
basis of facts and opinions, the analyst determines the best characteristics of a new system that will meet the users' stated needs.
4) Solving problems:- The analyst is actually a problem solver. An analyst
must study the problem in depth and suggest alternative solutions to management. The problem-solving
approach usually has these steps: identify the problem, analyse and understand the problem, identify alternative
solutions and select the best solution.
5) Drawing up functional specifications:- The key duty of the systems
analyst is to obtain the functional specification of the system to be designed. The specification must be
non-technical so that users and managers understand it, yet precise and detailed enough for the
implementers.
6) Designing systems:- Once the specification is accepted, the analyst
designs the system. The design must be understandable to the system implementer, and it must
be modular to accommodate changes easily. The analyst must have good knowledge of the latest tools and how to use
them, along with a strategy for testing the system.
7) Evaluating systems:- As system design ends and
implementation starts, the analyst must critically evaluate the system after a certain period of use. The analyst
must know when to evaluate, how to evaluate, how to gather users' comments, complaints and
suggestions for future use, and how to correct or improve the system accordingly.

a) Feasibility study(analysis) or preliminary study:- The aim of


the feasibility study is to understand the problem and to determine whether it is worth proceeding. If
a project is seen to be feasible from the results of the study, the next logical step is to proceed with
it. The research and information uncovered in the feasibility study will support the detailed planning
and reduce the research time. A well-researched and well-written feasibility study is critical when
making "Go/No Go" decisions regarding entry into new businesses.
A Feasibility Study is a process which defines exactly what a project is and what strategic
issues need to be considered to assess its feasibility, or likelihood of succeeding. Feasibility studies
are useful both when starting a new business, and identifying a new opportunity for an existing
business. Ideally, the feasibility study process involves making rational decisions about a number of
enduring characteristics of a project, including:
• What exactly is the project? Is it possible? Is it practicable? Can it be done?
• Economic feasibility - are the benefits greater than the costs?
• Technical feasibility - do we 'have the technology'? If not, can we get it?
• Schedule feasibility - will the system be ready on time?
• Customer profile: Estimation of customers/revenues.
• Determination of competitive advantage.
• Operational feasibility - do we have the resources to build the system? Will the system
be acceptable? Will people use it?
• Current market segments: projected growth in each market segment and a review of
what is currently on the market.
• Vision/mission statement.
• Definition of proposed operations/management structure and management method etc.
a.1) Technical feasibility: - This is concerned with availability of
hardware and software required for the development of system, to see compatibility and maturity of
the technology proposed to be used and to see the availability of the required technical man power
to develop the system. Following three issues are addressed during this study.
a.1.1) Is the proposed technology proven and practical? At
this stage, the analyst has to identify the proposed technology, its maturity, and its ability or scope
for solving the problem. If the technology is mature and has a large customer base, it will be
preferable to use, as the problems that stem from its usage are likely to be fewer than with
technologies that do not have a significant customer base. Some
companies, however, want to use state-of-the-art new technology irrespective of the size of its customer base.
a.1.2) The next question is: does the firm possess the
necessary technology it needs? Here we have to ensure that the required technology is practical and
available. Does the firm have the required hardware and software? For example, we may need ERP
(enterprise resource planning) software, and hardware which can support ERP. If our answer is
no to either of these questions, then the possibility of acquiring the technology should be explored.
a.1.3) The last issue is the availability of technical
expertise. In this case, software and hardware are available but it may be difficult to find skilled
manpower. The company might be equipped with ERP software but the existing manpower
might not have expertise in it, so the manpower has to be trained in the ERP software. This may lead
to slippage in the delivery schedules.
a.2) Operational feasibility:- This is all about problems that may arise
during operations. There are two aspects related to this issue:
a.2.1) What is the probability that the solution developed
may not be put to use or may not work?
a.2.2) What is the inclination of management and end users
towards the solution? Though there is very little possibility of management being averse to the
solution, there is a significant probability that the end users may not be interested in using the solution
due to lack of training, insight, etc. Some further issues are:
a) Information: the system should provide adequate, timely,
accurate and useful information to all categories of users.
b) Response time: the system should return output to
users very quickly.
c) Accuracy: the software system must operate
accurately and provide value to its users; this concerns the degree of software performance.
d) Services: the system should be able to provide
reliable services.
e) Security: there should be adequate security for
information and data against fraud.
f) Efficiency: the system needs to be able to
provide the desired services to users efficiently.

a.3) Economic feasibility: - It is the measure of cost effectiveness of


the project. Economic feasibility is nothing but judging whether the possible benefit of solving
the problem is worthwhile or not. At the feasibility study level, it is impossible to estimate the cost precisely
because the customer's requirements and alternative solutions have not been identified at this stage.
However, when the specific requirements and solutions have been identified, the analyst weighs
the costs and benefits of all solutions; this is called "cost benefit analysis". A project which is
inexpensive when compared to the savings that can be made from its usage is treated as
economically feasible.
Under "cost benefit analysis", there are mainly two types
of costs, namely the cost of human resources and the cost of training.
The first covers the salaries of systems analysts,
software engineers, programmers, data entry operators, etc., whereas the second covers the training to be
given to new staff on how to operate the newly introduced system.
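The cost-benefit comparison described here can be sketched in a few lines of Python. All figures, and the three-year payback threshold, are hypothetical and chosen purely for illustration:

```python
# A minimal cost-benefit sketch for a proposed system.
# All figures and the payback threshold are hypothetical.

def cost_benefit(costs: dict, annual_benefit: float) -> dict:
    """Compare one-time development costs against yearly savings."""
    total_cost = sum(costs.values())
    payback_years = total_cost / annual_benefit  # years to recover the investment
    return {
        "total_cost": total_cost,
        "payback_years": round(payback_years, 1),
        "feasible": payback_years <= 3,  # assumed threshold: recover within 3 years
    }

# Hypothetical figures: salaries (human resources) plus staff training.
result = cost_benefit(
    costs={"human_resources": 120_000, "training": 30_000},
    annual_benefit=75_000,
)
print(result)  # payback in 2.0 years -> feasible under our assumed threshold
```

A real cost-benefit analysis would of course discount future savings and include operating costs; this sketch only shows the basic comparison the analyst performs.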
a.4) Legal feasibility: - This is the study of legal issues arising out of the
development of the system. Possible considerations might include copyright law,
labour law, antitrust legislation, foreign trade, etc. There may be multi-user and single-user licences, so
this study plays a major role in formulating contracts between the vendors and users. Another major
aspect is that whenever the IT company and the user company do not belong to the same country,
tax laws and foreign currency transfer regulations have to be taken care of.

3) System design( 3rd phase):- As the analysis phase completes, the designing phase
starts. This phase consists of logical and physical design of the system.
a) Logical design:- Logical design is done during analysis to
represent what is required of an information system (i.e. its specifications) but does not try
to say how the specifications will be constructed. It concentrates on the business aspects
of the system. The logical design is not tied to any specific hardware or software platform; theoretically, the
system could be implemented on any hardware and any platform (OS). It lays out what the system's
features and abilities will be. Logical design tools include

o Context Diagrams
o Data Dictionaries
o Hierarchy Charts / Org charts
o Decision Trees

b) Physical design:- Describes how a system will be physically


implemented (like a blueprint). In this design, the logical design is converted into physical or technical
specifications. For example, you must convert diagrams that map the origin, flow and processing of data
into a structured system design that can then be broken down into smaller and smaller units called modules
for conversion into instructions written in a programming language. We design the various parts of the system to
perform the physical operations necessary to facilitate data capture, processing and information output.
During physical design the analyst team decides the programming language in which computer
instructions will be written, the database system and file structure that will be used for the data, and the platform
and network environment under which the system will run. These decisions finalize the
hardware and software plan initiated at the end of the analysis phase. This design includes tools such as:

o Data Flow Diagrams


o Storyboards
o Flow Charts, N-S charts
o Structure Charts
o IPO charts
o Layout diagrams / mockups
o Pseudo code

Let's see in detail about some tools.

1) Context diagram:- Context Diagram (CD) in software engineering and systems


engineering are diagrams that represent the external entities or actors that may interact with a
system. This diagram is the highest-level view of a system, similar to a block diagram, showing a
(possibly software-based) system as a whole, with its inputs and outputs from/to external factors.
In other words, they are diagrams used in systems design to represent the more important external actors that interact
with the system at hand. The objective of a system context diagram is to focus attention on the external
factors and events that should be considered in developing a complete set of system requirements
and constraints.

For example, if the library were under investigation, this is how it would look in context:

"Library Context Diagram.
The context diagram above represents a book lending library.

• The library receives details of books, and orders books from one or more book suppliers.
• Books may be reserved and borrowed by members of the public, who are required to give a
borrower number.
• The library will notify borrowers when a reserved book becomes available or when a
borrowed book becomes overdue.
• A book supplier will furnish the library details of specific books in response to enquiries.

Note, that communications involving external entities are only included where they involve the
'system' process. Whilst a book supplier would communicate with various agencies, for example,
publishers and other suppliers - these data flow are remote from the 'system' process and so this is
not represented on the context diagram."

Some more examples:-

2) Data flow diagram: - A data flow diagram (DFD) is a design tool to represent the flow of
data through an information system. With a dataflow diagram, developers can map how a
system will operate, what the system will accomplish and how the system will be
implemented. It's important to have a clear idea of where and how data is processed in a
system to avoid double-handling and bottlenecks. A DFD also helps management organize
and prioritize data handling procedures and staffing requirements. A DFD lets a system
analyst study how existing systems work, locate possible areas prone to failure, track faulty
procedures and reorganize components to achieve better efficiency or effectiveness.

Components:-A data flow diagram graphically represents:

• Processes - jobs that are done with the data. A process transforms incoming data flow into
outgoing data flow.
• Data stores - files, databases, archives. They can be manual, digital or temporary.
• External entities/terminators in a business or other system - other systems or people
beyond the control of the current system. These are the places which provide the organization
with data, or have data sent to them by the organization (e.g. customers, partners,
government bodies). External entities are sources and destinations of the system's inputs and
outputs.
• Connecting data flows - arrows show how data flows from one place to another. Flows that
cross the system boundary are known as Input Output Descriptions. Label the arrows with
the name of the data that moves through it.
For example:- A company, No Blots, supplies ink cartridges for printers which are sold only
through the internet. When customers place an order, the order is checked, a confirmation is sent
back to the customer and the details of the order are sent to the warehouse. The diagram below
shows the data flow diagram (DFD) for the No Blots online purchasing system. The diagram
does not show the data sources and destinations.
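Since the DFD image itself is not reproduced here, the No Blots processes can be sketched as Python functions, each transforming an incoming data flow into outgoing data flows. The function and field names below are our own assumptions, not part of the original system:

```python
# The No Blots ordering processes modelled as functions: each "process"
# transforms an incoming data flow into outgoing flows, just as the
# bubbles on a DFD do. All names are hypothetical.

def check_order(order: dict, catalogue: set) -> bool:
    """Process 1: validate that the ordered cartridge exists and quantity is positive."""
    return order["cartridge"] in catalogue and order["quantity"] > 0

def handle_order(order: dict, catalogue: set) -> dict:
    """Route a checked order: confirmation to the customer, details to the warehouse."""
    if not check_order(order, catalogue):
        return {"to_customer": "order rejected", "to_warehouse": None}
    return {
        "to_customer": f"confirmed: {order['quantity']} x {order['cartridge']}",
        "to_warehouse": order,  # data flow towards the warehouse
    }

order = {"cartridge": "HP-304", "quantity": 2}
print(handle_order(order, catalogue={"HP-304", "Canon-545"}))
```

The point of the sketch is only to show how a DFD's processes, flows and destinations map onto transformations of data.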

3) E-R diagram: - An entity-relationship (ER) diagram is a specialized graphic that illustrates


the interrelationships between entities in a database. ER diagrams often use symbols to represent
three different types of information. Boxes are commonly used to represent entities. Diamonds are
normally used to represent relationships and ovals are used to represent attributes.

Consider the example of a database that contains information on the residents of a city. The ER
diagram in the figure contains two entities -- people and cities. There is a single "Lives In"
relationship. In our example, due to space constraints, there is only one attribute associated with each
entity: people have names and cities have populations. In a real-world example, each one of these
would likely have many different attributes.

[ER diagram: PERSON and CITY entity boxes joined by the "Lives in" relationship; PERSON has the attribute Name, CITY has the attribute Population.]

Entity:- An entity may be a physical object such as a house or a car, an event
such as a house sale or a car service, or a concept such as a customer transaction or order. Here we
have two entities, namely person and city, each kept in a rectangular box, and they are associated with
the properties (attributes) name and population, each kept in an oval shape.
Relationship:- A relationship captures how two or more entities are related to
one another. In the above example we have "lives in" as a relationship between the two entities; it is always
kept inside a diamond box. Similarly, we might have an "owns" relationship between a
company and a computer, a "supervises" relationship between an employee and a department, a
"performs" relationship between an artist and a song, or a "proved" relationship between a mathematician
and a theorem.
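The Person-City example can be sketched as Python dataclasses: entities become classes, attributes become fields, and the "Lives in" relationship becomes a reference from one entity to the other. The instance values are invented for illustration:

```python
# ER concepts mapped to code: entity -> class, attribute -> field,
# relationship -> a reference between instances.
from dataclasses import dataclass

@dataclass
class City:
    population: int        # attribute of the City entity

@dataclass
class Person:
    name: str              # attribute of the Person entity
    lives_in: City         # the "Lives in" relationship

kathmandu = City(population=1_000_000)
ram = Person(name="Ram", lives_in=kathmandu)
print(ram.lives_in.population)  # -> 1000000
```

In a relational database the same relationship would typically be stored as a foreign key from the person table to the city table.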

4) Algorithm and flowchart:- In mathematics or computing, and related subjects, an


algorithm is an effective method for solving a problem using a finite sequence of well organized
instructions. Algorithms are used for calculation, data processing, and many other fields.
Each algorithm is a list of well-defined instructions
for completing a task. Starting from an initial state, the instructions describe a computation that
proceeds through a well-defined series of successive states, eventually terminating in a final ending
state.
A flowchart, on the other hand, is a common type of diagram that
represents an algorithm or process, showing the steps as boxes of various kinds and their order by
connecting them with arrows. Flowcharts are used in analyzing, designing, documenting or
managing a process or program in various fields. Some symbols used in flowcharts are the oval,
parallelogram, rectangle, flowline, annotation, etc.; different symbols have different meanings.

[Flowchart symbols: oval for Start/Stop, parallelogram for Input/Output, rectangle for Process/Operation, etc.]
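As a concrete illustration of these ideas (our own choice, not taken from the text), Euclid's algorithm for the greatest common divisor is a classic example of a finite sequence of well-defined instructions that moves through successive states and terminates:

```python
# Euclid's algorithm: starting from an initial state (a, b), each loop
# iteration is one well-defined state change, and the computation
# terminates when b reaches 0.

def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b   # successive state: the pair shrinks each step
    return a              # final state: a holds the greatest common divisor

print(gcd(48, 36))  # -> 12
```

The same steps could equally be drawn as a flowchart: an oval for start, a decision diamond for `b != 0`, a process box for the state change, and an oval for stop.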

5) Pseudocode (structured English):- Structured English is the use of the English
language with the syntax of structured programming. Thus structured English aims at getting the
benefits of both programming logic and natural language: program logic helps to attain precision,
while natural language provides the convenience of spoken languages. It is used to describe
an algorithm in a very easy manner using English verbs such as START, BEGIN,
END, STOP, DO, WHILE, etc. It is meant for human reading, not for machines. For example, in a
QBASIC-like style:
START
INPUT A
INPUT B
LET SU=A+B
PRINT SU
END
6) Decision table:- A decision table is a table with various conditions and their corresponding
actions. A decision table is a two-dimensional matrix. It is divided into four parts: condition stub,
action stub, condition entry, and action entry. See the first figure listed below. The condition stub shows
the various possible conditions.
Condition entry is used for specifying which condition is being analyzed. Action stub shows the
various actions taken against different conditions.
And action entry is used to find out which action is taken corresponding to a particular set of
conditions.
The steps to be taken for a certain possible condition are listed by action statements. Action entries
display what specific actions to be undertaken when selected conditions or combinations of
conditions are true. At times notes are added below the table to indicate when to use the table or to
distinguish it from other decisions tables.
The right side columns of the table link conditions and actions and form the decision rules hence
they state the conditions that must be fulfilled for a particular set of actions to be taken. In the
decision trees, a fixed ordered sequence is followed in which conditions are examined. But this is
not the case here as the decision rule incorporates all the required conditions, which must be true.
Example: Consider the recruitment policy of ABC Software Ltd.
If the applicant is a BE then recruit, otherwise do not. If the person is from Computer Science, put him/her in
the software development department, and if the person is from a non-Computer Science background put
him/her in the HR department. If the person is from Computer Science and has experience equal to or
greater than three years, take him/her as a Team Leader; if the experience is less than that, take the
person as a Team Member. If the person recruited is from a non-Computer Science background and has
experience of less than three years, make him/her a Management Trainee, otherwise a Manager.
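The ABC recruitment policy can be sketched as a decision table in Python: each rule maps a combination of condition entries (is the applicant a BE? from Computer Science? with three or more years of experience?) to an action entry. The tuple-key layout is our own encoding of the table:

```python
# A decision table in code: condition entries are the boolean tuple
# (is_be, is_cs, experience >= 3 years); action entries are the values.

RULES = {
    (True,  True,  True):  "Team Leader (software development)",
    (True,  True,  False): "Team Member (software development)",
    (True,  False, True):  "Manager (HR)",
    (True,  False, False): "Management Trainee (HR)",
}

def recruit(is_be: bool, is_cs: bool, experience_years: int) -> str:
    if not is_be:                       # first condition: must be a BE
        return "Do not recruit"
    return RULES[(is_be, is_cs, experience_years >= 3)]

print(recruit(True, True, 4))    # -> Team Leader (software development)
print(recruit(True, False, 1))   # -> Management Trainee (HR)
print(recruit(False, True, 5))   # -> Do not recruit
```

Note how, unlike a decision tree, no fixed ordering of condition checks is implied: each rule simply states the full combination of conditions that must hold for its action.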

7) Decision tree:- A decision tree is a set of rules for what to do under certain conditions,
e.g. if this happens, do that; otherwise go to this step. Decision trees can be used to enforce strict compliance
with local procedures and avoid improper behaviour, especially in complex procedures or life-and-
death situations.
E.g. if the photocopier breaks down, call Raj. If Raj is not available, call Aasha. If Aasha is away,
ring Sary.

They are valuable when setting out how the system should behave, and what conditions it will need
to be able to cope with.


A decision tree showing decisions and actions required of a software system
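The photocopier example can be sketched as a tiny decision tree in Python: conditions are checked in a fixed order, with each branch leading either to another condition or to an action. The availability flags are hypothetical inputs for the sketch:

```python
# The photocopier call-out rules as a decision tree: unlike a decision
# table, the conditions are examined in a fixed ordered sequence.

def who_to_call(raj_available: bool, aasha_available: bool) -> str:
    if raj_available:          # first branch of the tree
        return "Raj"
    elif aasha_available:      # second branch, only reached if Raj is out
        return "Aasha"
    else:                      # final leaf
        return "Sary"

print(who_to_call(False, True))   # -> Aasha
```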

4) System development:-
The development phase involves converting design specifications into executable programs.
Effective development standards include requirements that programmers and other project
participants discuss design specifications before programming begins. The procedures help ensure
programmers clearly understand program designs and functional requirements. Programmers use
various techniques to develop computer programs. The large transaction-oriented programs
associated with financial institutions have traditionally been developed using procedural
programming techniques. Procedural programming involves the line-by-line scripting of logical
instructions that are combined to form a program.
Primary procedural programming activities include the creation and testing of source code and the
refinement and finalization of test plans. Typically, individual programmers write and review (desk
test) program modules or components, which are small routines that perform a particular task within
an application. Completed components are integrated with other components and reviewed, often by
a group of programmers, to ensure the components properly interact. The process continues as
component groups are progressively integrated and as interfaces between component groups and
other systems are tested.
Advancements in programming techniques include the concept of "object-oriented programming."
Object-oriented programming centers on the development of reusable program routines (modules)
and the classification of data types (numbers, letters, dollars, etc.) and data structures (records, files,
tables, etc.). Linking pre-scripted module objects to predefined data-class objects reduces
development times and makes programs easier to modify. Refer to the "Software Development
Techniques" section for additional information on object-oriented programming. Organizations
should complete testing plans during the development phase. Additionally, they should update
conversion, implementation, and training plans and user, operator, and maintenance manuals.
Library Controls:-
Libraries are collections of stored documentation, programs, and data. Program libraries include reusable
program routines or modules stored in source or object code formats. Program libraries allow programmers
to access frequently used routines and add them to programs without having to rewrite the code.
Version Controls
Library controls facilitate software version controls. Version controls provide a means to systematically
retain chronological copies of revised programs and program documentation.
The following tools can be used in this phase:-
1. Prototyping:- Designing and building a scaled-down but functional version of a
desired system is known as prototyping. A prototype can be built with any computer language or
development tool to simplify the process. A prototype can be developed with some 4GLs, such as the
query, screen, and report design tools of a DBMS, and with tools called CASE (computer aided software
engineering) tools.
Using prototyping as a development technique, the analyst works with users to determine the initial or
basic requirements of the system and then builds a prototype. When the prototype is completed, the users
work with it and tell the analyst what they like and dislike about it. The analyst uses this feedback to
improve the prototype and takes the new version back to the users. This process is iterated until the users
are satisfied.
The main advantages of prototyping are the large extent to which it involves the user in analysis and
design, and its ability to capture requirements in concrete rather than abstract form. Additionally,
prototyping can be used to augment the SDLC.
CASE (computer aided software engineering):- CASE tools are a class of
software that automates many of the activities involved in various life cycle phases. For example,
when establishing the functional requirements of a proposed application, prototyping tools can be
used to develop graphic models of application screens to assist end users to visualize how an
application will look after development. Subsequently, system designers can use automated design
tools to transform the prototyped functional requirements into detailed design documents.
Programmers can then use automated code generators to convert the design documents into code.
Automated tools can be used collectively, as mentioned, or individually. For example, prototyping
tools could be used to define application requirements that get passed to design technicians who
convert the requirements into detailed designs in a traditional manner using flowcharts and narrative
documents, without the assistance of automated design software.
Automated tools can also facilitate the coordination of software development activities through the
use of data warehouses or repositories. Repositories provide a means to store and access information
relating to a project, such as project plans, functional requirements, design documents, program
libraries, test banks, etc.
Organizations generally implement automated development tools to increase productivity, decrease
costs, enhance project controls, and increase product quality. However, only by managing the
various risks associated with automated technologies will organizations ensure they develop
systems with appropriate functionality, security, integrity, and reliability.
Common CASE risks and associated controls include:
a. Inadequate Standardization
b. Unrealistic Expectations
c. Quick Implementation
d. Weak Repository Controls

5) System testing (5th phase):-


The testing phase requires organizations to complete various tests to ensure
the accuracy of programmed code, the inclusion of expected functionality, and the interoperability of
applications and other network components. Thorough testing is critical to ensuring systems meet
organizational and end-user requirements. If organizations use effective project management techniques,
they will complete test plans while developing applications, prior to entering the testing phase. Weak project
management techniques or demands to complete projects quickly may pressure organizations to develop
test plans at the start of the testing phase. Test plans created during initial project phases enhance an
organization’s ability to create detailed tests. The use of detailed test plans significantly increases the
likelihood that testers will identify weaknesses before products are implemented.

Testing groups are composed of technicians and end users who are responsible for assembling and
loading representative test data into a testing environment. The groups typically perform tests in stages,
either from a top-down or bottom-up approach. A bottom-up approach tests smaller components first and
progressively adds and tests additional components and systems. A top-down approach first tests major
components and connections and progressively tests smaller components and connections. The
progression and definitions of completed tests vary between organizations.

Bottom-up tests often begin with functional (requirements based) testing. Functional tests should ensure
that expected functional, security, and internal control features are present and operating properly.
Testers then complete integration and end-to-end testing to ensure application and system components
interact properly. Users then conduct acceptance tests to ensure systems meet defined acceptance
criteria.

Testers often identify program defects or weaknesses during the testing process. Procedures should be in
place to ensure programmers correct defects quickly and document all corrections or modifications.
Correcting problems quickly increases testing efficiency by decreasing testers' downtime. It also ensures a
programmer does not waste time trying to debug a defect-free portion of a program that fails only because
another programmer has not yet debugged a defective linked routine. Documenting corrections and
modifications is necessary to maintain the integrity of the overall program documentation.

Organizations should review and complete user, operator, and maintenance manuals during the testing
phase. Additionally, they should finalize conversion, implementation, and training plans.

Mostly, testing is categorized into two types:


1) White box testing (a.k.a. clear box testing, glass box testing, transparent
box testing, or structural testing) uses an internal perspective of the system to design test cases
based on internal structure. It requires programming skills to identify all paths through the software.
The tester chooses test case inputs to exercise paths through the code and determines the appropriate
outputs. In electrical hardware testing, every node in a circuit may be probed and measured; an
example is in-circuit testing.
Typical white box test design techniques include:

• Control flow testing
• Data flow testing
• Branch testing
• Path testing
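A minimal white-box sketch in Python, assuming a hypothetical function under test: the inputs are chosen by reading the code so that every branch is exercised.

```python
# White-box (branch) testing sketch: the tester reads the code and picks
# one input per branch, so every path through the function is executed.
def classify(n):
    if n < 0:
        return "negative"     # branch 1
    elif n == 0:
        return "zero"         # branch 2
    return "positive"         # branch 3

# One test input per branch gives full branch coverage.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
print("all branches covered")
```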

2) Black box testing takes an external perspective of the test object to derive
test cases. These tests can be functional or non-functional, though usually functional. The test
designer selects valid and invalid inputs and determines the correct output. There is no knowledge
of the test object's internal structure.
This method of test design is applicable to all levels of software testing: unit, integration, functional
testing, system and acceptance. The higher the level, and hence the bigger and more complex the
box, the more one is forced to use black box testing to simplify. While this method can uncover
unimplemented parts of the specification, one cannot be sure that all existent paths are tested.
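A minimal black-box sketch, assuming a hypothetical specification ("return the square root of a non-negative number; reject negatives"): test cases are derived from the specification alone, with no knowledge of the implementation's internal structure.

```python
import math

# Hypothetical function under test; the tester treats it as a closed box.
def sqrt_spec(x):
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)

# Valid inputs: expected outputs come only from the specification.
assert sqrt_spec(9) == 3.0
assert sqrt_spec(0) == 0.0
# Invalid input: the specification says this must be rejected.
try:
    sqrt_spec(-1)
    assert False, "negative input should have been rejected"
except ValueError:
    pass
print("spec tests passed")
```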

Besides these, the primary tests include:

• Acceptance Testing – End users perform acceptance tests to assess the overall functionality
and interoperability of an application.
• End-to-End Testing – End users and system technicians perform end-to-end tests to assess
the interoperability of an application and other system components such as databases,
hardware, software, or communication devices.

• Functional Testing – End users perform functional tests to assess the operability of a program
against predefined requirements. Functional tests include black box tests, which assess the
operational functionality of a feature against predefined expectations, or white box tests,
which assess the functionality of a feature’s code.
• Integration Testing – End users and system technicians perform integration tests to assess the
interfaces of integrated software components.
• Parallel Testing – End users perform parallel tests to compare the output of a new application
against a similar, often the original, application.
• Regression Testing – End users retest applications to assess functionality after programmers
make code changes to previously tested applications.
• Stress Testing – Technicians perform stress tests to assess the maximum limits of an
application.
• String Testing – Programmers perform string tests to assess the functionality of related code
modules.
• System Testing – Technicians perform system tests to assess the functionality of an entire
system.
• Unit Testing – Programmers perform unit tests to assess the functionality of small modules
of code.

6) System implementation:-
The implementation phase involves installing approved applications into
production environments. Primary tasks include announcing the implementation schedule, training
end users, and installing the product. Additionally, organizations should input and verify data,
configure and test system and security parameters, and conduct post-implementation reviews.
Management should circulate implementation schedules to all affected parties and should notify
users of any implementation responsibilities.
After organizations install a product, pre-existing data is manually input or electronically transferred
to a new system. Verifying the accuracy of the input data and security configurations is a critical
part of the implementation process. Organizations often run a new system in parallel with an old
system until they verify the accuracy and reliability of the new system. Employees should document
any programming, procedural, or configuration changes made during the verification process.
During installation of a new system, new hardware and software are introduced and staff must be
trained, so the changeover may have to follow a conversion strategy.
A good system prevails when it is implemented without affecting the routine operations
of the organization. This requires careful planning and coordination. In case the system is entirely new,
implementation is quite straightforward. If it replaces an old one, implementation becomes critical
and very risky. In such cases we apply one of the conversion strategies given below.
Parallel conversion: - The old and the new system are run in
parallel until the new one proves reliable.
Direct conversion: - The old system is replaced by the new one
at once. It is very risky.
Pilot conversion: - Only one unit or department is converted rather
than the whole organization.
Phased conversion: - The old system's components are gradually
replaced by the new system's components until the new system is fully operational.

7) System evaluation:-
Management should conduct post-implementation reviews at the end of
a project to validate the completion of project objectives and assess project management activities.
Management should interview all personnel actively involved in the operational use of a product
and document and address any identified problems. Management should analyze the effectiveness
of project management activities by comparing, among other things, planned and actual costs,
benefits, and development times. They should document the results and present them to senior
management. Senior management should be informed of any operational or project management
deficiencies.

8) System maintenance:-
The maintenance phase involves making changes to hardware, software, and
documentation to support its operational effectiveness. It includes making changes to improve a system’s
performance, correct problems, enhance security, or address user requirements. To ensure modifications
do not disrupt operations or degrade a system’s performance or security, organizations should establish
appropriate change management standards and procedures.
Maintenance covers several areas: management of baseline versions of products, services, procedures,
documentation, and their dissemination. Management also emphasizes routine operation, emergency
software modifications, and software patches. Management should coordinate all technology-related
changes through an oversight committee and assign an appropriate party responsible for administering
software patch management programs. Quality assurance, security, audit, regulatory compliance, network,
and end-user personnel should be appropriately involved in the change management process. A risk and
security review should be performed whenever a system modification is implemented to ensure controls
remain in place.

Computer security: - Computer security is a branch of computer technology known
as information security as applied to computers and networks. The objective of computer security includes
protection of information and property from theft, corruption, or natural disaster, while allowing the
information and property to remain accessible and productive to its intended users.
The computer security deals with following:
1. Software and hardware security: We can apply different methods of protection, such as:
o developing powerful operating system
o by using chain of trust
o Cryptography
o Firewall
o Backup
o Antivirus
o Honey pots
o Pinging etc
o Stopping the piracy use of software
2) Data security: it covers
Confidentiality: - data cannot be accessed by unauthorized persons.
Integrity: - no alteration of data in any way.
Authentication: - users are verified to be who they claim to be.

Database
Database: - A database is a collection of data for one or multiple uses, in a very well organized format.
One way of classifying databases involves the type of content, for example: bibliographic, full-text, numeric,
and image. Databases consist of software-based "containers" that are structured to collect and store
information so users can retrieve, add, update or remove such information in an automatic fashion.
Database programs are designed for users so that they can add or delete any information needed. The
structure of a database is tabular, consisting of rows and columns of information.
Database Management System (DBMS) is a set of computer programs that controls the creation,
maintenance, and use of the databases of an organization and its end users. It allows organizations to
place control of organization-wide database development in the hands of database administrators (DBAs)
and other specialists. A DBMS is a system software package that supports the use of an integrated
collection of data records and files known as databases. It allows different user
application programs to easily access the same database. DBMSs may use any of a variety of database
models, such as the network model or relational model. In large systems, a DBMS allows users and other
software to store and retrieve data in a structured way. Instead of having to write computer programs to
extract information, user can ask simple questions in a query language. Thus, many DBMS packages
provide Fourth-generation programming language (4GLs) and other application development features. It
helps to specify the logical organization for a database and access and use the information within a
database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency
control, and restoring databases.

Flat file system (old one approach):- Strictly, a flat file database should consist of nothing but
data and, if records vary in length, delimiters. More broadly, the term refers to any database which exists in
a single file in the form of rows and columns, with no relationships or links between records and fields
except the table structure.
Or A "flat file" is a plain text or mixed text and binary file which usually contains one record per
line or 'physical' record (example on disc or tape). Within such a record, the single fields can be separated
by delimiters, e.g. commas, or have a fixed length. In the latter case, padding may be needed to achieve
this length. Extra formatting may be needed to avoid delimiter collision. There are no structural
relationships between the records.
Flat files have the following disadvantages:-
1) Data redundancy
2) Poor data integrity
3) Poor data consistency
4) Weak security, sharing, etc.
To overcome all of these, modern database systems with tables, relationships, etc. are used.
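A short Python sketch of a comma-delimited flat file (using illustrative student/advisor data) makes the redundancy problem visible: the advisor's name and room are repeated in every record.

```python
# A delimiter-separated flat file: one record per line, fields separated
# by commas, no structural relationships between records.
flat_file = """1022,Jones,412,101-07
1022,Jones,412,143-01
1023,Smith,216,201-01"""

records = [line.split(",") for line in flat_file.splitlines()]

# The advisor's name and room are stored once per record, not once per
# advisor -- the redundancy a relational design removes.
rooms = {(r[1], r[2]) for r in records}
print(records[0])
print(len(records), "records but only", len(rooms), "distinct advisors")
```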
Database schema:-
The schema of a database system is its structure described in a formal language supported by
the database management system (DBMS).In a relational database, the schema defines the tables,
the fields, relationships, views, indexes, packages, procedures, functions, queues etc and other
elements. Schemas are generally stored in a data dictionary. Although a schema is defined in text
database language, the term is often used to refer to a graphical depiction of the database structure.
In other words, schema is the structure of the database that defines the objects in the database.
A schema is structured mainly in two ways:
1) Logical schema:- It describes the database design or structure: mainly how the data is
organized at the logical level. A logical schema is a data model of a specific problem
domain expressed in terms of a particular data management technology. Without being
specific to a particular database management product, it is expressed in terms of relational
tables and columns, object-oriented classes, or XML tags.

2) Physical schema:- The underlying structure of the storage device is the physical schema. It is
for the most part determined by the DBMS. It describes the on-disk representation of data:
layout, partitioning, indexes, space management, etc. The beauty of this arrangement is that
database designers and users do not need to be concerned with physical storage, which
simplifies access to the database and makes it much easier to change both the logical and
physical schemas.

Normalization :In the field of relational database design, normalization is a systematic way of
ensuring that a database structure is suitable for general-purpose querying and free of certain
undesirable characteristics—insertion, update, and deletion anomalies that could lead to a loss of
integrity. or Normalization is the process of organizing data in a database. This includes creating
tables and establishing relationships between those tables according to rules designed both to protect
the data and to make the database more flexible by eliminating two factors: redundancy and
inconsistent dependency.
Types
1NF (first normal form):- A table is said to be in 1NF if there are no repeating groups (fields
repeated for each record) in individual tables. For example:
Student #  Advisor  Adv-room  Class 1  Class 2  Class 3
1022       Jones    412       101-07   143-01   159-02
1023       Smith    216       201-01   211-02   214-01
Here the group named Class is repeated, which should not happen in a two-dimensional table, so
let's bring this table into 1NF, like so:
Student # Advisor Adv-room Class
1022 Jones 412 101-07
1022 Jones 412 143-01
1022 Jones 412 159-02
1023 Smith 216 201-01
1023 Smith 216 211-02
1023 Smith 216 214-01
Now this table is in 1NF.
2NF (second normal form):- A table is said to be in 2NF if it is in 1NF and every non-key field is
fully functionally dependent on the whole primary key (no partial dependencies). Let's take an example,
Student # Advisor Adv-room Class
1022 Jones 412 101-07
1022 Jones 412 143-01
1022 Jones 412 159-02
1023 Smith 216 201-01
1023 Smith 216 211-02
1023 Smith 216 214-01
In the table above, the key is (Student #, Class), but Advisor and Adv-room depend only on Student #,
a partial dependency, so the table is not in 2NF. To bring it into 2NF, let's break it into the two tables given below.
Students
Student #  Advisor  Adv-room
1022       Jones    412
1023       Smith    216
Registration
Student # Class#
1022 101-07
1022 143-01

1022 159-02
1023 201-01
1023 211-02
1023 214-01
Now we can see the above tables are completely in 2NF.
3NF (third normal form):- A table is said to be in 3NF if it is in 2NF and no non-key field
depends on another non-key field, i.e. there are no transitive dependencies. For example,
Students
Student #  Advisor  Adv-room
1022       Jones    412
1023       Smith    216

In the above example, Adv-room (the advisor's office number) is functionally dependent on the
Advisor attribute rather than on the key, so the table is not in 3NF. The solution is to move that
attribute from the Students table to a Faculty table, as shown below:
Students
Student #  Advisor
1022       Jones
1023       Smith

Faculty
Name Room Dept
Jones 412 42
Smith 216 42
Now we can see the decomposed tables are in 3NF.
Data definition language (DDL):- This is the means by which the content and format of the data to be
stored are described and the structure of the database is defined, including relationships between records
and indexing strategies. This definition of the database is known as the schema.
DDL is essentially the link between the logical and physical views of the database. Here "logical" refers to
the way the users view the data; "physical" refers to the way the data are physically stored. The logical
structure of the database is sometimes known as the schema.
DDL is used to define the physical characteristics of each record: the fields within the record and each
field's logical name, data type, and length. It is also used to specify relationships among records.
DDL carries following major functions.
1. describes schema & subschema
2. describes fields in each record & record’s logical name
3. describes data type & name of each field
4. indicates keys on each record
5. provides security restriction
6. provides means of associating related data
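A minimal DDL sketch using Python's sqlite3 module (names are illustrative): the CREATE TABLE statements define each record's fields, data types, keys, and a relationship via a foreign key, and the resulting schema can be read back from SQLite's built-in catalog.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- schema: each field gets a logical name, a data type, and constraints
    CREATE TABLE advisor (
        name TEXT PRIMARY KEY,          -- key on the record
        room INTEGER NOT NULL
    );
    CREATE TABLE student (
        student_no INTEGER PRIMARY KEY,
        advisor_name TEXT REFERENCES advisor(name)  -- associates related data
    );
""")

# The defined schema can be read back from SQLite's catalog table.
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)   # ['advisor', 'student']
```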
Data Manipulation Language (DML) is a family of computer languages used by computer programs and
database users to retrieve, insert, delete, and update data in a database.
Currently the most popular data manipulation language is that of SQL, which is used to retrieve and
manipulate data in a Relational database. Other forms of DML are those used by IMS/DLI, CODASYL
databases (such as IDMS), and others. Data manipulation languages were initially only used by computer
programs, but (with the advent of SQL) have come to be used by people, as well. Data Manipulation
Languages have their functional capability organized by the initial word in a statement, which is almost
always a verb. In the case of SQL, these verbs are:
SELECT
INSERT
UPDATE
DELETE
Each SQL statement is a declarative command. The individual SQL statements are declarative, as
opposed to imperative, in that they describe what the program should accomplish, rather than describing
how to go about accomplishing it.
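The four verbs can be sketched with Python's sqlite3 module (table and data are illustrative): each statement declares what should happen to the data, not how to find it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# The four DML verbs in action:
con.execute("INSERT INTO emp VALUES (1, 'Ram', 'HR'), (2, 'Sita', 'Dev')")
con.execute("UPDATE emp SET dept = 'QA' WHERE id = 2")   # move Sita to QA
con.execute("DELETE FROM emp WHERE id = 1")              # remove Ram

rows = con.execute("SELECT id, name, dept FROM emp").fetchall()
print(rows)   # [(2, 'Sita', 'QA')]
```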
Data dictionary:- Database users and application developers can benefit from an
authoritative data dictionary document that catalogs the organization, contents, and conventions of one or
more databases. This typically includes the names and descriptions of various tables and fields in each
database, plus additional details, like the type and length of each data element. There is no universal
standard as to the level of detail in such a document, but it is primarily a distillation of metadata about
database structure, not the data itself. A data dictionary document also may include further information
describing how data elements are encoded. One of the advantages of well-designed data dictionary
documentation is that it helps to establish consistency throughout a complex database, or across a large
collection of federated databases.
“SQL”:- It is a relational database language, first developed by IBM. It was originally called
“SEQUEL”.
SQL is used to query, update, and manage relational database systems. Although it is not a full
programming language, it can be used to formulate interactive queries or be embedded in an application
as instructions for handling data. It also contains components for defining, altering, controlling, and
securing data. Mainly it has the following parts:-
1. Data definition language
2. Interactive data manipulation language
3. Embedded DML
4. View definition
5. Integrity, etc.
Data model:- In software engineering, a data model is an abstract model that describes how data is represented
and accessed. Data models formally define data elements and relationships among data elements for a
domain of interest.
Or a data model explicitly determines the meaning of data, which in this case is known as structured
data (as opposed to unstructured data, for example an image, a binary file or a natural language text,
where the meaning has to be elaborated). Typical applications of data models include database models,
design of information systems, and enabling exchange of data.

Database model:- A database model or database schema is the structure or format of
a database, described in a formal language supported by the database management
system.
Hierarchical model:-In a hierarchical model, data is organized into a tree-like
structure, implying a single upward link in each record to describe the nesting, and a
sort field to keep the records in a particular order in each same-level list. Hierarchical
structures were widely used in the early mainframe database management systems,
such as the Information Management System (IMS) by IBM, and now describe the
structure of XML documents. This structure allows one 1:N relationship between two
types of data. This structure is very efficient to describe many relationships in the real
world; recipes, table of contents, ordering of paragraphs/verses, any nested and sorted
information. However, the hierarchical structure is inefficient for certain database
operations when a full path (as opposed to upward link and sort field) is not also
included for each record.
Parent–child relationship: A child may have only one parent, but a parent can have
multiple children. Parents and children are tied together by links called "pointers". A
parent holds a list of pointers to each of its children. E.g.:
[Figure: a hierarchy with a sales person record as the parent and sales statistics,
customer, and accounts receivable records as its children.]
Advantages:- 1) Easiest model of database.
2) Searching is fast if the parent is known.
3) Supports one-to-many relationships.
Disadvantages: 1) Old and outdated.
2) Cannot handle many-to-many relationships.
3) More redundancy.
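The parent-child structure can be sketched in Python (record names follow the sales example above; the class is an illustration, not a real DBMS):

```python
# Hierarchical (parent-child) records: each parent holds a list of
# pointers to its children, and every child has exactly one parent.
class Record:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # single upward link
        self.children = []            # pointers to child records
        if parent is not None:
            parent.children.append(self)

sales = Record("Sales person record")
Record("Sales statistics record", parent=sales)
Record("Customer record", parent=sales)
Record("Accounts receivable record", parent=sales)

# Searching is fast when the parent is known: just follow its pointers.
print([c.name for c in sales.children])
```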
Network model:-The network model (defined by the CODASYL specification)
organizes data using two fundamental constructs, called records and sets. Records
contain fields (which may be organized hierarchically, as in the programming
language COBOL). Sets (not to be confused with mathematical sets) define one-to-
many relationships between records: one owner, many members. A record may be an
owner in any number of sets, and a member in any number of sets.
The network model is a variation on the hierarchical model, to the extent that it is built
on the concept of multiple branches (lower-level structures) emanating from one or
more nodes (higher-level structures), while the model differs from the hierarchical
model in that branches can be connected to multiple nodes. The network model is able
to represent redundancy in data more efficiently than in the hierarchical model. The
operations of the network model are navigational in style: a program maintains a
current position, and navigates from one record to another by following the
relationships in which the record participates. Records can also be located by
supplying key values.
Although it is not an essential feature of the model, network databases generally
implement the set relationships by means of pointers that directly address the location
of a record on disk. This gives excellent retrieval performance, at the expense of
operations such as database loading and reorganization.
[Figure: a network model linking order entry, dealer, inventory master, part
description, shop schedule, and sub-assembly records, where a record may be
owned by more than one other record.]

Advantages:- 1) More flexible due to many-to-many relationships.
2) Reduced redundancy.
3) Searching is very fast.
Disadvantages:- 1) Very complex.
2) Needs long and complex programs.
3) Less security.
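A rough Python sketch of records and sets (illustrative names, not a real CODASYL implementation): the same record is a member of sets owned by two different owners, which the single-parent hierarchical model cannot express.

```python
# Network-model records and sets: an owner record keeps named sets of
# member records, and a record may be a member of any number of sets.
class Record:
    def __init__(self, name):
        self.name = name
        self.owned_sets = {}          # set name -> list of member records

    def connect(self, set_name, member):
        self.owned_sets.setdefault(set_name, []).append(member)

part = Record("Part description record")
order = Record("Order entry record")
shop = Record("Shop schedule record")

# The same part record participates in sets owned by two different owners.
order.connect("ordered-parts", part)
shop.connect("scheduled-parts", part)

print([m.name for m in order.owned_sets["ordered-parts"]])
print([m.name for m in shop.owned_sets["scheduled-parts"]])
```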
Relational model: - It is a mathematical model defined in terms of predicate logic and
set theory.
The products that are generally referred to as relational databases in fact implement a
model that is only an approximation to the mathematical model defined by Codd.
Three key terms are used extensively in relational database models: relations,
attributes, and domains. A relation is a table with columns and rows. The named
columns of the relation are called attributes, and the domain is the set of values the
attributes are allowed to take.
The basic data structure of the relational model is the table, where information about a
particular entity (say, an employee) is represented in columns and rows (also called
tuples). Thus, the "relation" in "relational database" refers to the various tables in the
database; a relation is a set of tuples. The columns enumerate the various attributes of
the entity (the employee's name, address or phone number, for example), and a row is
an actual instance of the entity (a specific employee) that is represented by the relation.
As a result, each tuple of the employee table represents various attributes of a single
employee.
All relations (and, thus, tables) in a relational database have to adhere to some basic
rules to qualify as relations. First, the ordering of columns is immaterial in a table.
Second, there can't be identical tuples or rows in a table. And third, each tuple will
contain a single value for each of its attributes.
Data integrity: Integrity is consistency of actions, values, methods, measures,
principles, expectations and outcome. Data integrity is a term used in computer
science and telecommunications that can mean ensuring data is "whole" or complete,
the condition in which data are identically maintained during any operation (such as
transfer, storage or retrieval), the preservation of data for their intended use, or,
relative to specified operations, the a priori expectation of data quality. Put simply,
data integrity is the assurance that data is consistent and correct.
Data integrity is normally enforced in a database system by a series of integrity
constraints or rules. Two types of integrity constraints are an inherent part of the
relational data model: entity integrity and referential integrity.

Entity integrity concerns the concept of a primary key. Entity integrity is an integrity
rule which states that every table must have a primary key and that the column or
columns chosen to be the primary key should be unique and not null.

Referential integrity concerns the concept of a foreign key. The referential integrity
rule states that any foreign key value can only be in one of two states. The usual state
of affairs is that the foreign key value refers to a primary key value of some table in
the database. Occasionally, and this will depend on the rules of the business, a foreign
key value can be null. In this case we are explicitly saying that either there is no
relationship between the objects represented in the database or that this relationship is
unknown
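Both rules can be seen in action with SQLite through Python's built-in sqlite3 module; this is only a sketch, and the table and column names are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when asked

con.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,               -- entity integrity: unique, not null
        dept_id INTEGER REFERENCES dept (dept_id)  -- referential integrity: foreign key
    )
""")

con.execute("INSERT INTO dept VALUES (10, 'Sales')")
con.execute("INSERT INTO employee VALUES (1, 10)")    # refers to an existing dept
con.execute("INSERT INTO employee VALUES (2, NULL)")  # allowed: relationship unknown

try:
    con.execute("INSERT INTO employee VALUES (3, 99)")  # 99 is not a dept_id
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the referential integrity rule blocks the row
```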

E-R model: An Entity-Relationship Model (ERM) in software engineering is an
abstract and conceptual representation of data.
Here an entity may be a physical object such as a house or a car, an event such as a
house sale or a car service, or a concept such as a customer transaction.
Entity-relationship diagrams don't show single entities or single instances of
relations. Rather, they show entity sets and relationship sets. Example: a particular
song is an entity. The collection of all songs in a database is an entity set. The eats
relationship between a child and her lunch is a single relationship. The set of all such
child-lunch relationships in a database is a relationship set.
Lines are drawn between entity sets and the relationship sets they are involved in. If all
entities in an entity set must participate in the relationship set, a thick or double line is
drawn. This is called a participation constraint. If each entity of the entity set can
participate in at most one relationship in the relationship set, an arrow is drawn from
the entity set to the relationship set. This is called a key constraint. To indicate that
each entity in the entity set is involved in exactly one relationship, a thick arrow is
drawn.

DBA: - A database administrator (DBA) is a person who is responsible for the
environmental aspects of a database. The role of a database administrator has changed
according to the technology of database management systems (DBMSs) as well as the
needs of the owners of the databases.

Duties:
1. Installation of new software.
2. Configuration of hardware and software with the system administrator
3. Security administration
4. Data analysis
5. Database design (preliminary)
6. Data modeling and optimization
7. Responsible for the administration of existing enterprise databases and the
analysis, design, and creation of new databases
Networking
Networking: A computer network, often simply referred to as a network, is a collection
of computers and devices connected by communications channels that facilitates communications among
users and allows users to share resources with other users. Networks may be classified according to a
wide variety of characteristics.

Radio waves:-
Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than
infrared light. Naturally-occurring radio waves are produced by lightning, or by astronomical objects. Artificially-
generated radio waves are used for fixed and mobile radio communication, broadcasting, radar and other navigation
systems, satellite communication, computer networks and innumerable other applications. Different frequencies of
radio waves have different propagation characteristics in the Earth's atmosphere; long waves may cover a part of the
Earth very consistently, shorter waves can reflect off the ionosphere and travel around the world, and much shorter
wavelengths bend or reflect very little and travel on a line of sight.
In order to receive radio signals, for instance from AM/FM radio stations, a radio antenna must be used. However,
since the antenna will pick up thousands of radio signals at a time, a radio tuner is necessary to tune in to a particular
frequency (or frequency range). This is typically done via a resonator (in its simplest form, a circuit with a capacitor
and an inductor). The resonator is configured to resonate at a particular frequency (or frequency band), thus
amplifying sine waves at that radio frequency, while ignoring other sine waves. Usually, either the inductor or the
capacitor of the resonator is adjustable, allowing the user to change the frequency at which it resonates.
Infrared:-
Infrared radiation (IR) is electromagnetic radiation with a wavelength between 0.7 and 300 micrometres, which
equates to a frequency range between approximately 1 and 430 THz.
Its wavelength is longer (and the frequency lower) than that of visible light, but the wavelength is shorter (and the
frequency higher) than that of terahertz radiation and microwaves. Bright sunlight provides an irradiance of just over
1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and
32 watts is ultraviolet radiation.
Bluetooth:
Bluetooth is a proprietary open wireless protocol for exchanging data over short distances (using
short length radio waves) from fixed and mobile devices, creating personal area networks (PANs). It was originally
conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of
synchronization.
Bluetooth is a standard communications protocol primarily designed for low power consumption, with a short range
(power-class-dependent: roughly 100 m, 10 m and 1 m, but ranges vary in practice) based on low-cost
transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do
not have to be in line of sight of each other
Satellite:
In the context of spaceflight, a satellite is an object which has been placed into orbit by human endeavor. Such objects
are sometimes called artificial satellites to distinguish them from natural satellites such as the Moon.
The first artificial satellite, Sputnik 1, was launched by the Soviet Union in 1957. By 2010 thousands of satellites have
been launched into orbit around the Earth. These originate from more than 50 countries and have used the satellite
launching capabilities of ten nations. A few hundred satellites are currently operational, whereas thousands of unused
satellites and satellite fragments orbit the Earth as space debris. A few space probes have been placed into orbit around
other bodies and become artificial satellites to the Moon, Venus, Mars, Jupiter and Saturn.
Satellites are used for a large number of purposes. Common types include military and civilian Earth observation
satellites, communications satellites, navigation satellites, weather satellites, and research satellites. Space stations and
human spacecraft in orbit are also satellites. Satellite orbits vary greatly, depending on the purpose of the satellite, and
are classified in a number of ways. Well-known (overlapping) classes include low Earth orbit, polar orbit, and
geostationary orbit.
Satellites are usually semi-independent computer controlled systems. Satellite subsystems attend many tasks, such as
power generation, thermal control, telemetry, attitude control and orbit control.
Narrowband
Narrowband refers to a situation in radio communications where the bandwidth of the message does not significantly
exceed the channel's coherence bandwidth. It is a common misconception that narrowband refers to a channel which
occupies only a "small" amount of space on the radio spectrum.
The opposite of narrowband is wideband.
In the study of wireless channels, narrowband implies that the channel under consideration is sufficiently narrow that
its frequency response can be considered flat. The message bandwidth will therefore be less than the coherence
bandwidth of the channel. This is usually used as an idealizing assumption; no channel has perfectly flat fading, but
the analysis of many aspects of wireless systems is greatly simplified if flat fading can be assumed.
Narrowband can also be used with the audio spectrum to describe sounds which occupy a narrow range of
frequencies.
In telephony, narrowband is usually considered to cover frequencies 300–3400 Hz.

Bridge:-A device that connects two local-area networks (A computer network that spans a relatively small area. Most
LANs are confined to a single building or group of buildings. However, one LAN can be connected to other LANs
over any distance via telephone lines and radio waves. (LANs), or two segments of the same LAN that use the same
protocol (An agreed-upon format for transmitting data between two devices). The protocol determines the following:
1) the type of error checking to be used; 2) the data compression method, if any; 3) how the sending device will
indicate that it has finished sending a message; and 4) how the receiving device will indicate that it has received a
message. There are a
variety of standard protocols from which programmers can choose. Each has particular advantages and disadvantages;
for example, some are simpler than others, some are more reliable, and some are faster. Examples include Ethernet (A local-area
network (LAN) architecture developed by Xerox Corporation in cooperation with DEC and Intel in 1976. Ethernet
uses a bus or star topology and supports data transfer rates of 10 Mbps. The Ethernet specification served as the basis
for the IEEE 802.3 standard, which specifies the physical and lower software layers.) or Token-Ring (A type of
computer network in which all the computers are arranged (schematically) in a circle. A token, which is a special bit
pattern, travels around the circle. To send a message, a computer catches the token, attaches a message to it, and then
lets it continue to travel around the network.).
Repeater:- A network device used to regenerate or replicate a signal. Repeaters are used in transmission systems to
regenerate analog or digital signals distorted by transmission loss. Analog repeaters frequently can only amplify the
signal while digital repeaters can reconstruct a signal to near its original quality.
In a data network, a repeater can relay messages between sub networks that use different protocols or cable types.
Hubs(A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN.
A hub contains multiple ports. When a packet arrives at one port, it is copied to the other ports so that all segments of
the LAN can see all packets. ) can operate as repeaters by relaying messages to all connected computers. A repeater
cannot do the intelligent routing(In internetworking, the process of moving a packet of data from source to destination.
Routing is usually performed by a dedicated device called a router. Routing is a key feature of the Internet because it
enables messages to pass from one computer to another and eventually reach the target machine. Each intermediary
computer performs routing by passing along the message to the next computer. Part of this process involves analyzing
a routing table to determine the best path.

Routing is often confused with bridging, which performs a similar function. The principal difference
between the two is that bridging occurs at a lower level and is therefore more of a hardware function whereas
routing occurs at a higher level where the software component is more important. And because routing
occurs at a higher level, it can perform more complex analysis to determine the optimal path for the packet.)
performed by bridges and routers(A device that forwards data packets along networks. A router is connected to
at least two networks, commonly two LANs or WANs or a LAN and its ISP’s network. Routers are located at
gateways, the places where two or more networks connect. Routers use headers and forwarding tables to
determine the best path for forwarding the packets, and they use protocols such as ICMP to communicate
with each other and configure the best route between any two hosts).Very little filtering of data is done
through routers
Gateway:- A node (in networks, a node is a processing location; it can be a computer or some other device, such
as a printer; every node has a unique network address, sometimes called a Data Link Control (DLC) address or
Media Access Control (MAC) address; in tree structures, it is a point where two or more lines meet) on a network
that serves as an entrance to another network. In enterprises, the gateway is the computer that routes the traffic
from a workstation to the outside network that is serving the Web pages. In homes, the gateway is the ISP that
connects the user to the Internet. In
enterprises, the gateway node often acts as a proxy server(A server that sits between a client application, such
as a Web browser, and a real server. It intercepts all requests to the real server to see if it can fulfill the
requests itself. If not, it forwards the request to the real server.) and a firewall(A system designed to prevent
unauthorized access to or from a private network. Firewalls can be implemented in both hardware and
software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from
accessing private networks connected to the Internet, especially intranets. All messages entering or leaving
the intranet pass through the firewall, which examines each message and blocks those that do not meet the
specified security criteria.). The gateway is also associated with both a router, which use headers and
forwarding tables to determine where packets are sent, and a switch, which provides the actual path for the
packet in and out of the gateway.
(2) A computer system located on earth that switches data signals and voice signals between satellites and terrestrial
networks.
(3) An earlier term for router, though now obsolete in this sense as router is commonly used.
Router: - A device that forwards data packets along networks. A router is connected to at least two networks,
commonly two LANs or WANs or a LAN and its ISP’s network. Routers are located at gateways, the places where
two or more networks connect. Routers use headers (In a network transmission, a header is part of the data packet and
contains transparent information about the file or the transmission. In file management, a header is a region at the
beginning of each file where bookkeeping information is kept. The file header may contain the date the file was
created, the date it was last updated, and the file's size. The header can be accessed only by the operating system or by
specialized programs) and forwarding tables to determine the best path for forwarding the packets, and they use
protocols such as ICMP (Short for Internet Control Message Protocol, an extension to the Internet Protocol (IP)
defined by RFC 792. ICMP supports packets containing error, control, and informational messages. The PING
command, for example, uses ICMP to test an Internet connection.) to communicate with each other and configure the
best route between any two hosts.
Very little filtering of data is done through routers.
OSI model:- Short for Open System Interconnection, an ISO standard for worldwide communications that defines a
networking framework for implementing protocols in seven layers. Control is passed from one layer to the next,
starting at the application layer in one station, and proceeding to the bottom layer, over the channel to the next station
and back up the hierarchy.
At one time, most vendors agreed to support OSI in one form or another, but OSI was too loosely defined and
proprietary standards were too entrenched. Except for the OSI-compliant X.400 and X.500 e-mail and directory
standards, which are widely used, what was once thought to become the universal communications standard now
serves as the teaching model for all other protocols. Most of the functionality in the OSI model exists in all
communications systems, although two or three OSI layers may be incorporated into one.
Its seven layers are:-
Application (Layer 7): This layer supports application and end-user processes. Communication partners are
identified, quality of service is identified, user authentication and privacy are considered, and any constraints
on data syntax are identified. Everything at this layer is application-specific. This layer provides application
services for file transfers, e-mail, and other network software services. Telnet and FTP are applications that
exist entirely in the application level. Tiered application architectures are part of this layer.
Presentation (Layer 6): This layer provides independence from differences in data representation (e.g.,
encryption) by translating from application to network format, and vice versa. The presentation layer works to
transform data into the form that the application layer can accept. This layer formats and encrypts data to be
sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.
Session (Layer 5): This layer establishes, manages and terminates connections between applications. The session
layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at
each end. It deals with session and connection coordination.
Transport (Layer 4): This layer provides transparent transfer of data between end systems, or hosts, and is
responsible for end-to-end error recovery and flow control. It ensures complete data transfer.
Network (Layer 3): This layer provides switching and routing technologies, creating logical paths, known as
virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as
well as addressing, internetworking, error handling, congestion control and packet sequencing.
Data Link (Layer 2): At this layer, data packets are encoded and decoded into bits. It furnishes transmission
protocol knowledge and management and handles errors in the physical layer, flow control and frame
synchronization. The data link layer is divided into two sublayers: the Media Access Control (MAC) layer and
the Logical Link Control (LLC) layer. The MAC sublayer controls how a computer on the network gains access
to the data and permission to transmit it. The LLC layer controls frame synchronization, flow control and error
checking.
Physical (Layer 1): This layer conveys the bit stream - electrical impulse, light or radio signal - through the
network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on
a carrier, including defining cables, cards and physical aspects. Fast Ethernet, RS232, and ATM are protocols
with physical layer components.
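The hand-off of control down the stack at the sender and back up at the receiver can be sketched as successive wrapping and unwrapping of the data. This is only an illustration of the encapsulation idea; the bracketed text headers stand in for the real binary headers each layer adds.

```python
LAYERS = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

def send(data):
    # Going down the stack: each layer encapsulates the data from the layer
    # above, so the Physical layer's header ends up outermost.
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data

def receive(frame):
    # Going back up the stack: each layer strips the header its peer added,
    # outermost (Physical) first.
    for layer in reversed(LAYERS):
        frame = frame.removeprefix(f"[{layer}]")
    return frame

on_wire = send("hello")
print(on_wire)           # headers from all seven layers wrap the payload
print(receive(on_wire))  # the receiver recovers the original data
```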

TCP/IP:- TCP is short for Transmission Control Protocol, pronounced as separate letters, and is one of the main
protocols of the suite.
Here protocol means an agreed-upon format for transmitting data between two devices. The protocol determines the
following:
1) the type of error checking to be used
2) the data compression method, if any
3) how the sending device will indicate that it has finished sending a message
4) how the receiving device will indicate that it has received a message
There are a variety of standard protocols from which programmers can choose. Each has particular advantages and
disadvantages; for example, some are simpler than others, some are more reliable, and some are faster.
From a user's point of view, the only interesting aspect about protocols is that your computer or device must support
the right ones if you want to communicate with other computers. The protocol can be implemented either in hardware
or in software.
All communications between devices require that the devices agree on the format of the data. The set of rules
defining a format is called a protocol. At the very least, a communications protocol must define the following:
1) the rate of transmission (in baud or bps)
2) whether transmission is to be synchronous or asynchronous
3) whether data is to be transmitted in half-duplex or full-duplex mode
In addition, protocols can include sophisticated techniques for detecting and recovering from transmission errors and
for encoding and decoding data.
The most commonly used protocols for communications via modems are almost always implemented in the hardware;
that is, they are built into modems.
In addition to these standard protocols, there are a number of protocols that complement them by adding additional
functions such as file transfer capability, error detection and recovery, and data compression. The best-known are
Xmodem, Kermit, MNP, and CCITT V.42. These protocols can be implemented either in hardware or in software.
TCP/IP (The Transmission Control Protocol/Internet Protocol) is the protocol suite that drives the Internet.
Specifically, TCP/IP handles network communications between network nodes (computers, or nodes, connected to the
net).
The suite is actually composed of several protocols including IP which handles the movement of data between host
computers, TCP which manages the movement of data between applications, UDP which also manages the movement
of data between applications but is less complex and reliable than TCP, and ICMP which transmits error messages and
network traffic statistics
It is used to connect hosts on the Internet. TCP/IP uses several protocols, the two main ones being
TCP and IP. TCP/IP is built into the UNIX operating system and is used by the Internet, making it the de facto
standard for transmitting data over networks. Even network operating systems that have their own protocols, such as
Netware, also support TCP/IP.
IP –here IP means Short for Internet Protocol. IP specifies the format of packets, also called datagram, and the
addressing scheme. Most networks combine IP with a higher-level protocol called Transmission Control Protocol
(TCP), which establishes a virtual connection between a destination and a source.
IP by itself is something like the postal system. It allows you to address a package and drop it in the system, but there's
no direct link between you and the recipient. TCP/IP, on the other hand, establishes a connection between two hosts so
that they can send messages back and forth for a period of time.
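That virtual connection between two hosts can be sketched with Python's standard socket module. This is a minimal illustration only: the loopback address, the OS-chosen port, and the message are all arbitrary choices for the demo.

```python
import socket
import threading

def echo_server(server_sock):
    conn, _addr = server_sock.accept()  # TCP: the connection is established first
    with conn:
        data = conn.recv(1024)          # receive a segment from the peer
        conn.sendall(data)              # echo it back over the same connection

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())    # the "virtual connection" between two hosts
client.sendall(b"hello over TCP/IP")    # messages flow back and forth on it
reply = client.recv(1024).decode()
print(reply)
client.close()
```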

CSMA/CD (carrier sense multiple access with collision detection): -
is a network access method in which a carrier sensing scheme is used. A transmitting data station that
detects another signal while transmitting a frame, stops transmitting that frame, transmits a jam signal, and
then waits for a random time interval (known as "backoff delay" and determined using the truncated binary
exponential backoff algorithm) before trying to send that frame again. CSMA/CD is a modification of pure
Carrier Sense Multiple Access (CSMA). Collision detection is used to improve CSMA performance by
terminating transmission as soon as a collision is detected, and reducing the probability of a second collision
on retry. Methods for collision detection are media dependent, but on an electrical bus such as Ethernet,
collisions can be detected by comparing transmitted data with received data. If they differ, another
transmitter is overlaying the first transmitter's signal (a collision), and transmission terminates immediately.
A jam signal is sent which will cause all transmitters to back off by random intervals, reducing the
probability of a collision when the first retry is attempted. CSMA/CD is a layer-2 access method of the
OSI model, not a protocol.
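The truncated binary exponential backoff mentioned above can be sketched as follows. The slot-time framing and the truncation limit of 10 follow the common Ethernet convention; treat them as assumptions of the sketch.

```python
import random

def backoff_slots(attempt, max_exponent=10):
    # After the n-th collision, the station waits a random number of slot
    # times drawn uniformly from 0 .. 2**k - 1, where k = min(n, max_exponent).
    # "Truncated" means the exponent stops growing at max_exponent, so the
    # waiting window never exceeds 2**max_exponent - 1 slots.
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

random.seed(1)  # only so the sketch is repeatable
for attempt in range(1, 5):
    print(f"collision #{attempt}: wait {backoff_slots(attempt)} slot times")
```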
Multiplexing and de multiplexing:
In telecommunications and computer networks, multiplexing (also known as muxing) is a process where
multiple analog message signals or digital data streams are combined into one signal over a shared medium.
The aim is to share an expensive resource. For example, in telecommunications, several phone calls may be
transferred using one wire. It originated in telegraphy, and is now widely applied in communications.
The multiplexed signal is transmitted over a communication channel, which may be a physical transmission
medium. The multiplexing divides the capacity of the low-level communication channel into several higher-
level logical channels, one for each message signal or data stream to be transferred. A reverse process,
known as demultiplexing, can extract the original channels on the receiver side.
A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the
reverse process is called a demultiplexer (DEMUX).
De-multiplexer:- a demultiplexer (or demux) is a device taking a single input signal and selecting
one of many data-output-lines, which is connected to the single input. A multiplexer is often used with a
complementary demultiplexer on the receiving end.

An electronic multiplexer can be considered as a multiple-input, single-output switch, and a demultiplexer as
a single-input, multiple-output switch. The schematic symbol for a multiplexer is an isosceles trapezoid with
the longer parallel side containing the input pins and the short parallel side containing the output pin. In a
2-to-1 multiplexer, for example, a select (sel) wire connects the desired input to the output.
In telecommunications, a multiplexer is a device that combines several input information signals into one
output signal, which carries several communication channels, by means of some multiplex technique. A
demultiplexer is in this context a device taking a single input signal that carries many channels and separates
those over multiple output signals
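The combine-then-separate behaviour described above can be sketched as simple time-division multiplexing, where each "frame" on the shared channel carries one slot per stream. The stream contents here are invented for illustration.

```python
def multiplex(streams):
    # Round-robin TDM: take one item from each stream per frame and
    # interleave them onto the single shared channel.
    channel = []
    for frame in zip(*streams):
        channel.extend(frame)
    return channel

def demultiplex(channel, n_streams):
    # The receiver recovers stream i by taking every n-th slot, offset by i.
    return [channel[i::n_streams] for i in range(n_streams)]

calls = [["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]]
shared = multiplex(calls)
print(shared)                  # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', 'a3', 'b3', 'c3']
print(demultiplex(shared, 3))  # the original three streams, recovered
```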
Switching:- In electronics, a switch is an electrical component that can break an electrical circuit,
interrupting the current or diverting it from one conductor to another.
A switch may be directly manipulated by a human as a control signal to a system, such as a computer
keyboard button, or to control power flow in a circuit, such as a light switch. Automatically-operated
switches can be used to control the motions of machines, for example, to indicate that a garage door has
reached its full open position or that a machine tool is in a position to accept another workpiece. Switches
may be operated by process variables such as pressure, temperature, flow, current, voltage, and force, acting
as sensors in a process and used to automatically control a system. For example, a thermostat is an
automatically-operated switch used to control a heating process
Packet switching:- is a digital networking communications method that groups all transmitted data –
irrespective of content, type, or structure – into suitably-sized blocks, called packets. Packet switching
features delivery of variable-bit-rate data streams (sequences of packets) over a shared network. When
traversing network adapters, switches, routers and other network nodes, packets are buffered and queued,
resulting in variable delay and throughput depending on the traffic load in the network.
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which
sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for
exclusive use during the communication session. In case of traffic fees, for example in cellular
communication, circuit switching is characterized by a fee per time unit of connection time, even when no
data is transferred, while packet switching is characterized by a fee per unit of information.
Two major packet switching modes exist: connectionless packet switching, also known as datagram
switching, and connection-oriented packet switching, also known as virtual circuit switching. In the first
case each packet includes complete addressing or routing information. The packets are routed individually,
sometimes resulting in different paths and out-of-order delivery. In the second case a connection is defined
and preallocated in each involved node before any packet is transferred. The packets include a connection
identifier rather than address information, and are delivered in order.
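The difference between the two modes shows up in what each packet must carry, which can be sketched like this (the addresses, circuit identifier, and message are all invented for illustration):

```python
message = "HELLO WORLD"

# Connectionless (datagram) switching: every packet carries full addressing
# information, so packets can be routed independently and may arrive out of
# order; a sequence number lets the receiver reassemble them.
datagrams = [{"src": "A", "dst": "B", "seq": i // 4, "data": message[i:i + 4]}
             for i in range(0, len(message), 4)]

# Connection-oriented (virtual circuit) switching: a circuit identifier is
# agreed once during setup, and each packet carries only that identifier;
# packets follow the preallocated path and arrive in order.
VC_ID = 7
vc_packets = [{"vc": VC_ID, "data": message[i:i + 4]}
              for i in range(0, len(message), 4)]

print(datagrams[0])   # full addressing in every packet
print(vc_packets[0])  # only the small connection identifier
```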
Circuit switching:-
In telecommunications, a circuit switching network is one that establishes a circuit (or channel) between
nodes and terminals before the users may communicate, as if the nodes were physically connected with an
electrical circuit.
The bit delay is constant during a connection, as opposed to packet switching, where packet queues may
cause varying packet transfer delay. Each circuit cannot be used by other callers until the circuit is released
and a new connection is set up. Even if no actual communication is taking place in a dedicated circuit that
channel remains unavailable to other users. Channels that are available for new calls to be set up are said to
be idle.
Virtual circuit switching is a packet switching technology that may emulate circuit switching, in the sense
that the connection is established before any packets are transferred, and that packets are delivered in order.
There is a common misunderstanding that circuit switching is used only for connecting voice circuits
(analog or digital). The concept of a dedicated path persisting between two communicating parties or nodes
can be extended to signal content other than voice. Its advantage is that it provides for non-stop transfer
without requiring packets and without most of the overhead traffic usually needed, making maximal and
optimal use of available bandwidth for that communication. The disadvantage of inflexibility tends to
reserve it for specialized applications, particularly with the overwhelming proliferation of internet-related
technology.
Message switching:-

In telecommunications, message switching was the precursor of packet switching, where messages were
routed in their entirety, one hop at a time. It was first introduced by Leonard Kleinrock in 1961. Message
switching systems are nowadays mostly implemented over packet-switched or circuit-switched data
networks. Each message is treated as a separate entity. Each message contains addressing information, and at
each switch this information is read and the transfer path to the next switch is decided. Depending on
network conditions, a conversation of several messages may not be transferred over the same path. Each
message is stored (usually on a hard drive due to RAM limitations) before being transmitted to the next
switch; because of this, it is also known as a 'store-and-forward' network. Email is a common application for
message switching: a delay in delivering email is allowed, unlike real-time data transfer between two
computers.

Multimedia + E-commerce
Multimedia:- Multimedia is media and content that utilizes a combination of different content
forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective
describing a medium as having multiple content forms. The term is used in contrast to media which
only utilize traditional forms of printed or hand-produced material. Multimedia includes a
combination of text, audio, still images, animation, video, and interactivity content forms.
Multimedia is usually recorded and played, displayed or accessed by information content processing
devices, such as computerized and electronic devices, but can also be part of a live performance.
Multimedia also describes electronic media devices used to store and experience multimedia
content. Multimedia is similar to traditional mixed media in fine art, but with a broader scope.
Multimedia presentations may be viewed in person on stage, projected, transmitted, or played
locally with a media player. A broadcast may be a live or recorded multimedia presentation.
Broadcasts and recordings can be either analog or digital electronic media technology. Digital
online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-
demand.

Multimedia games and simulations may be used in a physical environment with special effects,
with multiple users in an online network, or locally with an offline computer, game system, or
simulator.
It has wide applications like:
1. Creative industries
2. Commercial
3. Entertainment and fine arts
4. Education
5. Industries
6. Mathematical and scientific research
7. Medicine etc…

Simulation: - Simulation is the imitation of some real thing, state of affairs, or process. The
act of simulating something generally entails representing certain key characteristics or behaviours
of a selected physical or abstract system.
A computer simulation (or "sim") is an attempt to model a real-life or
hypothetical situation on a computer so that it can be studied to see how the system works. By
changing variables, predictions may be made about the behaviour of the system. Computer
simulation has become a useful part of modeling many natural systems in physics, chemistry and
biology, and human systems in economics and social science (computational sociology), as well
as in engineering to gain insight into the operation of those systems. A good example of the
usefulness of using computers to simulate can be found in the field of network traffic simulation. In
such simulations, the model behaviour will change each simulation according to the set of initial
parameters assumed for the environment.
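As a rough illustration of how changing the initial parameters changes a simulation's predictions, here is a toy network-traffic model. Every parameter value is invented; a serious simulation would use measured arrival and service distributions:

```python
import random

def simulate_traffic(arrival_prob, service_prob, buffer_size, steps, seed=0):
    """Toy network-traffic simulation: packets arrive at a router with a
    finite buffer; the router forwards at most one packet per step with
    probability service_prob. Returns how many packets were dropped."""
    rng = random.Random(seed)              # fixed seed for reproducibility
    queue = 0
    dropped = 0
    for _ in range(steps):
        if rng.random() < arrival_prob:    # a packet arrives
            if queue < buffer_size:
                queue += 1
            else:
                dropped += 1               # buffer full: packet is lost
        if queue > 0 and rng.random() < service_prob:
            queue -= 1                     # the router forwards a packet
    return dropped

# Changing the assumed load changes the predicted behaviour of the system.
heavy = simulate_traffic(0.9, 0.5, buffer_size=5, steps=1000)
light = simulate_traffic(0.2, 0.9, buffer_size=5, steps=1000)
print(heavy, light)
```

Running it shows many more drops under heavy load than under light load, which is exactly the kind of "what if" question simulations are built to answer.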
Similarly Simulation is often used in the training of civilian and military personnel. This
usually occurs when it is prohibitively expensive or simply too dangerous to allow trainees to use
the real equipment in the real world. In such situations they will spend time learning valuable
lessons in a "safe" virtual environment.
Medical simulators are increasingly being developed and deployed to teach therapeutic and
diagnostic procedures as well as medical concepts and decision making to personnel in the health
professions. Simulators have been developed for training procedures ranging from the basics such
as blood draw, to laparoscopic surgery and trauma care.
Some more we have like City simulators / urban simulation, Classroom of the future,
Engineering, technology or process simulation etc.
Animation: - Animation is the rapid display of a sequence of images of 2-D or 3-D
artwork or model positions in order to create an illusion of movement. It is an optical illusion of
motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a
number of ways. The most common method of presenting animation is as a motion picture or video
program, although several other forms of presenting animation also exist.

Computer animation encompasses a variety of techniques, the unifying idea being that the animation
is created digitally on a computer.

2D animation
Figures are created and/or edited on the computer using 2D bitmap graphics, or created and
edited using 2D vector graphics. This includes automated, computerized versions of
traditional animation techniques such as tweening, morphing, onion skinning and
interpolated rotoscoping.
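Tweening, for instance, amounts to generating the in-between frames by interpolating between two keyframes. A minimal linear-interpolation sketch (the coordinates are invented):

```python
def tween(start, end, frames):
    """Generate the in-between frames for a 2D position moving linearly
    from one keyframe to another (frames must be >= 2)."""
    sx, sy = start
    ex, ey = end
    result = []
    for i in range(frames):
        t = i / (frames - 1)   # 0.0 at the first frame, 1.0 at the last
        result.append((sx + (ex - sx) * t, sy + (ey - sy) * t))
    return result

# Five frames moving a figure from (0, 0) to (100, 50):
print(tween((0, 0), (100, 50), 5))
```

Real animation software applies the same idea to many properties at once (position, rotation, opacity), often with non-linear easing curves instead of a straight line.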
3D animation
Digital models are manipulated by an animator. In order to manipulate a mesh, it is given a
digital armature; this process is called rigging. Various other techniques can be
applied, such as mathematical functions (e.g. gravity, particle simulations), simulated fur or
hair, effects such as fire and water, and the use of motion capture, to name but a few. Many
3D animations are very believable and are commonly used as special effects in recent
movies.
Artificial intelligence:- Artificial Intelligence (AI) is the intelligence of machines and the
branch of computer science which aims to create it. Major AI textbooks define the field as "the
study and design of intelligent agents," where an intelligent agent is a system that perceives its
environment and takes actions which maximize its chances of success. John McCarthy, who coined
the term in 1956, defines it as "the science and engineering of making intelligent machines.”
The field was founded on the claim that a central property of human beings,
intelligence—the sapience of Homo sapiens—can be so precisely described that it can be
simulated by a machine. This raises philosophical issues about the nature of the mind and limits of
scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity.
Artificial intelligence has been the subject of breathtaking optimism, has suffered stunning setbacks
and, today, has become an essential part of the technology industry, providing the heavy lifting for
many of the most difficult problems in computer science.
AI research is highly technical and specialized, so much so that some critics decry the
"fragmentation" of the field. Subfields of AI are organized around particular problems, the
application of particular tools and around longstanding theoretical differences of opinion. The
central problems of AI include such traits as reasoning, knowledge, planning, learning,
communication, perception and the ability to move and manipulate objects. General intelligence (or
"strong AI") is still a long term goal of (some) research.
During the development of AI, researchers have faced many problems, such as: 1) deduction, reasoning
and problem solving; 2) knowledge representation; 3) planning; 4) learning; 5) natural language
processing; 6) motion and manipulation; 7) perception; 8) social intelligence; 9) creativity, etc. To tackle
these, researchers use techniques such as 1) search and optimization, 2) logic, and 3) classifiers and
statistical learning methods to create better artificial intelligence systems.
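As a small taste of the search-and-optimization family of AI techniques, here is a toy hill-climbing local search. The objective function and step size are invented for illustration; real AI systems search far larger spaces with more sophisticated strategies:

```python
def hill_climb(f, x, step=1.0, iterations=100):
    """Toy local search: repeatedly move to a neighbouring value of x
    whenever it improves the objective f; stop at a local maximum."""
    for _ in range(iterations):
        neighbours = [x - step, x + step]
        best = max(neighbours, key=f)
        if f(best) <= f(x):
            break            # no neighbour improves: local maximum reached
        x = best
    return x

# Maximize a simple concave function; its peak is at x = 3.
print(hill_climb(lambda x: -(x - 3) ** 2, x=0.0))  # 3.0
```

Hill climbing illustrates both the appeal of search (no knowledge of the function's formula is needed) and its classic weakness: it can get stuck on a local maximum of a bumpier function.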
E-commerce:- Electronic commerce, commonly known as e-commerce or eCommerce (electronic
marketing), consists of the buying and selling of products or services over electronic
systems such as the Internet and other computer networks. The amount of trade conducted
electronically has grown extraordinarily with widespread Internet usage. A wide variety of
commerce is conducted in this way, spurring and drawing on innovations in electronic funds
transfer, supply chain management, Internet marketing, online transaction processing, electronic
data interchange (EDI), inventory management systems, and automated data collection systems.
Modern electronic commerce typically uses the World Wide Web at least at some point in the
transaction's lifecycle, although it can encompass a wider range of technologies such as e-mail as
well.
A large percentage of electronic commerce is conducted entirely electronically for virtual
items such as access to premium content on a website, but most electronic commerce involves the
transportation of physical items in some way. Online retailers are sometimes known as e-tailers and
online retail is sometimes known as e-tail. Almost all big retailers have an electronic commerce
presence on the World Wide Web.
Electronic commerce that is conducted between businesses is referred to as business-to-
business or B2B. B2B can be open to all interested parties (e.g. commodity exchange) or limited to
specific, pre-qualified participants (private electronic market). Electronic commerce that is
conducted between businesses and consumers, on the other hand, is referred to as business-to-consumer
or B2C. This is the type of electronic commerce conducted by companies such as
Amazon.com.
Electronic commerce is generally considered to be the sales aspect of e-business. It also
consists of the exchange of data to facilitate the financing and payment aspects of the business
transactions.
It has broad applications like

• Email
• Enterprise content management
• Instant messaging
• Newsgroups
• Online shopping and order tracking
• Online banking
• Online office suites
• Domestic and international payment systems
• Shopping cart software
• Teleconferencing
• Electronic tickets etc.
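One of the applications listed above, shopping cart software, can be sketched as a small class. The item names and prices below are invented; a production cart would also handle stock, currency and persistence:

```python
class ShoppingCart:
    """Minimal shopping-cart sketch for an online store."""

    def __init__(self):
        self.items = {}   # item name -> (unit_price, quantity)

    def add(self, name, unit_price, quantity=1):
        """Add an item; repeated adds of the same item raise its quantity."""
        price, qty = self.items.get(name, (unit_price, 0))
        self.items[name] = (price, qty + quantity)

    def total(self):
        """Sum of unit price times quantity over every item in the cart."""
        return sum(price * qty for price, qty in self.items.values())

cart = ShoppingCart()
cart.add("book", 12.50)
cart.add("pen", 1.25, quantity=4)
print(cart.total())  # 17.5
```

The checkout step would then hand this total to one of the payment systems mentioned in the list above.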

