

1. Risk Concepts and Vocabulary

 Test Case: - Test cases are how testers validate that a software
function, such as deducting a payroll tax, or a structural attribute of the
software, performs as expected
 Test Data: - Test data is the information used to build a test case
 Test Script: - Test scripts are the online entry of test cases, in which the
sequence of entering test cases and the structure of the online entry system
must be validated. Scripts may be prepared manually using paper forms, or may
be automated using capture/playback tools or other kinds of automated scripts
 Risk: - Risk is the potential loss to an organization, for example, the risk
resulting from the misuse of its computers. This may involve unauthorized
disclosure, unauthorized modification, and/or loss of information resources, as
well as authorized but incorrect use of a computer
 Risk Analysis: - Risk analysis is an analysis of an organization's information
resources, its existing controls, and the remaining organization and computer
system vulnerabilities
 Threat: - A threat is something capable of exploiting a vulnerability in the
security of a computer system or application
 Vulnerability: - A weakness in automated system security procedures,
administrative controls, physical layout, internal controls, and so forth, that
could be exploited by a threat to gain unauthorized access to information or
disrupt critical processing
 Control: - A control is anything that tends to cause the reduction of risk.
A control can accomplish this by reducing harmful effects or by reducing the
frequency of occurrence
 The tester's role regarding risk:
 Identify these risks
 Estimate the severity of the risk
 Develop tests to substantiate the impact of the risk on the application
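The tester's steps above (identify, estimate severity, prioritize) can be sketched as a simple risk-exposure calculation. This is a hypothetical illustration: the application areas and the 1-5 likelihood/impact scores below are invented examples of what a risk analysis team might assign, not values from the text.

```python
# Hypothetical illustration: ranking test areas by risk exposure.
# exposure = likelihood x impact is a common scoring convention (an
# assumption here, not something the notes themselves prescribe).

risks = [
    {"area": "payroll tax deduction", "likelihood": 4, "impact": 5},
    {"area": "report formatting",     "likelihood": 3, "impact": 1},
    {"area": "master file update",    "likelihood": 2, "impact": 4},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]  # simple exposure score

# Test the highest-exposure areas first
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["area"]}: exposure={r["exposure"]}')
```

The sort order then drives test development: the highest-exposure area is tested first.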

2. Risk Associated with Software Development

 Improper Use of Technology: - This usually happens in the design
phase. For example, many organizations introduce web sites without clearly
establishing the need for that technology.
 Systems analysts or programmers improperly trained in the use of the
technology
 Early use of HW/SW technology
 Minimal planning for the new HW/SW technology
 Way To Mitigate:- Interviewing the users & Prototyping
 Repetition of Errors: - This usually happens in the coding phase. A
person might process one item correctly, make an error on the next, process
the next twenty correctly, and then make another error
 Inadequate checks on entry of master information
 Insufficient program testing
 Failure to monitor the results of processing
 Way to Mitigate :- White Box Testing
 Cascading of Errors: - This usually happens in the maintenance
phase. The cascading of errors is the domino effect of errors throughout an
application system. An error in one part of the program or application triggers
a second error in another part of the application
 Inadequate tested application and limited testing of program changes

Page 1 of 10 santhosh_data@yahoo.com

 Failure to communicate the type and date of changes being
implemented
 Ways to mitigate :- regression testing & System testing
 Illogical Processing: - This usually happens in the design phase. For
example, a payroll check might be produced for a clerical individual for over $1
million. This is possible in an automated system as a result of programming
or hardware errors.
 Failure to check for unusually large amounts on output documents
 Fields that are either too small or too large
 Failure to scan output documents
 Ways to Mitigate :- Data Validation (Bounds Checking, Field Verification) &
Code Review
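The bounds-checking mitigation named above can be sketched minimally. The $1 million ceiling comes from the payroll example in the text; the function name and the specific amounts are assumptions for illustration only.

```python
# Minimal sketch of a bounds check guarding against illogical processing,
# such as a clerical paycheck over $1 million. The threshold and function
# name are illustrative assumptions, not a prescribed implementation.

MAX_CHECK_AMOUNT = 1_000_000.00

def validate_check_amount(amount: float) -> bool:
    """Reject amounts that are non-positive or unusually large."""
    return 0 < amount <= MAX_CHECK_AMOUNT

assert validate_check_amount(2_500.00)         # a normal clerical paycheck
assert not validate_check_amount(1_200_000.0)  # illogical: flag for review
assert not validate_check_amount(-50.0)        # negative amounts rejected
```

In practice such a check would route suspicious amounts to manual review rather than silently rejecting them.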
 Inability to translate user needs into Technical Requirements
 Prevalent Phase :- Requirements & Design
 Users cannot adequately express their needs in terms that facilitate
the preparation of computerized applications
 Users without technical IT skills
 Technical people without sufficient understanding of user requirements
 Multi user systems with no user “in charge” of the system
 Ways to Mitigate :- Prototyping & Interviewing Users
 Inability to control technology: - Controls are needed over the
technological environment. The controls ensure that the proper version of the
program is in production at the right time
 Prevalent Phase :- Requirements & Design
 Vendors offer HW/SW without explaining how to control them
 Too Many controls
 Inadequate restart & recovery
 Ways to Mitigate:- Prototyping, Interviewing Users
 Incorrect Entry of Data: - There is a mechanical step required to convert
input data into machine-readable format. For example, scanners enter data
directly into the computer system.
 Prevalent Phase :- Entire SDLC
 Human errors in keying data
 Mechanical failure of hardware devices
 Misunderstanding of data entry procedures
 Ways to Mitigate :- Structured Configuration Management
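The notes name structured configuration management as the mitigation, but a classic control specifically for human keying errors (not mentioned in the text, added here as an illustration) is a check digit. A Luhn-style check is a minimal sketch; the account numbers below are standard test values, not real data.

```python
def luhn_valid(number: str) -> bool:
    """Luhn check digit: catches most single-digit keying errors
    and adjacent-digit transpositions in an identifier."""
    digits = [int(d) for d in number]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:         # e.g. 8 -> 16 -> 7
                d -= 9
        total += d
    return total % 10 == 0

assert luhn_valid("79927398713")      # well-known valid Luhn test number
assert not luhn_valid("79927398710")  # single mistyped digit is detected
```

A check like this rejects most mis-keyed identifiers at entry time, before they reach the master files.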

 Concentration of Data
 Prevalent Phase:- Code, release & Maintenance
 Inadequate access control enabling unauthorized access to data
 Erroneous data and their impact on multiple users of the data
 Impact of HW/SW failures that make the data unavailable to multiple
users
 Ways to mitigate :- Bounds Testing, White Box Testing & Data
Validation Testing
 Inability to react quickly
 Prevalent Phase:- Requirement, design & code
 The structure is inconsistent
 Computer time is unavailable to satisfy the request
 Cost of processing exceeds the value of the information requested
 Ways to Mitigate:- Performance / Load Testing & BM / CM
 Inability to substantiate Processing
 Prevalent Phase :- Design


 Application systems need to substantiate processing for the purpose of
correcting errors and proving the correctness of processing
 When errors occur, computer personnel need to pinpoint the cause of
those errors so they can be corrected
 Evidence is not retained long enough
 Evidence from intermediate processing is not retained
 Ways to Mitigate :- Security Testing, Security Logs, Transaction Logs,
White Box Testing, Back up Scheme & CM
 Concentration of Responsibilities
 Prevalent Phase :- Design Phase
 Establishing of large databases
 Client-Server Systems
 Web-based systems
 Ways to mitigate :- Security Testing, (Security) Policies & Procedures
 Erroneous Input Data: - Erroneous input data is the simplest and most
common cause of undesirable performance by an application system
 Prevalent Phase :- Design & Maintenance
 Inconsistent source data values may not be detected
 Keying errors may not be detected
 Records in one format may be interpreted according to a different
format
 Ways to Mitigate :- Data Validation, I/O Testing, Black Box Testing,
Security Testing & Processes and Procedures
 Misuse by Authorized end users
 Prevalent Phase :- Design & Maintenance / Release
 An employee may convert information to an unauthorized use
 Authorized users may use the system for personal Benefit
 An authorized user may accept a bribe to modify or obtain information
 Ways to Mitigate :- Audit Trails, Security Testing & Processes and
Procedures
 Uncontrolled System Access
 Prevalent Phase :- Design & Maintenance
 Data or programs may be stolen from the computer room or other
storage areas.
 IT facilities may be destroyed or damaged by either intruders or
employees.
 Individuals may not be adequately identified before they are allowed
to enter the IT area.
 Remote terminals may not be adequately protected from use by
unauthorized persons.
 An unauthorized user may gain access to the system.
 Passwords may be inadvertently revealed to unauthorized individuals.
A user may write his/her password in some convenient place, or the
password may be obtained from discarded printouts, or by observing
the user as they type it.
 A user may leave a logged-in terminal unattended, allowing an
unauthorized person to use it.
 A terminated employee may retain access to an IT system because his
name and password are not immediately deleted from authorization
tables and control lists.
 An unauthorized individual may gain access to the system for his own
purposes (e.g., theft of computer services, data, or programs,
modification of data, alteration of programs, sabotage, or denial of
services).
 Repeated attempts by the same user or terminal to gain unauthorized
access to the system or to a file may go undetected.
 Ways to Mitigate :- Securing Equipment and Access & Processes and
Procedures
 Ineffective Security and Privacy Practices for the Application
 Prevalent Phase :- Design & Maintenance / Release
 Large fund disbursements, unusual price changes, and unanticipated
inventory usage may not be reviewed for correctness.
 Repeated payments to the same party may go unnoticed because
there is no review.
 Sensitive data may be carelessly handled by the application staff, by the
mail service, or by other personnel within the organization.
 Way to mitigate :- Securing Equipment and Access, Processes and
Procedures, Security Testing, Security Logs Created & Automatic
Emails Generated
 Procedural Errors during Operations
 Prevalent Phase :- Release & Maintenance
 Files may be destroyed during database reorganization or during
release of disk space.
 Operators may ignore operational procedures – for example, allowing
programmers to operate computer equipment.
 Job control language parameters may be erroneous.
 An installation manager may circumvent operational controls to obtain
information.
 Careless or incorrect restarting after shutdown may cause the state of
a transaction update to be unknown.
 Storage media containing sensitive information may not get adequate
protection because the operations staff is not advised of the nature of
the information content.
 Output may be sent to the wrong individual or terminal.
 Ways to mitigate :- Audits, Procedures and Policies, Configuration
Management, Checklists & CPI
 Program Errors
 Prevalent Phase :- Design, code & Maintenance
 Records may be deleted from sensitive files without a guarantee that
the deleted records can be reconstructed.
 Programmers may insert special provisions in programs that
manipulate data concerning themselves (e.g., payroll programmer
may alter own payroll records).
 Program changes may not be tested adequately before being used in a
production run.
 Changes to a program may result in new errors because of
unanticipated interactions between program modules.
 Program acceptance tests may fail to detect errors that occur for only
unusual combinations of input (e.g., a program that is supposed to
reject all except a specified range of values, actually accepts an
additional value.)
 Ways to Mitigate :- White Box Testing, Document Reviews, Audits,
Code Reviews & Security Testing
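The acceptance-test gap described above (a range check that "accepts an additional value") is exactly what boundary-value testing targets. A sketch, assuming a hypothetical `in_range` validator that should accept only 1 through 100 inclusive:

```python
def in_range(value: int) -> bool:
    """Hypothetical validator under test: should accept only 1..100."""
    return 1 <= value <= 100

# Boundary-value cases probe just inside and just outside each edge,
# where off-by-one defects like the one described above tend to hide.
cases = [(0, False), (1, True), (2, True),
         (99, True), (100, True), (101, False)]

for value, expected in cases:
    assert in_range(value) == expected, f"boundary failure at {value}"
```

If the validator were mistakenly written as `value <= 101`, the `(101, False)` case would catch it, whereas tests using only mid-range values would not.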
 Operating System Flaws: - Design and implementation errors, system
generation and maintenance problems, and deliberate penetrations resulting
in modifications to the operating system can produce undesirable effects in
the application system. Flaws in the operating system are often difficult to
prevent and detect:
 Prevalent Phase :- Design
 User jobs may be permitted to read or write outside assigned storage
areas.
 Inconsistencies may be introduced into data because of simultaneous
processing of the same file by two jobs.
 An operating system design or implementation error may allow a user
to disable controls or to access all system information.
 Ways to mitigate :- Code Reviews, Security Testing, Load &
Performance Testing
 Communication System Failure
 Prevalent Phase :- Design & Code
 Undetected communications errors may result in incorrect or modified
data.
 Information may be accidentally misdirected to the wrong terminal.
 Unauthorized individuals may monitor communication lines.
 Data or programs may be stolen.
 Programs in the network that switch computers may be modified to
compromise security.
 Ways to mitigate :- Data Transmission Testing, Document Reviews &
Physical Security

3. Risk Associated with Software Testing

 When conducting risk analysis, two major components are taken into
consideration:
 The probability that the event will occur
 The potential loss associated with the event
 Some Primary Testing Risks :-
 Not Enough Training: - Testers are often not trained in testing; fewer than
half of full-time independent testing personnel have been trained in testing
techniques
 Us vs. Them Mentality: - This common problem arises when developers and
testers are on opposite sides of the testing issue
 Lack of Test Tools: - IT management may have the attitude that test tools
are a luxury. Manual testing can be an overwhelming task
 Lack of Management Understanding and Support of Testing: - Support
for testing must come from the top; otherwise, staff will not take the job
seriously and testers' morale will suffer
 Lack of Customer and User Involvement: - Users and customers may be
shut out of the testing process, or perhaps they do not want to be involved
 Not Enough Schedule or Budget for Testing
 Over-Reliance on Independent Testers: - Sometimes called the "throw it
over the wall" syndrome: developers know that independent testers will check
their work, so they focus on coding and let the testers do the testing.
Unfortunately, this results in higher defect levels and longer testing times
 Rapid Change: - In some technologies, especially Rapid Application
Development (RAD), the software is created and modified faster than the
testers can test it. This highlights the need for automation, but also for
version and release management.
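The automation need under rapid change can be illustrated with a minimal automated regression check that reruns after every build. Everything here is invented for illustration: `deduct_tax` is a hypothetical function under test and the expected values are made-up cases.

```python
# Hypothetical regression suite: rerun on every build so that rapid
# changes cannot outpace the testers.

def deduct_tax(gross: float, rate: float = 0.20) -> float:
    """Invented payroll function under test (flat 20% rate assumed)."""
    return round(gross - gross * rate, 2)

# Known-good input/output pairs captured from prior releases.
REGRESSION_CASES = [
    (1000.00, 800.00),
    (2500.00, 2000.00),
    (0.00, 0.00),
]

def run_regression() -> bool:
    """True only if every previously-correct behavior still holds."""
    return all(deduct_tax(gross) == expected
               for gross, expected in REGRESSION_CASES)

assert run_regression()  # run automatically after every code change
```

Tying a suite like this to the build pipeline is what lets testing keep pace with iterative delivery.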


 Testers Are in a Lose-Lose Situation: - On the one hand, if the testers
report too many defects, they are blamed for delaying the project. Conversely,
if the testers do not find the critical defects, they are blamed for poor quality.
 Having to Say “No” :- Having to say, “No, the software is not ready for
production,” is the single toughest dilemma for testers. Nobody on the project
likes to hear that and frequently testers succumb to the pressures of schedule
and cost.
 Test Environment: - The work environment is not conducive to effective and
efficient testing.
 New Technology: - The new focus on client/server, Intranet, and Internet
applications has introduced even more risk to the test process. These
multi-tiered systems are more complex than traditional mainframe systems.
 New developmental processes: - Project teams have moved away from
traditional “waterfall” methods, and follow a much more iterative approach to
development and delivery.
 Premature Release Risk:- Premature release is defined as releasing the
software into production under the following conditions:
 The requirements were implemented incorrectly.
 The test plan has not been completed.
 Defects uncovered in testing have not been corrected.
 The software released into production contains defects; although the
testing is not complete the defects within the software system may not
be known.
4. Risk Analysis

 Testing is a process designed to minimize software risks. To make software
testing most effective, it is important to ensure that all the high risks
associated with the software are tested first


 The Risk Analysis Process is as follows:
 Form the Risk Analysis Team, Identify Risks, Estimate the Magnitude of the
Risks & Set Testing Priorities
 Form the Risk Analysis Team
o Knowledge of the user application
o Understanding of risk concepts
o Ability to identify controls
o Familiarity with both application and information services risks
o Understanding of information services concepts and systems
o Understanding of computer operations procedures
 Identify Risks
o Risk analysis: - In this method, the risk team “brainstorms” the
potential application risks using their experience, judgment, and
knowledge of the application area.
o Risk checklist: - The risk team is provided with a list of the more
common risks that occur in automated applications. From this list,
the team selects those risks that are applicable to the application.
 Estimate the Magnitude of the Risk
o Intuition and Judgment:- In this process, one or more individuals
state they believe the risk is of a certain magnitude.
o Risk Formula:- The risk formula (frequency of occurrence times loss
per occurrence) can be used to calculate the magnitude of the risk.
For example, if a type X risk occurs once and the loss per occurrence
is $500, then the loss associated with the risk is $500.
o Annual Loss Expectation (ALE) Estimation:-
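The ALE estimation named above (its description is blank in the notes) is conventionally the annualized rate of occurrence multiplied by the loss per occurrence. A sketch using the $500-per-occurrence figure from the risk formula example; the 12-per-year frequency is an assumed value for illustration:

```python
def annual_loss_expectation(occurrences_per_year: float,
                            loss_per_occurrence: float) -> float:
    """ALE = annualized rate of occurrence x single-occurrence loss."""
    return occurrences_per_year * loss_per_occurrence

# $500 per occurrence (from the text), assumed 12 occurrences per year:
ale = annual_loss_expectation(12, 500.00)
print(ale)  # 6000.0
```

The resulting dollar figure lets risks of different kinds be compared on one scale when setting testing priorities.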


 Testing priorities
o Compliance required to laws and regulations
o Impact on competitiveness
o Impact on ethics, values and image

5. Risk Management

 Risk management is the totality of activities used to reduce both the frequency
and the impact associated with risks. After identifying risks, one must
determine the risk "appetite" (the amount of loss) management is willing to
accept for a given risk. There are two activities within risk management: the
Risk Reduction Method and Contingency Planning
 Reduction Method
o Risk Frequency X Loss/Occurrence = Total Loss
o Apply Controls to Minimize Risk
1. Reduce the Opportunity for Error » Minimize Loss
2. Identify Error prior to Loss » Recover Loss
o If the Controls Cost less than the Estimated Loss there is a Good
Case to Implement the Controls
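The rules above (Total Loss = Risk Frequency x Loss/Occurrence, and implement a control when it costs less than the estimated loss) can be sketched directly. The dollar figures in the example are invented:

```python
def total_loss(frequency: float, loss_per_occurrence: float) -> float:
    """Risk Frequency x Loss/Occurrence = Total Loss (from the notes)."""
    return frequency * loss_per_occurrence

def control_is_justified(control_cost: float,
                         frequency: float,
                         loss_per_occurrence: float) -> bool:
    """A control is worth implementing when it costs less than the
    estimated loss it addresses (the rule stated above)."""
    return control_cost < total_loss(frequency, loss_per_occurrence)

# Invented figures: a risk occurring 10 times a year at $500 per
# occurrence ($5,000 total) justifies a $2,000 control, not an $8,000 one.
assert control_is_justified(2_000, 10, 500.00)
assert not control_is_justified(8_000, 10, 500.00)
```

This simple comparison is the economic core of the Risk Reduction Method: controls are justified by the losses they avert, not implemented for their own sake.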
 Contingency Planning
o Action Plans should be established for activation when a loss is
known to occur for a given risk. The testers should evaluate the
adequacy of the contingency plans

6. Prerequisite for Test Planning

 Test Objectives – assure the project directives are met; testing achieves the
mission of the software testing group. Testing the functional and structural
objectives of the software is accomplished to meet the quality definition category
of "meeting requirements."
 Acceptance Criteria – have the user define this criteria
 Assumptions – have them clearly documented. For example, if a software
system required a newly developed piece of hardware, an assumption could be
that the hardware would be available on a specific date. The test plan would then
be constructed based on that assumption.


 People Issues – Watch out if the s/w engineering director is also the head of
the testing organization
 Constraints – Obvious constraints are test staff size, test budget, and test
schedule
7. Create test Plan

 The test plan describes how testing will be accomplished. Its creation is essential
to effective testing and should account for a substantial share of the total test
effort. If the plan is developed carefully, test execution, analysis, and reporting
will flow smoothly
 A test plan should have tests that are repeatable and controllable, and should
ensure adequate coverage
o Repeatable:- the same tests can be rerun and produce the same results
o Controllable:- the tester controls the test inputs, environment, and execution
o Coverage:- the tests exercise all the requirements and risks that matter
 How to Write a Good Test Plan…
o Define what it means to meet the project objectives.
o Understand the core business areas and processes.
o Assess the severity of potential failures.
o Identify the components for the system.
o Assure requirements are testable.
o Address implementation schedule issues.
o Address interface and data exchange issues.
o Evaluate contingency plans for system and activities.
o Identify vulnerable parts of the system and processes operating
outside the information resource management area.
 Build the Test Plan:
o Set Test Objectives
 Referenced by a number
 Write as a measurable statement
 Assign a priority
 Define acceptance criteria
o Develop Test Matrix
 Define tests as required
 Define conceptual test cases to be entered as a test script
 Define verification tests
 Prepare software test matrix
o Define Test Administration
 Identifies schedule, milestones, and resources needed
 Cannot be completed until the test matrix is completed
 Develop the Test Matrix:
o Define Tests as Required
 Referenced by a name and number
 Contains: Objective, Test Inputs, Test Procedure, Acceptance
Criteria, Test Controls – when to stop the test, & Reference to
what is tested
o Define Conceptual Test Cases to be Entered as Test Scripts
 Use Case type tests.
o Define Verification Tests
 Static test performed on a document developed by the team
responsible for creating software.
o Prepare the Software Test Matrix
 A requirements traceability matrix.
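The requirements traceability matrix named above can be represented minimally as a mapping from each requirement to the test cases that cover it. The requirement and test-case IDs below are invented for illustration:

```python
# Hypothetical traceability matrix: requirement ID -> covering test cases.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # uncovered requirement: a coverage gap
}

def uncovered(matrix: dict) -> list:
    """Return the requirements with no covering test case."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered(traceability))  # ['REQ-003']
```

Running such a gap check as the matrix is prepared ensures that every testable requirement (see "Assure requirements are testable" above) actually acquires at least one test.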
 Write the Test Plan:


o Start Early
o Keep the Test Plan Flexible
o Review the Test Plan Frequently
o Keep the Test Plan Concise and Readable
o Calculate the Planning Effort
o Spend the necessary time to complete the Test Plan
 Test Plan Standard: - There is no one universally accepted standard for test
planning. There are many test plan standards available on the Internet from
various organizations, for example: MIL-STD-498, IEEE 829, IEEE/EIA 12207. Most
Test Plan Standards consist of the following sections
 Test Scope
 Test Objectives
 Assumptions
 Risk Analysis
 Test Design
 Roles & Responsibilities
 Test Schedule & Resources
 Test Data Management
 Test Environment
 Communication Approach
 Test Tools
 Scope
 Referenced Documents
 Software Test Environment
 Test Identification
 Test Schedules
 Requirements Traceability
 Notes
 Appendices
