
Software Testing

What is testing?

Software testing is the process of identifying the correctness, completeness and quality of developed software.

SQA (Software Quality Assurance)

To monitor and measure the strength of the development process, an organization follows the SQA concept.

Technical:

1) Meet customer requirements (the purpose of the software)

2) Meet customer expectations (privacy and performance)

Non-Technical:

3) Time to market (timely delivery)

4) Cost of software (money)

5) Risk management (if the application breaks while working, a risk management team should be there to handle the situation)

Software Development Life Cycle (SDLC):

What is SDLC process?

SDLC stands for Software Development Life Cycle. It is a process that every organization follows to deliver quality software. Several stages come under the SDLC:
Different Stages of SDLC process:

Information Gathering

Analysis

Design

Coding

Testing

Maintenance

There are three generic phases:

Definition: Customer requirement & analysis (Information Gathering & Analysis)

Development: (Design, Coding, Testing)

Maintenance: (Re-engineering, Adaptability, Correction)

What is the difference between verification & validation?

Verification means static testing. It starts from the BRS: the analysis is reviewed, the design is reviewed and the coding is reviewed (WBT).

Validation means dynamic testing: the developed build itself is executed and tested.

Verification is also known as "In-Process Testing".

Validation is also known as "End-Process Testing".

Validation starts after the completion of verification.

BRS (Business Requirement Specification):

Business people act as a bridge between the customer & the technical people. This document defines the customer requirements to be developed as software. It is developed by Business Analysts (BA people).

SRS (Software Requirement Specification):

This document is prepared with respect to the BRS. It is also known as the FS (Functional Specification). It defines the functional requirements to be developed & the system requirements to be used. This document is also developed by Business Analyst category people.

Review:

It is a static testing technique used to check completeness & correctness of a document.

HLD (High Level Design):

It is also known as External Design. This document defines the structure/hierarchy of all possible functionality to be developed as main modules. It is developed by the project architect & software designers.

LLD (Low Level Design):

It is also known as Internal Design. This document defines static logic of every sub-module.

E.g. Entity Relationship (ER) Diagram, Class Diagram, DFD (Data Flow Diagram).

Prototype:

A sample module of an application without any functionality is called a prototype.

E.g. PowerPoint slideshow.

White Box Testing:

It is a coding-level testing technique used to check the completeness & correctness of the program. It is done by the development team.

There are three possible WBT techniques:


1. Execution Testing Technique:

Basis Path Coverage: execution of all possible blocks of the program (debugging).

Loop Coverage: termination of loops.

Program Technique Coverage: the program should take less memory & fewer CPU cycles.

2. Operation Testing:

The program is run on the customer-expected platforms; a platform means the operating system, compiler, browser & other system software.

3. Mutation Testing:

Mutation means change. The developer performs small changes in the program to estimate how well the tests cover it.
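As a rough illustration (the function and tests below are hypothetical, not from any specific project), a mutant "survives" when no test fails against the changed code, which tells the developer the test coverage is weak:

```python
# Original function under test.
def discount(price, is_member):
    if is_member and price > 100:   # original condition
        return price * 0.9
    return price

# Mutated version: the developer deliberately changes > to >= .
def discount_mutant(price, is_member):
    if is_member and price >= 100:  # mutated condition
        return price * 0.9
    return price

# Weak test data: both versions behave the same, so the mutant survives.
assert discount(200, True) == 180.0
assert discount_mutant(200, True) == 180.0

# A test at the boundary value would kill the mutant:
assert discount(100, True) == 100          # original: no discount at exactly 100
assert discount_mutant(100, True) == 90.0  # mutant behaves differently, so such a test would catch it
```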

Integration Testing:

After completion of coding and its review, developers combine all the independent modules to form a system; during integration they apply integration testing on the coupled modules with respect to the HLD & LLD.

There are three approaches to conducting integration testing (see the sketch after this list):

a. Top-Down Approach: In this approach the test engineer conducts testing on the main module without waiting for the sub-modules, using a STUB.

A STUB is a temporary program used in place of an under-construction sub-module while the main module is checked. It is also known as the 'called program'.

b. Bottom-Up Approach: In this approach the test engineer conducts testing on the sub-modules before the main module is available, using a DRIVER.

A DRIVER is a temporary program which is used instead of the main module. It is also known as the 'calling program'.

c. Sandwich/Bi-Directional/Hybrid Approach: The combination of the top-down and bottom-up approaches is called the hybrid approach.
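A minimal sketch in Python of how a stub and a driver might look (the module names and functions are invented, purely for illustration):

```python
# --- Top-down: the main module is real, the sub-module is stubbed ---
def tax_service_stub(amount):
    """STUB ('called program'): stands in for the not-yet-built tax sub-module."""
    return 0.10 * amount          # hard-coded, predictable behaviour

def checkout_total(amount, tax_service=tax_service_stub):
    """Real main module under test; it calls whatever sub-module it is given."""
    return amount + tax_service(amount)

assert checkout_total(100) == 110.0   # main-module logic verified via the stub

# --- Bottom-up: the sub-module is real, the main module is replaced by a driver ---
def real_tax_service(amount):
    """Real sub-module that is already finished."""
    return 0.18 * amount

def driver():
    """DRIVER ('calling program'): temporary code that exercises the sub-module
    because the real main module does not exist yet."""
    assert real_tax_service(200) == 36.0

driver()
```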

“After completion of Integration Testing Build is created”.


Who does the integration testing?

Both developers and testers do integration testing.

Developers write the database connectivity.

Testers check the combination of sub-modules with respect to functionality.

Black Box Testing:

It is a build-level testing technique. During this testing we (the test engineers) validate the internal functionality depending on the external interface.

System & Functional Testing:

After completion of integration testing & its review, we concentrate on system and functional testing through a set of BBT techniques to validate the functionality with respect to the customer requirements.

These testing techniques are classified into four categories:

1) Usability Testing: During this testing we validate the user-friendliness of the screen or build or GUI.

2) Functional Testing: During this testing we validate the completeness and correctness of the functionality with respect to the customer requirements.

3) Security Testing: During this testing we validate the privacy of user operations.

4) Performance Testing: During this testing we check the speed of the application.

Usability Testing: In general the testing process (test execution) starts with usability testing. Usability testing checks the user-friendliness of the screen or build or GUI.

It is classified into two types:

1) User Interface/GUI Testing: Fewer events to complete a task, or easy validation.

E.g. To send an email, simply click on Reply.

Ease of use (the screen should be understandable).

Look & feel (the screen should be pleasant & attractive).

Speed of interface (fewer events to complete a task, or easy validation).


2) Manual Support Testing: It is done after system & functional testing; it checks the context-sensitiveness of the user manuals.

E.g. If we give some keyword in the Google textbox, it gives references, i.e. it suggests what you want.

Functional Testing:

Q. What have you done in your project?

Sir, I have done functional testing to validate the customer requirements in terms of (BIEBSC) coverage. It is the major part of the BBT technique. During this testing we validate the completeness and correctness of the functionality with respect to the customer requirements.

There are two types of Functional Testing:

1) Functionality Testing: During this testing we validate the completeness and correctness of the functionality with respect to the customer requirements.

This is also known as Requirement Testing.

Several coverages come under Functionality Testing:

a) Behavior Coverage: During this testing we check the properties of objects [Enabled/Disabled or ON/OFF property].

b) Input Domain Coverage: During this testing we check the size & type of the input objects.

c) Error Handling Coverage: Preventing negative navigation [testing with valid as well as invalid data].

d) Back End Coverage: (Database testing) the content of back-end operations and the impact of operations on that content.

e) Calculation Based Coverage: Checking output values with respect to arithmetic calculations.

f) Service Level Coverage: Order of functionality; every functionality should be in the proper order.
Input Domain Testing: This is a part of functionality testing used to measure the size & type of input objects. We maintain a special structure in terms of BVA & ECP.

BVA stands for Boundary Value Analysis: the size/range of an input object [Min & Max].

I/P Object: Pwd (Min = 6, Max = 16)

Min = 6  -> Pass        Max = 16  -> Pass
Min - 1  -> Fail        Max + 1  -> Fail
Min + 4  -> Pass        Max - 5  -> Pass

ECP stands for Equivalence Class Partition: the type of data [Valid & Invalid].

I/P Object: Pwd

Valid:   A to Z, a to z, 0 to 9, @, #, $
Invalid: blank space
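A small sketch of how these BVA/ECP conditions could be exercised in code (the validate_password rule below is an assumed example, only for illustration):

```python
import re

MIN_LEN, MAX_LEN = 6, 16                      # boundary values from the data matrix

def validate_password(pwd):
    """Assumed rule: 6-16 characters, letters, digits and @ # $ only, no spaces."""
    if not (MIN_LEN <= len(pwd) <= MAX_LEN):
        return False
    return re.fullmatch(r"[A-Za-z0-9@#$]+", pwd) is not None

# BVA: test exactly at and just beyond the boundaries.
assert validate_password("a" * MIN_LEN)            # Min      -> Pass
assert not validate_password("a" * (MIN_LEN - 1))  # Min - 1  -> Fail
assert validate_password("a" * MAX_LEN)            # Max      -> Pass
assert not validate_password("a" * (MAX_LEN + 1))  # Max + 1  -> Fail

# ECP: one representative from the valid class and one from the invalid class.
assert validate_password("Abc123@#")               # valid characters
assert not validate_password("Abc 123")            # blank space -> invalid class
```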

2) Non-Functional Testing: During this testing we check all non-functional issues, i.e. how the developed functionality behaves with respect to the system requirements.

There are several types of non-functional testing:

1) Recovery Testing: It is also known as reliability testing. During this testing we check whether the application recovers from an abnormal situation back to the normal situation.

E.g. Normal situation: the application is working fine.

Abnormal situation: execution gets stuck, system crash / server down / power failure.

Q. Are you involved in recovery testing?

Yes.

2) Compatibility Testing: This is also known as portability testing. During this testing we validate whether the application runs on the customer-expected platforms or not.
Basically we are involved in browser compatibility testing.
Customer-expected platform means the operating system, compiler, browser & other system software.
There are two types:

a) Forward Compatibility: The build is OK but there is a problem with the operating system.

b) Backward Compatibility: The operating system is OK but there is a problem with the build.

In general, test engineers find backward-compatibility defects most often (defects in the build).

Q. What is browser compatibility testing?

Browser compatibility testing ensures that our application runs on different types of browsers. In this testing we validate scenarios such as:

GUI of the application.

Functionality of the application (that means all links are working properly or not).
Accessibility of the application (the application is accessible properly or not).

In general we map the Internet Explorer browser against the other browsers existing in the market.
E.g. Safari, Chrome, Mozilla Firefox, Opera.

Q. Are you involved in compatibility testing?

Basically I am involved in browser compatibility testing.
Two browsers are used with the application, IE & Chrome; we map or compare the application's behaviour between IE & Chrome.

3) Configuration/Hardware Testing: It is also known as hardware compatibility testing.

During this testing we validate whether the application supports different types of hardware devices or not.
E.g. Different types of printers, LAN, WAN, topologies.

4) Intersystem/End-to-End/Verification & Validation Testing:

It is also known as end-to-end testing. During this testing we validate the co-existence of our application with other existing software to share resources; sometimes our application is interconnected with another application.

E.g. One bank's ATM card (an SBI ATM card) is accepted by another bank's ATM to withdraw money; in that case we validate the functionality with respect to the interconnection to the other software.

It is also known as Verification & Validation testing.

End-to-end testing is performed separately from the module-level system & functional testing.

Note: Generally end-to-end testing is performed after the completion of functionality testing of each & every module.
5) Installation Testing: During this testing we validate the installation of our application along with the co-existing software in the customer-expected configuration (environment), to validate the functionality with respect to the customer requirements.
We concentrate on factors such as:

1. Setup program execution before installation (whether all setup files are available or not).
2. Easy interface during installation (default radio buttons should be selected during installation).
3. Occupied disk space after installation.
4. Verify un-installation.

6) Parallel/Comparative Testing: It is applicable only for a software product. During this testing we compare the product with other competing products in the market. It is done after functional testing.

Q. What is the difference between a product & an application?

A product is made for multiple customers.
An application is made for a specific customer.

7) Sanitation/Garbage Testing: During this testing we try to find and remove extra functionality in the application with respect to the customer requirements.
When the customer faces a problem they can request a modification, for which they pay some amount, and we make business from that.

8) Globalization Testing: This testing ensures that our application supports multiple languages.

a) Localization Testing: Ensures that our application supports the local language.
b) Nationalization Testing: Ensures that our application supports the national languages.

Q. Were you involved in globalization testing?

In the previous release I got the chance to perform globalization testing. Our application was developed for users in northern Canada who basically speak French, so in that scenario I checked whether our application supports the French language or not.
In that application I remember one scenario: whenever we switch to the French language, the American date format MM/DD/YYYY is converted into the French format, i.e. DD/MM/YYYY. Secondly, when we select the country code, the country's currency is displayed.
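A tiny sketch of the kind of check described above, using Python's standard datetime formatting (the locale-to-format mapping is an assumed example, not from the actual project):

```python
from datetime import date

# Assumed mapping used by the application under test: locale -> date format.
DATE_FORMATS = {
    "en_US": "%m/%d/%Y",   # American: MM/DD/YYYY
    "fr_CA": "%d/%m/%Y",   # French (Canada): DD/MM/YYYY
}

def format_date(d, locale):
    return d.strftime(DATE_FORMATS[locale])

d = date(2024, 7, 31)
assert format_date(d, "en_US") == "07/31/2024"
assert format_date(d, "fr_CA") == "31/07/2024"   # what the globalization test verifies
```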

Performance Testing (speed of processing): This is an advanced testing technique & complex to conduct, because the testing team requires a large environment to conduct it.
E.g. Automation tools; basically performance testing is done with the help of automation tools.

Performance testing is classified into four types:

1. Load Testing
2. Stress Testing
3. Storage Testing
4. Data Volume Testing

1. Load Testing: Execution of our application under the customer-expected configuration (environment) & the customer-expected load to estimate performance is called Load Testing. It is also known as Scalability Testing.
E.g. In mechanical terms, the load on a steel beam: if the load exceeds 500 tons, it will break.

2. Stress Testing (stress means the maximum load): The execution of our application under the customer-expected configuration, with load pushed up to and beyond the peak, to estimate performance is called Stress Testing. It establishes the maximum load the application can handle.
E.g. A maximum of 700 users can use the application at a time.

3. Storage Testing (it defines the maximum data that can be stored): The execution of our application with a huge amount of resources to estimate the storage limitations of our application is called Storage Testing.
E.g. Volume in bytes; a mobile card handles only 2 GB of data as per the card limit.

4. Data Volume Testing: The execution of our application under the customer-expected configuration to estimate the peak limits of data is called Data Volume Testing.
E.g. The number of records that can be stored in your database.
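As a rough sketch of what a load-testing tool automates, here is a minimal concurrent-users simulation (the URL and user count are placeholders, not real project values):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/login"   # placeholder endpoint of the application under test
USERS = 50                             # simulated customer-expected load

def one_user(_):
    start = time.perf_counter()
    try:
        urlopen(URL, timeout=10).read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_user, range(USERS)))
    passed = sum(ok for ok, _ in results)
    avg = sum(t for _, t in results) / len(results)
    print(f"{passed}/{USERS} requests succeeded, average response {avg:.3f}s")
```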

Security Testing: It is also an advanced testing technique & complex to conduct. During this technique we validate the privacy of user operations as per the customer requirements.

There are three types of security testing:

1. Authorization
2. Access Control (Authentication)
3. Encryption & Decryption

1. Authorization: It is done by the test engineer. During this test we validate whether the user is valid or not.
E.g. An employee of Wipro.

2. Access Control (Authentication): It is done by the test engineer. During this test we validate whether the user has permission for a specific operation or not.
E.g. An employee of Wipro still needs permission for a specific operation.

3. Encryption & Decryption: Data conversion between client and server. It is done by the developer.

Client --Encryption--> Server --Decryption--> Client: data is encrypted on one side, travels in encrypted form, and is decrypted on the other side.
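A minimal sketch of the idea using the third-party `cryptography` package (assuming it is installed; in a real application the key exchange between client and server would of course be more involved):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret known to client and server
cipher = Fernet(key)

# Client side: encrypt before sending.
token = cipher.encrypt(b"card=4111-1111-1111-1111")

# Server side: decrypt after receiving.
assert cipher.decrypt(token) == b"card=4111-1111-1111-1111"
```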

User/Customer Acceptance Testing (UAT/CAT):

After completion of system & functional testing, the organization concentrates on user acceptance testing to collect feedback from the customer.

UAT is classified into two types:

a) Alpha Testing: This testing is applicable to a software application. Alpha testing is done in a controlled environment in the presence of the developer & tester.

b) Beta Testing: This testing is applicable to a software product. Beta testing is done at the customer side, in an uncontrolled environment, in the absence of the developer & tester.

Q. Have you been involved in beta testing?

Basically the user manual is a main part of user acceptance testing, and we are involved in some manner.

Final Regression/Port/Pre-Acceptance/Pre-Release/Sanity-of-Release Testing:

After completion of user acceptance testing & its review, the organization concentrates on release and forms a team known as the build release team. This release team consists of some hardware engineers, some developers & some test engineers.
Basically this release team applies port testing on the critical parts of the application to validate the functionality before releasing the build. In general, release testing is performed within 2 days and in a client-side-like environment (configuration).

During this testing they concentrate on factors such as:

1 Compact installation (installation testing)

2 Overall functionality (system & functional testing)

3 Input/output device handling.

4 Secondary storage device handling.

5 Co-existence with other existing software (end-to-end testing)

Q. What is a change request (CR)?

After completion of port testing, the release team provides training sessions to the customer-side people and comes back to our organization. During maintenance, the customer-side people request our organization for a change or modification; that is known as a change request.

For a change request we focus on these factors:

# Impact analysis.

# Perform the change.

# Test the changed process.

Change Control Board (CCB):

A senior team that analyzes the impact of a change and handles change requests during test execution. The Change Control Board comes under configuration management; configuration management means handling change requests during test execution.

Testing Terminology:

These terms come under system & functional testing.

1. Monkey/Chimpanzee/Speed Testing:
Monkey testing means executing the maximum number of test cases in the least amount of time. During monkey testing we concentrate on the high-priority test cases.

2. Adhoc Testing:

It is done by senior testers. The test engineer does not have sufficient test data but has to conduct the test with the help of past experience.

In other words, we don't have test data but we have domain knowledge.

3. Exploratory Testing/Orthodox Testing:

Level-by-level functionality coverage is called exploratory testing. In other words, we have test data but don't have domain knowledge.

4. Sanity Testing/Octangle Testing:

It is the initial stage of black box testing. The development team estimates the stability of the build to check whether it is ready for testing; we, as test engineers, are also involved in sanity testing to check the core functionality of the application.

We conduct sanity testing after each build upgrade, i.e. after receiving a new build. In sanity testing we perform:

# All link validation

# All tab validation
# Core functionality of the application.
During sanity testing, if we find any mismatch or defect it is known as an environmental issue, i.e.
# Run-time error
# A link cannot be accessed
# A tab is not working
# A page disappears.

Smoke Testing:
Smoke testing is an extra shake-up (extension) of sanity testing; we try to troubleshoot the environmental issues/run-time errors found during execution.
We try to identify the invalid object, then identify the package to which that object belongs, and then request the database administration people to recompile that package. That is all about smoke testing.
A package is a clubbing of similar types of objects & it is created by the database administrator.

Big Bang Testing/Informal Testing:

During this test, the testing team concentrates on a single stage instead of multiple stages, after completion of the entire system development process. It is also called random testing.

Incremental/Formal Testing:

During this test, the testing team works from unit level up to system level; multiple levels of testing are involved. It is also called formal testing.

Unit -> Integration -> System & Functional Testing -> UAT

Retesting:

Retesting of the same application or build with multiple test data, to validate the functionality with respect to the customer requirements, is called retesting.

In other words, it checks whether the failed functionality is now working fine; retesting is applied only to failed test cases.

E.g. To verify a multiplication function, the test engineer chooses different combinations of data in terms of +ve, -ve, integer, float, min, max and zero; that is, the same build with multiple test data.
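A quick sketch of the multiplication example using pytest-style parametrization (assuming pytest is available; the multiply function stands in for the functionality that previously failed):

```python
import pytest

def multiply(a, b):
    return a * b   # stand-in for the feature being retested

# Same build, multiple test data: +ve, -ve, float, zero, large values.
@pytest.mark.parametrize("a, b, expected", [
    (3, 4, 12),
    (-3, 4, -12),
    (2.5, 4, 10.0),
    (0, 99, 0),
    (10**6, 10**6, 10**12),
])
def test_multiply(a, b, expected):
    assert multiply(a, b) == expected
```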
Regression Testing:

Regression comes from the word "regress". Re-execution of tests on a modified build, to ensure that the bug fix works and that no side effects have occurred, is known as regression testing.

In other words, it checks whether the unchanged functionality is still working fine; regression testing is applied only to passed test cases.

In the regression suite we include:

# Newly added scenarios.

# High-priority test cases.

Testing -> Report the defect -> Developer accepts & fixes the defect -> Modified build -> Retest (failed test cases) & Regression test (passed test cases).

Agile Methodology/Model:

Product Owner -> Product Backlog -> Sprint Backlog -> Stories -> Test Case Design -> Test Case Execution (Test Engineer)

Agile is characterized by quickness. It is a simple philosophy: it is not plan-driven, it is value-driven. The procedure is as follows: the product owner collects the list of requirements, which is the product backlog; from this product backlog we select a list of requirements and move them to the corresponding sprint, which is the sprint backlog. At this point we estimate which requirements should be delivered in this sprint and which in the next sprint; this estimation is done using several factors (complexity, effort, knowledge) by the development lead along with the testing lead. Each sprint backlog consists of stories, and from those stories we prepare the test cases.

There are several attributes of agile, such as:

# Checkpoint after each module (small regression).

# Scrum Meeting: The scrum meeting is run by the scrum master; each day starts with this meeting and in it we discuss what we did yesterday, what we are going to do today, and what the roadblocks (issues) are. The people involved in the scrum meeting are the BA, scrum master, development team & testing team, i.e. all project members. The time span of this meeting is 15 to 20 minutes, but it may extend depending on the issues.

# Sprint-wise delivery: In agile we deliver every month, that is

1 R (Release) = 1 M (Month) = 1 S (Sprint)

Advantages of Agile:

# The major advantage of the agile methodology is that frequent changes in requirements do not impact the production or delivery process.

# Less cost.

# Less resource utilization.

# Fast delivery.

Disadvantages of Agile: When the project is complex/big and has a lot of independent modules, it becomes difficult to implement.
Sprint Test Plan of Agile:

28 Dec = Estimation.

# 1 Jan to 5 Jan (1st week)

Requirements analysis.
Test case design.
Test case review.

# 8 Jan to 12 Jan (2nd week)

Regression test case design.
Regression test case review.

# 15 Jan to 20 Jan (3rd week)

Test case execution.
Regression test case execution.
Defect analysis.
Defect log.
Defect fix.

# 23 Jan to 28 Jan (4th week)

UAT port testing.
Test planning for the next sprint.
Test closure activity.

That's all about the agile methodology.

STLC: Software Testing Life Cycle

Test Initiation (PM) -> Test Plan (TL) -> Test Design (TE) -> Test Execution (TE) -> Defect Report -> Test Closure (TL)
In many organizations the testing process starts with the test initiation stage. During this stage the project manager concentrates on the scope of the project, the requirements of the project & the risks involved in the project.

After that, in the test plan, the test lead mainly concentrates on job allocation in terms of:
# What to test?
# How to test?
# When to test?
# Who will test?

During test design, we test engineers prepare the test cases from the SRS/functional specification to validate the functionality with respect to the customer requirements.

After completion of test design, we execute the test cases to validate the functionality. During this execution, if we find any mismatch or defect, we log the defect and send it to the development team to fix.

After fixing the bug, the developer sends the modified build, and on that modified build we apply regression testing to check whether the defect is resolved and whether any side effect has occurred due to the bug fix. This is a cyclic process and continues until all defects are resolved.
After completion of test execution we prepare the test report & send it to the test lead.
During test closure, the test lead checks whether the whole testing process went correctly or not. That's all about the STLC.

Defect Life Cycle:

New -> Open -> Fix -> Close (other possible statuses: Reject, Deferred, Re-Open)

During system and functional testing we find a new defect; its status is New, and we log the defect in the defect tracking tool.

After discussion with a senior tester we set the defect status to Open & assign it to a developer.

After receiving the defect from the testing team, the developer analyzes & verifies the defect. If the developer does not accept it, he sets the defect status to Reject.

If the developer decides to fix the defect in a later version, he sets the defect status to Deferred.

If the developer accepts the defect & fixes it, he sends the modified build to the testing team.

Then we test engineers verify whether the defect has been fixed or not. If the defect has been fixed, we set the defect status to Closed.

If the defect re-occurs, or a side effect has occurred due to the bug fix, we set the defect status to Reopen and send it back to the developer.

That's all about the defect life cycle.


|| Depth Part ||

Test Documentation Hierarchy Process

Company level documents:

Test Policy - prepared by Quality Control (QC)
Test Strategy - prepared by the Quality Analyst / Project Manager (QA/PM)

Project level documents:

Test Methodology - QA/PM
Test Plan - Team Lead (TL)
Test Case, Test Procedure, Test Script, Defect Report - Test Engineer (TE)
Test Summary Report (Final Report) - Team Lead (TL), sent to the client
Software Release Note (SRN)
Test Policy
It is a company-level document developed by quality control people (mostly management). This document defines the "testing objective" to be achieved.

Name of company

Address of company

Location of company

Testing Definition: Verification + Validation.

Testing Process: Proper planning before testing starts.

Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP.

Testing Measurements: QAM, TMM, PCM.

Signature of the C.E.O.

LOC - Lines of code.

FP - Functional point.

Example: No. of screens / No. of forms / No. of reports / No. of inputs / No. of outputs / No. of queries.

Using functional points we can know the size of the project.

QAM - Quality Assessment Measurements.

TMM - Test Management Measurements.

PCM - Process Capability Measurements.

Basically this document is shown to client-side people in order to gain new projects.

Test Strategy
It is also a company-level document and is developed by quality analyst people (project manager level). This strategy document defines the testing approach to be followed by the testing team.

1) Scope and objective: The definition & purpose of testing in our organization.

2) Business issues: Budget control for testing; the cost of testing is estimated according to the cost of the project (how much we are going to spend on testing).

3) Testing approach: Mapping between the development stages and the testing issues (factors); the TRM is prepared.

Ex: V-model, Test Responsibility Matrix (TRM) / Test Matrix (TM).

Test factors & issues / Dev. stages: Info. Gathering & Analysis | Design | Coding | Testing | Maintenance

1. Authorization:  No  | No  | Yes | Yes
2. Access Control: No  | Yes | Yes | Yes
3. Audit Trail:    Yes | Yes | Yes | No  (Maintenance: depends on change request)
...
15. Methodology

Note: The TRM is prepared by the PM & sent to the customer; the budget is given accordingly.

4) Test deliverables: Required testing tasks to be completed before testing starts, with respect to the H/W configuration & resource documents. Entry & exit criteria should be defined properly.

5) Roles and responsibilities: Names of jobs in testing team and responsibilities of every job during testing.

6) Communication and status reporting: Required negotiation between every two consecutive jobs in testing team.

7) Test automation and testing tool: Purpose of automation and availability of testing tools in your organization.

8) Defect reporting and tracking: Required negotiation between the testing team and the development team when testers find mismatches in testing, because these may affect production or releases.

9) Testing measurements and metrics: This is the unit to measure testing process. QAM, TMM, PCM

10) Risks and Mitigations: If any problem occurs during testing then the solutions to overcome.

11) Change and configuration management: Ability to handle change request during execution. How to handle sudden
changes in customer requirements during testing.

12) Training plan: Required number of sessions to understand customer requirements by testing team.
TEST FACTORS OR TESTING ISSUES

To define quality software, software engineering people use 15 factors/issues.

1) Authorization: Whether a user is valid or not, to connect to the application.

2) Access control: Whether a valid user has permission to use a specific service or not.

3) Audit trail: Maintains metadata about user operations.

4) Continuity of processing: Inter-process communication.

5) Correctness: Meet customer requirements in terms of functionality.

6) Coupling: Co-existence with other software applications to share common resources.

7) Ease of use: User friendliness of screens.

8) Ease of operation: Installation, un-installation, dumping, downloading, uploading etc.

9) File integrity: Creation of back up during operations.

10) Reliability: Recover from abnormal situation to normal situation.

11) Portable: Run on different platforms.

12) Performance: Speed of processing.

13) Service levels: Order of functionalities.

14) Maintainable: Whether our application build is serviceable to customer-site people over a long time or not?

15) Methodology: Whether our testing team is following standards or not? (during testing)

Test Factors Vs Testing Techniques

1) Authorization:

Security testing (separate testing team)

Functionality/ requirement testing (common testing team)

2) Access control:

Security testing (separate testing team)


Functionality/ requirement testing (common testing team)

3) Audit trail:

Functionality/ requirement testing

4) Continuity of processing:

Integration testing (top down/ bottom up/ hybrid)

5) Correctness: Functionality testing

6) Coupling: Intersystem testing

7) Ease of use: User interface testing , manual support testing

8) Ease of operate: Installation testing

9) File integrity: Functionality/ requirement testing/Recovery testing

10) Reliability: Recovery testing (1 user level)

11) Portable: Compatibility testing, Configuration testing

12) Performance: Load testing, stress testing, storage testing, data volume testing.

13) Service levels: Functionality/ requirements testing (1 user level)

14) Maintainable: Compliance testing

15) Methodology: compliance testing

Compliance testing: Whether the testing team is following the standards or not during testing is called compliance testing. Compliance means adherence to the complete plan.

Test Methodology

It is a project-level document & it is developed by Project Manager/QA category people. In this document the QA/PM defines the required testing approach/issues/factors to be followed for the corresponding project, selecting the possible test issues or factors for the current project requirements.

Note: The test strategy is overall, but the test methodology is applied to a selected area. So the test methodology [project level] is more important than the test strategy [company level].

To develop the test methodology, the project manager/quality analyst follows the approach below before every project's testing starts.

Step 1: Collect the test strategy.

Step 2: Identify the current project type.


Project Type / Dev. stage: Info. Gathering & Analysis | Design | Coding | System Testing | Maintenance

Traditional:  √ | √ | √ | √ | √
Outsourcing:  * | * | √ | √ | *
Maintenance:  * | * | * | * | √

Note: Depending on the project type, the project manager deletes some of the columns from the TRM (test responsibility matrix) for this project's testing.

Step 3: Study the project requirements.

Note: Depending on the requirements in the project, the PM deletes unwanted factors (rows) from the TRM for this project's testing.

Step 4: Determine the scope of the project requirements.

Note: Depending on expected future enhancements, the PM adds back some of the previously deleted factors to the TRM for this project's testing.

Step 5: Identify tactical risks.

Note: Depending on the analyzed risks, the PM deletes some of the factors from the selected TRM for this project's testing, because they might not be supported by the organization's environment.

CASE STUDY:

15 test factors
- 3 (requirements)           = 12
+ 1 (scope of requirements)  = 13
- 4 (risks)                  = 9 factors finalized to be applied on the project

Step 6: Finalize the TRM for the current project's testing.

Step 7: Prepare the system test plan.

Step 8: Prepare the module test plans.


Test Plan

After completion of test initiation and testing-process finalization, test lead category people concentrate on preparing the test plan document in terms of "what to test?", "how to test?", "when to test?" and "who will test?".

Input: Development strategy, finalized TRM, test methodology.

Process: 1. Team formation (resource allocation)  2. Identify tactical risks (risk analysis)  3. Prepare the test plan  4. Review the test plan.

Output: System test plan.

Testing team formation: In general the test planning process starts with testing team formation. In this stage the test lead depends on the factors below:

- Availability of test engineers

- Availability of test environment resources

Case study:

Test duration: Client/server, web, ERP projects: 3 to 5 months of system testing.

System software: 7 to 9 months of system testing.
Mission-critical software: 12 to 15 months of system testing.

Team size: Developers : Testers = 3 : 1

Identify tactical risks: After formation of the testing team, the test lead analyzes the selected team-level risks. This risk analysis is also known as Root Cause Analysis.
Ex: Risk 1: Lack of knowledge of the testing team on that domain.

Risk 2: Lack of budget (time & cost).

Risk 3: Lack of resources (testing tools not available).

Risk 4: Lack of test data (improper documents).

Risk 5: Delays in delivery (e.g. health problems).

Risk 6: Lack of development process rigor.

Risk 7: Lack of communication (between the testing team and the development team).

Prepare the test plan: After completion of testing team formation and risk analysis, the test lead concentrates on preparing the test plan document in the IEEE format (Institute of Electrical and Electronics Engineers).

IEEE Format:

Test plan ID: Unique number / name.

Introduction: About project

Test items: Names of all modules in that project ex: website.

Features to be tested: New module names for test design. (What to test)

Features not to be tested: Which ones and why not? (Copy test cases from server)

Approach: Selected list of testing techniques (chosen by the project manager) to be applied on the above modules (the finalized TRM).

Testing tasks: Necessary operations to do before start every module testing.

Suspension criteria: Possible raised problems during above modules testing.

Ex: exception handling.

Feature pass/fail criteria: When a module is pass and when a module is fail.

Test environment: Required hardware and software to conduct testing on the above modules. Ex: WinRunner

Test deliverables: Names of the testing documents to be prepared during the above modules' testing.

EX: test cases, test procedures, test scripts, test log, defect reports for every module.

Staff and training needs: Names of selected test engineers for this project testing

Responsibilities: Mapping between names of test engineers and names of modules.

(Work allocation)

Schedule: dates and times.


Risks and mitigations: Raised problems during testing and solutions to overcome.

Approvers: signatures of project manager and test lead.

Review test plan

After completion of the first copy of the test plan document, the test lead conducts a review of that document for completeness and correctness. In this review meeting the test lead concentrates on coverage analysis.

Coverage analysis:

 Business requirement based coverage (what to test?)

 TRM based coverage (how to test?)

 Risks based coverage (when & who to test?)

After finalization of the test plan, the test lead provides some training sessions to the selected testing team on the project requirements.

Test Case Design & Methods:

After finalization of the test plan and completion of the training sessions, test engineers concentrate on developing test cases for their responsible modules. There are three methods to prepare test cases:

Business logic based test case design (depending on the SRS or FS, for an application)

Input domain based test case design (depending on the design documents, for a product)

User interface based test case design (for both applications & products)

Business logic based test case design

In general, test engineers prepare most test cases depending on the use cases in the SRS. Every use case describes functionality in terms of inputs, process and outputs.

Every use case is also known as a functional specification. Every test case describes a testable condition to be applied on the build.

To study use cases, test engineers follow the approach below.

Step 1: Collect the required use cases for the responsible modules.

Step 2: Select a use case and its dependencies from the above collected list of use cases.

Step 2.1: Identify the entry condition (base state)

Step 2.2: Identify the inputs required (test data)

Step 2.3: Identify the output and outcome (expected)

Step 2.4: Study the normal flow (navigation)

Step 2.5: Study the end condition (end state)

Step 2.6: Study alternative flows and exceptions

Step 3: Prepare test cases depending on the above study of the use cases

Step 4: Review the test cases for completeness and correctness

Step 5: Go to Step 2 until the study of all use cases is complete

Test case format

During test design, test engineers prepare test cases in the IEEE format. Through this format test engineers document every test case.

Format of Test Case:

Test case id: Unique number/name

Test case name: The name of the test condition.

Feature to be tested: The corresponding module or function name.

Test suite id: The corresponding batch id; this case is a member of that batch.

Priority: The importance of the test case in terms of functionality.

EX: P0 - Basic functionality (requirements)

P1 - General functionality (recovery, compatibility, intersystem, load, ...)

P2 - Cosmetic functionality (user interface)

Test environment: Required hardware and software, including the testing tool, to execute this test case.

Test effort: (person-hours) Time to execute this test case.

EX: 20 min average time

Test duration: Date and time

Test setup: Necessary tasks to do before starting this case's execution

Test procedure: The step-by-step procedure from base state to end state

Test case pass/fail criteria: When this case passes / when this case fails

NOTE: In general, test engineers do not maintain the complete format for every test case; they keep the test procedure as mandatory for every test case.
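A minimal sketch of how such a test case record could be captured programmatically (the field values below are made-up examples, not from a real project):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    name: str
    feature: str
    suite_id: str
    priority: str                 # "P0" basic, "P1" general, "P2" cosmetic
    procedure: list[str] = field(default_factory=list)   # mandatory step-by-step procedure
    pass_fail_criteria: str = ""

tc = TestCase(
    case_id="TC_LOGIN_001",
    name="Valid user can log in",
    feature="Login",
    suite_id="SUITE_AUTH",
    priority="P0",
    procedure=["Open login page", "Enter valid credentials", "Click Login"],
    pass_fail_criteria="Pass if the home page is displayed",
)
print(tc.priority, tc.name)
```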
Input domain based test case design

In general, test engineers prepare test cases depending on the use cases or functional specifications in the SRS. Sometimes they also depend on the design documents, because use cases do not provide complete information about the size and type of the input objects. For this reason, test engineers study the data models in the design documents.

EX: ER diagrams (entity relationship diagrams)

In this study, test engineers follow the approach below:

Step 1: Collect the data models of the responsible modules from the design documents.

Ex: ER diagrams

Step 2: Study every input attribute in terms of size and type, with constraints.

Step 3: Prepare BVA and ECP for every input attribute in the format below:

I/P Attribute | ECP (Valid / Invalid) | BVA (Size/Range: Min / Max)

This table is called the DATA MATRIX. It provides information about every object.

Step 4: Identify critical and non-critical inputs in the above list.

Critical inputs are involved in internal manipulations; non-critical inputs are used for printing purposes.

NOTE: If our test case covers an operation, then test engineers prepare a step-by-step procedure from base state to end state. If our test case covers an object, then test engineers prepare a data matrix.

User Interface Based Test Case Design

To conduct usability testing, test engineers prepare test cases depending on global user interface conventions, our organization's rules and the interests of the customer-site people.

Example test cases:

Test case 1: Spelling check

Test case 2: Graphics check (alignment, font, style, color and the other Microsoft six rules)

Test case 3: Meaningful error messages

Test case 4: Accuracy of data display

Test case 5: Accuracy of data in the database as a result of user input

Test case 6: Accuracy of data in the database as a result of external factors

Ex: file attachments, export files, import files etc.

Test case 7: Meaningful help messages

NOTE: Test case 1 to test case 6 indicate user interface testing and test case 7 indicates manual support testing.

Q. What are the Microsoft six rules?

1) Controls are in init-cap (initial capitals).

2) Controls should not be overlapped.

3) Controls should be visible.

4) Controls should be aligned.

5) OK & Cancel buttons should exist.

6) The system menu should exist.

Test Case Design Review

Before receiving the build from the development team to start test execution, the test lead analyzes the completeness and correctness of the test cases prepared by the test engineers through a review meeting.

There are four types of review:

1) Self review.
2) Peer review (with a colleague).
3) Internal review (PM/TL/BA/TE).
4) External review (customer).

In this review the test lead analyzes the coverages:

- Business requirement based coverage

- Use case based coverage
- Data model based coverage
- User interface based coverage
- Test responsibility matrix based coverage

At the end of this review, the test lead prepares the requirements traceability matrix (RTM). This matrix defines the mapping between customer requirements and the prepared test cases. It is also known as the requirements validation matrix (RVM).

Traceability Matrix (RVM/RTM):

The traceability matrix defines the mapping between business requirements & prepared test cases to validate the customer requirements. This matrix is prepared by the TL.

a) Forward Traceability Matrix: The mapping from business requirements to prepared test cases is called the forward traceability matrix.

b) Backward Traceability Matrix: It is the mapping between defects and the prepared test cases.

Requirement Validation Matrix: Sometimes a defect is a valid defect and the written test case is also correct, but the defect still occurs; in that case we have to add an extra test case for that defect. This is called the requirement validation matrix.
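A small sketch of a traceability matrix as plain data (the requirement, test case and defect IDs are invented placeholders):

```python
# Forward traceability: requirement -> test cases that cover it.
rtm = {
    "REQ-001 Login":          ["TC_LOGIN_001", "TC_LOGIN_002"],
    "REQ-002 Money transfer": ["TC_TRANSFER_001"],
    "REQ-003 Reports":        [],   # gap: requirement with no covering test case
}

# A coverage check the test lead might run during review:
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test cases:", uncovered)

# Backward traceability: defect -> the test case that found it.
backward = {"DEF-101": "TC_LOGIN_002"}
```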

Test Reporting Or Defect Tracking

During level-1 and level-2 test execution, test engineers report mismatches to the development team in the IEEE format.

Format:

Defect Id: Unique number and name.

Description: Summary of defect

Build Version Id: The version number of the build in which the test engineer found this defect

Feature: The corresponding module name in which the test engineer found this defect

Test Case Name: The name of the test condition during whose execution the test engineer found this defect

Reproducible: Yes means the defect appears every time in test execution; No means the defect appears rarely in test execution

If yes, attach the test procedure

If no, attach a snapshot and strong reasons

Found By: The name of test engineer

Detected On: Date of submission

Assigned To: The responsible person at development side to receive this defect

Status: New (reporting the defect for the first time) or Reopen (re-reporting the defect)

Severity: The seriousness of the defect in terms of functionality

High - Not able to continue test execution without resolving this defect

Medium - Able to continue the remaining testing, but the defect is compulsory to resolve

Low - Able to continue the remaining testing; the defect is optional to resolve (may/may not)

Priority: The importance of this defect in terms of customer

Suggested Fix (Optional): Expected possibilities to resolve this defect by developers


Fixed By: Project manager/project lead

Resolved By: Programmer name

Resolved On: Date of resolving

Resolution Type:

Approved By: Signature of project manager

NOTE: In the above format, development people may change the priority of the defect with respect to customer importance.

Defect age: The time gap between the defect reported date and the defect resolved date is called the defect age.
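For example, defect age is a simple date difference; a tiny sketch (the dates are made up):

```python
from datetime import date

reported_on = date(2024, 3, 1)
resolved_on = date(2024, 3, 8)

defect_age = (resolved_on - reported_on).days
print(f"Defect age: {defect_age} days")   # -> Defect age: 7 days
```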

Defect Resolution Type: During test execution, test engineers report mismatches to the development team as defects. After receiving the defect reports from the testing team, the development people conduct a bug-fixing review and send a resolution-type report back to the corresponding testing team. There are 12 types of resolutions reported to the testing team.

Duplicate: Rejected because this defect is equal to a previously reported defect.

Enhancement: Rejected because this defect relates to a future requirement of the customer.

Hardware limitation: Rejected because this defect relates to limitations of hardware devices.

Software limitation: Rejected because this defect relates to limitations of software technologies.

Not applicable: Rejected due to the improper meaning of the defect.

Functions as designed: Rejected because the coding is correct with respect to the design logic.

Need more information: Not accepted and not rejected, but the developers require extra information to understand the defect.

Not reproducible: Not accepted and not rejected, but the developers require the correct procedure to reproduce that defect.

No plan to fix it: Not accepted and not rejected, but the developers require extra time to fix it.

Fixed: Accepted and ready to resolve.

Fixed indirectly (deferred): Accepted but postponed to a future version.

User misunderstanding: Extra negotiation between developers and testers.

Types Of Bugs: During test execution, either manual or automated, test engineers find the below types of bugs.

User Interface Bugs: (low severity)

Ex1: Spelling mistake (high priority)

Ex2: Improper right alignment (low priority)

Error Handling Bugs: (medium severity)

Ex1: Does not return an error message (high priority)

Ex2: Complex meaning in the error message (low priority)

Input Domain Bugs: (medium severity)

Ex1: Allows invalid inputs (high priority)

Ex2: Allows an invalid type as well (low priority)

Calculation Bugs: (high severity)

Ex1: Dependent outputs are wrong (application show-stopper) (high priority)

Ex2: Final output is wrong (module show-stopper) (low priority)

Race Condition Bugs: (high severity)

Ex1: Deadlock or hang (application show-stopper) (high priority)

Ex2: Improper order of functionalities (low priority)

Load Condition Bugs: (high severity)

Ex1: Does not allow multiple users (high priority)

Ex2: Does not allow the customer-expected load (low priority)

Hardware Bugs: (high severity)

Ex1: Not able to establish a connection to the hardware device (high priority)

Ex2: Wrong output from the device (low priority)

Version Control Bugs: (medium severity)

Ex1: Mismatches between two consecutive build versions

ID Control Bugs: (medium severity)

Ex: Wrong logo, missing logo, missing copyright window, wrong version number, software title mistake, missing team member names, etc.

Source Bugs: (medium severity)

Ex: Mistakes in help documents
