
Table of Contents

1. What is Software Testing?


1.1. Why is Testing required?
1.2. Who does Testing?
1.3. When do we start Software Testing?
1.4. When to stop testing?
1.5. How is Software Testing done?
1.6. Why is testing required?
2. What is Software Quality?
2.1. Verification and Validation
2.2. Attributes of Software Quality
3. Seven Testing Principles
4. What is Software Testing Life Cycle?
5. Different Levels of Software Testing
6. Test Design Techniques
6.1. Static Test Design Techniques
6.2. Dynamic Test Design Techniques
7. What is a Test Case?
7.1. Test Case Attributes:
7.2. How to write good test cases?
7.3. Test Management Tools
8. Life Cycle of a Defect
9. Software Development Life Cycle (SDLC)
9.1. SDLC Stages
9.2. SDLC Models
10. Overview of Scrum Agile Development Methodology
11. The Bug Report
11.1 Elements of a Bug Report
11.2 How to write an effective Bug Report
11.3 Bug Tracking Software
12. Practical Tips for Software Testers
1. What is Software Testing?
Software testing is the process of evaluating a system with the intent of finding bugs. It is performed
to check if the system satisfies its specified requirements.
Testing measures the overall quality of the system in terms of its correctness, completeness,
usability, performance and other functional and non-functional attributes.

1.1. Why is Testing required?

Software testing, as a separate activity in the SDLC, is required because:

 Testing provides an assurance to the stakeholders that the product works as intended.
 Avoidable defects leaked to the end user/customer because of inadequate testing damage the
reputation of the development company.
 A separate testing phase gives stakeholders confidence in the quality of the software being
developed.
 Defects detected in an earlier phase of the SDLC cost less and require fewer resources to
correct.
 Testing saves development time by detecting issues in an earlier phase of development.
 The testing team adds another dimension to software development by providing a
different viewpoint on the product development process.

1.2. Who does Testing?

Software testing is (and can be) done by all technical and non-technical people associated with the
software. Testing in its various phases is done by:

 Developer - The developer does the unit testing of the software and ensures that the individual
methods work correctly.
 Tester - Testers are the face of software testing. A tester verifies the functionality and
usability of the application as a functional tester, checks the performance of the
application as a performance tester, and automates the manual functional test cases
into test scripts as an automation tester.
 Test Managers/Leads/Architects - Define the test strategy and test plan.
 End users - A group of end users does the User Acceptance Testing (UAT) of the
application to make sure the software works in the real world.

1.3. When do we start Software Testing?

Depending on the Software Development Life Cycle model selected for the software project, the
testing phase starts at different points. There is a software myth that testing is done only when
some part of the software has been built, but testing can (and should) be started even before a
single line of code is written. It can be done in parallel with the development phase, e.g. in the
case of the V-Model:

Development Phase          ->  Testing Activity
Requirement Design         ->  UAT test preparation
Functional Specification   ->  Functional test preparation
Implementation             ->  Unit test preparation
Code Complete              ->  Test case execution

1.4. When to stop testing?

This question - "When to stop testing" or "how much testing is enough" is very tricky to answer
as we can never be sure that the system is 100% bug-free. But still there are some markers that
help us in determining the closure of the testing phase of software development life cycle.

 Sufficient pass percentage - Depending on the system, testing can be stopped when an
agreed upon test case pass percentage is reached.
 After successful test case execution - Testing phase can be stopped when one complete
cycle of test cases is executed after the last known bug fix.
 On meeting deadline - Testing can be stopped once the deadline is met and no high
priority issues are left in the system.
 Mean Time Between Failures (MTBF) - MTBF is the time interval between two inherent
failures. Based on stakeholder decisions, if the MTBF is quite large one can stop the
testing phase (a small calculation sketch follows this list).
 Based on Code coverage value - Testing phase can be stopped when the automated code
coverage reaches a certain acceptable value.
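
The MTBF marker mentioned above can be estimated from an execution log. Below is a minimal
sketch in Python; the observation window and failure count are made-up figures, not taken from
any real project.

# Minimal MTBF illustration (hypothetical figures).
# MTBF = total operating time / number of observed failures.

total_operating_hours = 500   # assumed observation window
observed_failures = 4         # assumed number of failures in that window

mtbf_hours = total_operating_hours / observed_failures
print(f"MTBF: {mtbf_hours:.1f} hours")   # -> MTBF: 125.0 hours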

1.5. How is Software Testing done?

Software testing can be done both manually and using automation tools. Manual effort
includes verification of the requirements and design; development of the test strategy and plan;
preparation of test cases and then the execution of tests. Automation effort includes preparation of
test scripts for UI automation and back-end automation, performance test script preparation and
the use of other automation tools.

1.6. Why is testing required?

Let's now briefly see why we need testing in the software context:

 Testing is important as it uncovers defects before the product is delivered to the customer,
ensuring the quality of the software.
 Defects or bugs can be identified in the early stages of development; the later the stage in
which a bug is identified, the higher the cost to rectify it.
 It makes the software more reliable and user-friendly to operate.
 Untested software is not only error prone, it can also cost the customer their business, as
in the case of the crash of Microsoft's MP3 player, the Zune.
 Software issues can cost lives too, e.g. in the case of the Therac-25, many people died due to
concurrent programming errors wherein patients were given radiation doses that were
hundreds of times greater than normal, resulting in death or serious injury.
 Well tested software provides efficient resource utilization, resulting in lower cost.
 A thoroughly tested software product ensures reliable and high performance functioning of the
software.

2. What is Software Quality?


Software quality is the conformance of a software system to its requirements. From a software
perspective, quality is defined by two factors - verification and validation. Verification checks
whether the process used during software development is correct, whereas in validation the
product is evaluated to check if it meets the specifications.

2.1. Verification and Validation

Software testing is basically the sum total of two activities - verification and validation.
Verification is the process of evaluating the artifacts of software development in order to ensure
that the product being developed will comply with the standards. It is a static process of analyzing
the documents and not the actual end product.
Validation, on the other hand, is the process of validating that the developed software product
conforms to the specified business requirements. It involves dynamic testing of the software product
by running it.

The differences between the two are:

1. Verification involves evaluation of the artifacts of software development to ensure that the
product being developed will comply with its requirements; validation involves checking the
developed software product to see if it conforms to the specified business requirements.
2. Verification is a static process of analyzing the documents and not the actual end product;
validation involves dynamic testing of the software product by running it.
3. Verification is a process-oriented approach; validation is a product-oriented approach.
4. Verification answers the question "Are we building the product right?"; validation answers
the question "Are we building the right product?"
5. Errors found during verification require less cost and fewer resources to fix than those found
during validation; the later an error is discovered, the higher the cost to fix it.
6. Verification involves activities like document reviews, test case reviews, walk-throughs,
inspections etc.; validation involves activities like functional testing, automation testing etc.

Example: Let's say we are writing a program for addition: a + b = c.

Verification asks whether the program is built correctly. For example, a = x + y is correct if
x = 1 and y = 2 yields a = 3.

Validation asks whether the correct program was produced. Suppose the requirement was to
calculate the area of a rectangle with length x and width y: if x = 1 and y = 2, the result should
be 2. The addition program is internally correct but not valid given the requirement - it is not
the right program.
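
The same contrast can be sketched in a few lines of code. The function name and values below
are purely illustrative.

def area(length, width):
    # Requirement: return the area of a rectangle with the given sides.
    return length + width   # correctly implements addition, but it is the wrong program

# Verification: is the program built right (does it compute the sum it was designed to compute)?
print(area(1, 2) == 1 + 2)   # True  - consistent with its own design

# Validation: is it the right program (does it satisfy the rectangle-area requirement)?
print(area(1, 2) == 1 * 2)   # False - the area of a 1 x 2 rectangle should be 2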

2.2. Attributes of Software Quality

 Correctness - Correctness measures the software quality for the conformance of the
software to its requirements
 Reliability - Checks if the software performs its functions without any failure within the
expected conditions
 Robustness - Robustness is the ability of the software to not crash when provided with
unexpected input
 Usability - Usability is the ease of operating the software
 Completeness - Completeness is the extent to which the software system meets its
specifications
 Maintainability - Maintainability is the measure of the amount of effort required for
software maintenance after it has shipped to the end user
 Portability - Ability of the software to be transferred from one platform or
infrastructure to another
 Efficiency - Efficiency is the measure of resources required for the functioning of the
software
3. Seven Testing Principles
A number of testing principles have been suggested over the past 40 years; they offer general
guidelines common to all testing.

1. Testing is context dependent

Different methodologies, techniques and types of testing are related to the type and nature of the
application. For example, a software application in a medical device needs more testing than
gaming software. More importantly, medical device software requires risk-based testing,
compliance with medical industry regulators and possibly specific test design techniques. By the
same token, a very popular website needs to go through rigorous performance testing as well as
functionality testing to make sure the performance is not affected by the load on the servers.

2. Exhaustive testing is impossible

Unless the application under test (AUT) has a very simple logical structure and limited input, it
is not possible to test all possible combinations of data and scenarios. For this reason, risk and
priorities are used to concentrate on the most important aspects to test.

3. Early testing

The sooner we start the testing activities, the better we can utilize the available time. As soon as
the initial products, such as the requirement or design documents, are available, we can start testing.
It is common for the testing phase to get squeezed at the end of the development lifecycle, i.e.
when development has finished, so by starting testing early, we can prepare testing for each level
of the development lifecycle.
Another important point about early testing is that when defects are found earlier in the lifecycle,
they are much easier and cheaper to fix. It is much cheaper to change an incorrect requirement
than having to change a functionality in a large system that is not working as requested or as
designed!

4. Defect clustering

During testing, it can be observed that most of the reported defects are related to a small number of
modules within a system, i.e. a small number of modules contain most of the defects in the system.
This is the application of the Pareto Principle to software testing: approximately 80% of the
problems are found in 20% of the modules.

5. The pesticide paradox

If you keep running the same set of tests over and over again, chances are that no new defects
will be discovered by those test cases, because as the system evolves, many of the previously
reported defects will have been fixed and the old test cases no longer apply. Anytime a
fault is fixed or new functionality is added, we need to do regression testing to make sure the
changed software has not broken any other part of the software. However, those regression test
cases also need to change to reflect the changes made in the software, in order to stay applicable
and hopefully find new defects.

6. Testing shows the presence of bugs

Testing an application can only reveal that one or more defects exist in the application, however,
testing alone cannot prove that the application is error free. Therefore, it is important to design
test cases which find as many defects as possible.

7. Absence of errors fallacy

Just because testing didn't find any defects in the software, it doesn't mean that the software is
ready to be shipped. Were the executed tests really designed to catch the most defects, or were
they designed to see if the software matched the user's requirements? There are many other
factors to be considered before making a decision to ship the software.

Other principles to note are:

o Testing must be done by an independent party.

Testing should not be performed by the person or team that developed the software since they
tend to defend the correctness of the program.

o Assign best personnel to the task.

Because testing requires high creativity and responsibility only the best personnel must be
assigned to design, implement, and analyze test cases, test data and test results.
o Test for invalid and unexpected input conditions as well as valid conditions.

The program should generate correct messages when an invalid test is encountered and should
generate correct results when the test is valid.

o Keep software static during test.

The program must not be modified during the implementation of the set of designed test cases.

o Provide expected test results if possible.

A necessary part of test documentation is the specification of expected results, even if providing
such results is impractical.
4. What is Software Testing Life Cycle?
Testing software is not a single activity wherein we just validate the built product; instead it
comprises a set of activities performed throughout the application lifecycle. The software testing
life cycle, or STLC, refers to all these activities performed during the testing of a software
product.

Phases of STLC

 Requirement Analysis - In this phase the requirements documents are analyzed and
validated. Along with that the scope of testing is defined.

 Test Planning and Control - Test planning is one of the most important activities in the test
process. It involves defining the test specifications in order to achieve the project
requirements. Test control includes continuous monitoring of test progress against
the set plan and escalating any deviation to the concerned stakeholders.

 Test Analysis and Design - This phase involves analyzing and reviewing requirement
documents, risk analysis reports and other design specifications. Apart from this, it also
involves setting up of test infrastructure, creation of high level test cases and creation of
traceability matrix.

 Test Case Development - This phase involves the actual test case creation. It also
involves specification of test data and automated test scripts creation.

 Test Environment Setup - This phase involves creation of a test environment closely
simulating the real world environment.

 Test Execution - This phase involves manual and automated test case execution. During
test case execution any deviation from the expected result leads to creation of defects in a
defect management tool or manual logging of bugs in an excel sheet.

 Exit Criteria Evaluation and Reporting - This phase involves analyzing the test
execution result against the specified exit criteria and creation of test summary report.

 Test Closure - This phase marks the formal closure of testing. It involves checking if all
the project deliverables are delivered, archiving the test ware and test environment, and
documenting the learnings.
5. Different Levels of Software Testing
Software testing can be performed at different levels of the software development process.
Performing testing activities at multiple levels helps in early identification of bugs and better
quality of the software product. In this tutorial we will be studying the different levels of testing,
namely Unit Testing, Integration Testing, System Testing and Acceptance Testing.

Now, we will describe the different levels of testing in brief, and in the next tutorials we will
explain each level individually, providing examples and detailed explanations.

Unit Testing

 Unit testing is the first level of testing usually performed by the developers.
 In unit testing a module or component is tested in isolation.
 As the testing is limited to a particular module or component, exhaustive testing is
possible.
 Advantage - Errors can be detected at an early stage, saving the time and money needed to fix them.
 Limitation - Integration issues are not detected at this stage; modules may work perfectly
in isolation but can have interfacing issues between them. (A minimal unit-test sketch follows this list.)
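
As an illustration of the unit level, here is a minimal sketch using Python's built-in unittest
module; the add() function and its expected behaviour are hypothetical examples, not taken from
any real project.

import unittest


def add(a, b):
    # Unit under test: return the sum of two numbers.
    return a + b


class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)


if __name__ == "__main__":
    unittest.main()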

Integration Testing

 Integration testing is the testing of a group of related modules.


 It aims at finding interfacing issues between the modules.
 It can be done in two ways - bottom-up integration and top-down integration (a third type,
called sandwich integration, is also used).
System Testing

 System Testing is the level of testing where the complete integrated application is tested
as a whole.
 It aims at determining if the application conforms to its business requirements.
 System testing is carried out in an environment which is very similar to the production
environment.

Acceptance Testing

 Acceptance testing is the final and one of the most important levels of testing on
successful completion of which the application is released to production.
 It aims at ensuring that the product meets the specified business requirements within the
defined standard of quality.
 There are two kinds of acceptance testing - alpha and beta testing. When acceptance
testing is carried out by end users at the developer's site it is known as alpha testing. User
acceptance testing done by end users at the end user's site is called beta testing.

6. Test Design Techniques

What are the different test design techniques?

Test design techniques are standards of test design which allow the creation of systematic and
widely accepted test cases. These techniques are based on different scientific models and on the
experience that many QA professionals have gathered over the years.
The test design techniques can be broadly categorized into two parts - "Static test design
techniques" and "Dynamic test design techniques".

6.1. Static Test Design Techniques

The Static test design techniques are the testing techniques which involve testing without
executing the code or the software application. So, basically static testing deals with Quality
Assurance, involving reviewing and auditing of code and other design documents. The various
static test design techniques can be further divided into two parts - "Static testing performed
manually" and "Static testing using tools".
6.1.1. Manual Static Design Techniques

 Walk-through - A walk-through is a step-by-step presentation of the different requirement
and design documents by their authors, with the intent of finding defects or any missing
pieces in the documents.
 Informal reviews - As the name suggests, an informal review is done by an individual
without any formal process or documentation.
 Technical reviews - A technical review involves reviewing the technical approach used
during the development process. It is more of a peer review activity and less formal as
compared to audit and inspection.
 Audit - An audit is a formal evaluation of the compliance of the different processes and
artifacts with standards and regulations. It is generally performed by an external or
independent team or person.
 Inspection - An inspection is a formal and documented process of reviewing the different
documents by experts or trained professionals.
 Management review - It is a review performed on the different management documents
like project management plans, test plans, risk management plans etc.

6.1.2. Static Design Techniques Using Tools

 Static analysis of code - The static analysis techniques for the source code evaluation
using tools are :
o Control flow analysis - The control flow analysis requires analysis of all possible
control flows or paths in the code.
o Data flow analysis - The data flow analysis requires analysis of data in the
application and its different states.
 Compliance to coding standard - This evaluates the compliance of the code with the
different coding standards.
 Analysis of code metrics - Static analysis tools also evaluate different metrics like lines
of code, complexity, code coverage etc. (A small illustration of the kind of issue static
analysis catches follows this list.)
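
To make the idea concrete, the hypothetical function below contains the kind of issues that
control-flow and data-flow analysis report without ever executing the code.

# Hypothetical function containing issues a static analyser would typically flag.

def apply_discount(price, is_member):
    discount = 0.1                 # data-flow finding: assigned but never used
    if is_member:
        return price * 0.9
    else:
        return price
    return price - discount       # control-flow finding: unreachable statement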

6.2. Dynamic Test Design Techniques

Dynamic test design techniques involve testing by running the system under test. In these
techniques, the tester provides input data to the application and executes it, in order to verify its
different functional and non-functional requirements.

 Specification based - Specification based test design techniques are also referred to as
black-box testing. These involve testing based on the specification of the system under
test without knowing its internal architecture. The different types of specification based
test design or black box testing techniques are - "Equivalence partitioning", "Boundary
value analysis", "Decision tables", "Cause-effect graph", "State transition testing" and
"Use case testing".
 Structure based - Structure based test design techniques are also referred to as white
box testing. In these techniques, knowledge of the code or internal architecture of the
system is required to carry out the testing. The various kinds of structure based or
white box testing techniques are - "Statement testing", "Decision testing/branch testing",
"Condition testing", "Multiple condition testing", "Condition determination testing" and
"Path testing".

 Experience based - Experience based techniques, as the name suggests, do not
require any systematic or exhaustive test design. They are based entirely on the
experience or intuition of the tester. The two most common forms of experience based
testing are ad-hoc testing and exploratory testing.

6.2.1. Specification Based Test Design Techniques - Black Box Testing Techniques

What is specification based testing?

Specification based testing is also referred to as black-box testing. It involves performing testing
based on the specification of the system under test. The knowledge of the internal architecture
and coding is not required in specification based testing.

Specification based testing techniques

The different types of specification based test design techniques or black box testing techniques are:

 Equivalence partitioning - In equivalence class partitioning we group the input data into
logical groups or equivalence classes. All the data items lying in an equivalence class are
assumed to have the same behavior when passed as input to the application. E.g. for an
application finding the square of numbers, we can define different equivalence classes like
all positive whole numbers, negative numbers, decimal numbers, negative decimal numbers etc.

 Boundary value analysis - In boundary value analysis the boundary values of the
equivalence partitioning classes are taken as input to the application. E.g. for equivalence
classes limiting input between 0 and 100, the boundary values would be 0 and 100.

 Decision tables - Decision table testing is used to test an application's behavior for
different combinations of input values. A decision table has a different set of input
combinations and their corresponding expected outcome on each row (a small worked
sketch follows this list of techniques).
 Cause-effect graph - A cause-effect graph testing is carried out using graphical
representation of input i.e. cause and output i.e. effect. We can find the coverage of cause
effect graphs based on the percentage of combinations of inputs tested out of the total
possible combinations.

 State transition testing - The state transition testing is based on state machine model. In
this technique, we test the application by graphically representing the transition between
the different states of the application based on the different events and actions.

 Use case testing - Use case testing is carried out using use cases. In this technique, we
test the application using use-cases, representing the interaction of the application with
the different actors.
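
As a small illustration of the decision table technique mentioned above, the sketch below
enumerates every combination of two conditions for a hypothetical login form; the business rule
(both fields must be valid) is an assumption made purely for the example.

from itertools import product

def expected_action(valid_username, valid_password):
    # Assumed business rule: both inputs must be valid to log in.
    return "show home page" if (valid_username and valid_password) else "show error message"

# Each combination of condition outcomes is one row of the decision table.
for valid_username, valid_password in product([True, False], repeat=2):
    print(f"username valid={valid_username!s:5}  password valid={valid_password!s:5}"
          f"  ->  {expected_action(valid_username, valid_password)}")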

Equivalence Class Partitioning

Equivalence class partitioning is a black-box testing technique (or specification based testing
technique) in which we group the input data into logical groups or equivalence classes. All the
data items lying in an equivalence class are assumed to be processed in the same way by the
application when passed as input.

Example

Consider the example below:

An application accepting integer values (that is, whole number values) between -10,000 and
+10,000 can be expected to handle negative integers, zero and positive integers.
Therefore, the set of input values can be divided into three partitions: from -10,000 to -1, the
single value 0, and from 1 to 10,000.

Moreover, the system is expected to behave the same for values inside each partition, i.e.
the way the system handles -6391 will be the same as the way it handles -9. Likewise, the positive
integers 5 and 3567 will be treated the same by the system. In this particular example, the value 0
is a single-value partition. It is normally good practice to have a special case for the number zero.

It is important to note that this technique does not only apply to numbers. The technique can
apply to any set of data that can be considered equivalent. E.g. for an application that reads in
images of only three types - .jpeg, .gif and .png - three valid equivalence classes can be
identified:

An image with a .jpeg extension
An image with a .gif extension
An image with a .png extension
Now, opening a .jpeg file which is an image of the moon will behave the same as opening a .jpeg
file with an image of a dog. Therefore, opening only one file of type .jpeg will suffice for the
purpose of the test case. The assumption is that the system will behave the same for all .jpeg
files. The same applies to .gif and .png files. Likewise, if the application cannot open files other
than the allowed and valid types, then the result of trying to open a Word document will be the
same as trying to open an Excel spreadsheet or a text file. (Hopefully, the application has been
designed well to cope with other file types and generates an appropriate message when attempting
to open non-acceptable file types.)

These would be classed as a set of invalid equivalent data. Trying to open the application with
non-acceptable or invalid file types is an example of negative testing, which is useful when
combined with the equivalence partitioning technique, which partitions the set of equivalent
acceptable and valid data.
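
Tying the integer example above together, the sketch below picks one representative value per
partition; the particular representatives are arbitrary, since any member of a class is assumed to
behave the same.

# One representative value per equivalence class for the -10,000 to +10,000 example.
partitions = {
    "negative integers (-10000..-1)": -6391,   # any value in the range would do
    "zero (single-value class)": 0,
    "positive integers (1..10000)": 3567,
    "invalid: below the range": -10001,        # negative testing
    "invalid: above the range": 10001,         # negative testing
}

for name, representative in partitions.items():
    print(f"{name}: test with {representative}")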

Boundary Value Analysis

Boundary value analysis is a black-box testing technique, closely associated with equivalence
class partitioning. In this technique, we analyze the behavior of the application with test data
residing at the boundary values of the equivalence classes.
Example

Consider the testing of a software program that takes integers ranging between the values of
-100 and +100. In such a case, three sets of valid equivalent partitions are taken, which are: the
negative range from -100 to -1, zero (0), and the positive range from 1 to 100.

Each of these ranges has minimum and maximum boundary values. The negative range has a
lower value of -100 and an upper value of -1. The positive range has a lower value of 1 and an
upper value of 100.

While testing these values, one must see that when the boundary values for each partition are
selected, some of the values overlap. So, the overlapping values are bound to appear in the test
conditions when these boundaries are checked.

These overlapping values must be dismissed so that the redundant test cases can be eliminated.

So, the test cases for the input box that accepts integers between -100 and +100 through BVA
are listed below (a small sketch that generates these values follows the list):
 Test cases with the data same as the input boundaries of input domain: -100 and +100 in
our case.
 Test data having values just below the extreme edges of input domain: -101 and 99
 Test data having values just above the extreme edges of input domain: -99 and 101
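
The boundary values listed above can be generated mechanically. Below is a minimal sketch for
the -100 to +100 range; the variable names are illustrative only.

# Boundary values for an input field accepting integers from -100 to +100:
# the boundaries themselves plus the values just below and just above each edge.
lower, upper = -100, 100

boundary_values = sorted({
    lower - 1, lower, lower + 1,   # around the lower edge: -101, -100, -99
    upper - 1, upper, upper + 1,   # around the upper edge:   99,  100,  101
})

print(boundary_values)   # [-101, -100, -99, 99, 100, 101]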

6.2.2. Structure Based Test Design Techniques - White Box Testing Techniques

What is structure based testing?

Structure based testing is also referred to as white-box testing. In this technique, the knowledge
of the code and internal architecture of the system is required to carry out the testing.

Structure based testing techniques

The different types of structure based test design or white box testing techniques are:

 Statement testing - Statement testing is a white box testing approach in which test scripts
are designed to execute code statements. Statement coverage is the measure of the percentage
of statements of code executed by the test scripts out of the total code statements in the
application. Statement coverage is the least preferred metric for checking test coverage.

 Decision testing/branch testing - Decision testing or branch testing is a white box
testing approach in which test coverage is measured by the percentage of decision
points (e.g. if-else conditions) executed out of the total decision points in the application.

 Condition testing - Testing the condition outcomes (TRUE or FALSE). Getting
100% condition coverage requires exercising each condition for both TRUE and FALSE
results using test scripts (for n conditions we will have 2n test scripts). A short coverage
sketch follows this list.

 Multiple condition testing - Testing the different combinations of condition outcomes.
Hence for 100% coverage we will have 2^n test scripts. This is very exhaustive, and it is
very difficult to achieve 100% coverage.

 Condition determination testing - It is an optimized form of multiple condition testing
in which the combinations that don't affect the outcome are discarded.

 Path testing - Testing the independent paths in the system (paths are executable
statements from entry to exit points).
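
To illustrate decision and condition coverage, consider the hypothetical function below; the
chosen inputs show which test cases each coverage criterion demands.

# A single decision with two conditions; a hypothetical example for coverage.

def can_rent_car(age, has_licence):
    if age >= 18 and has_licence:     # one decision point containing two conditions
        return True
    return False

# Decision/branch coverage: exercise both the True and the False branch.
print(can_rent_car(25, True))    # decision True  -> True branch
print(can_rent_car(16, False))   # decision False -> False branch

# Condition coverage: each individual condition evaluated to both True and False.
print(can_rent_car(25, False))   # age >= 18 is True,  has_licence is False
print(can_rent_car(16, True))    # age >= 18 is False, has_licence is True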
7. What is a Test Case?
A test case is a set of conditions for evaluating a particular feature of a software product to
determine its compliance with the business requirements. A test case has pre-requisites, input
values and expected results in a documented form which cover the different test scenarios.

7.1. Test Case Attributes:

A test case can have following attributes:

 TestCaseId - This field uniquely identifies a test case. It is mapped with automation
scripts (if any) to keep a track of automation status. The same field can be used for
mapping with the test scenarios for generating a traceability matrix.
 E.g. - GoogleSearch_1
 Component/Module - This field specifies the specific component or module that the test
case belongs to. E.g. - Search_Bar_Module
 Priority - This field is used to specify the priority of the test case. Normally the
convention followed for specifying the priority is either High, Medium, Low or P0, P1,
P2, P3 etc., with P0 being the most critical.
 Description - In this field describe the test case in brief. E.g. - Verify that when a user
writes a search term and presses enter, search results should be displayed.
 Pre-requisites - In this field specify the conditions or steps that must be followed before
the test steps executions. E.g. - Browser is launched.
 Test Steps - In this field we mention each and every step for performing the test case.
The test steps should be clear and unambiguous. E.g.
1. Write the url - http://google.com in the browser's URL bar and press enter.
2. Once google.com is launched, write the search term - "Apple" in the google
search bar.
3. Press enter.
 Test Data - In this field we specify the test data used in the test steps. E.g. in the above
test step example we could use the search term-"apple" as test data.
 Expected Result - This field specifies the expected result after the test step execution. It is
used to assert the test case. E.g. - Search results related to 'apple' should be displayed.
 Actual Result - In this field we specify the actual result after the test step execution. E.g.
- Search results with the 'apple' keyword were displayed.
 Status/Test Result - In this field we mark the test case as pass or fail based on the
expected and actual results. Possible values can be - Pass, Fail, Not executed.
 Test Executed by - In this field we specify the name of the tester who executed the test case
and marked the test case as pass or fail.

Apart from these mandatory fields there are many optional fields that can be added according to
the organization's or application's needs, like Automation Status - for marking a test as automated
or manual, TestScenarioId - for mapping a test case with its test scenario, AfterTest step - for
specifying any step required to be executed after performing the test case, TestType - to
specify if the test is applicable for Regression, Sanity, Smoke etc. and DefectId - the id of the
defect logged in any of the defect management tools.
Apart from these, some other fields can be added for additional information like Test Author,
Test Designed Date, Test Executed Date etc. A structured example of such a test case record follows.
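
For illustration, the Google search example used in the attribute list above could be captured as
a structured record like the one below; all field values are sample data.

# The sample test case from the attribute list above, captured as a structured record.
test_case = {
    "TestCaseId": "GoogleSearch_1",
    "Module": "Search_Bar_Module",
    "Priority": "P1",
    "Description": "Verify that search results are displayed for a search term",
    "Prerequisites": ["Browser is launched"],
    "TestSteps": [
        "Open http://google.com in the browser and press enter",
        "Write the search term 'Apple' in the google search bar",
        "Press enter",
    ],
    "TestData": "Apple",
    "ExpectedResult": "Search results related to 'Apple' should be displayed",
    "ActualResult": None,        # filled in during execution
    "Status": "Not executed",
    "TestExecutedBy": None,
}

print(test_case["TestCaseId"], "-", test_case["Status"])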

7.2. How to write good test cases?

As we know, a test case is a set of conditions for evaluating a software product to determine
its compliance with the business requirements. Ill-formed test cases can lead to severe
defect leakage, which can cost both time and money. So, writing effective test cases is of utmost
importance for the success of any software product.

Now, let's see how we can write effective test cases:

1. Test design technique - Follow a test design technique best suited for your organization
or project specific needs like - boundary value analysis, equivalence class partitioning
etc. This ensures that well researched standards and practices are implemented during test
case creation.
2. Clear and concise tests - The test case summary, description, test steps, expected results
etc should be written in a clear and concise way, easily understandable by the different
stakeholders in testing.
3. Uniform nomenclature - In order to maintain consistency across the different test cases
a uniform nomenclature and set of standards should be followed while writing the test
cases.
4. Fundamental/Atomic Test cases - Create test cases as fundamental as possible, testing a
single unit of functionality without merging or overlapping multiple testable parts.
5. Leave no scope of ambiguity - Write test case with clear set of instruction e.g. instead of
writing "Open homepage", write - "Open homepage - http://www.{homepageURL}.com
in the browser and press enter".
6. No Assumptions - While writing test cases do not assume any functionality, pre-requisite
or state of the application. Instead, bound the whole test case creation activity to the
requirement documents - SRS, Use-case documents etc.
7. Avoid redundancy - Don't repeat the test cases, this leads to wastage of both time and
resources. This can be achieved by well-planned and categorized test cases.
8. Traceable tests - Use traceability matrix to ensure that 100% of the application's feature
in the scope of testing are covered in the test cases.
9. Ensure that different aspects of software are covered - Ensure that apart from the
functionality, the different aspects of software are tested like performance, usability,
robustness etc are covered in the test case by creating performance test cases and
benchmarks, usability test cases, negative test cases etc.
10. Test data - The test data used in testing should be as diverse and as close to real world
usage as possible. Diverse test data makes test cases more reliable.
7.3. Test Management Tools

Test management tools are used by test teams to capture requirements, design test cases, record
test execution results and much more. They provide a friendly UI, making test case management
easier and more convenient.

1. Zephyr (JIRA + Zephyr)

JIRA offers a few choices for writing test cases; however, it is Zephyr that is the most tightly
and seamlessly integrated with JIRA's bug tracking.

Many IT developers know that JIRA is mainly a bug tracker aiming to control the development
process with tasks, bugs and other types of agile cards. Zephyr is one of the many JIRA plugins
extending JIRA's capabilities.

Main features of Zephyr


 Link to stories, tasks, requirements, etc.
 Create, view, edit and clone tests
 Execute tests
 File defects
 Track quality metrics
 Create custom dashboards
 Perform advanced searches using ZQL
 Plan test execution cycles
 Integration with tools like JIRA, Selenium, QTP/UFT, Bamboo, Jenkins, Confluence
 Well documented REST APIs for custom integrations.

2. TestLink

It is an open source and web-based test management tool. In spite of some difficulties with
installation, it is used by many development teams and QA engineers. The testing life cycle
begins with creating a project, adding members and assigning them roles. It is quite the same as
in other development tools.

Several more features:

 create and describe the requirements of your product
 create test cases on the basis of these requirements
 group your test cases in a test plan
 cover the client’s requirements (or your own) with test cases
 select a person for testing
 receive the report once the test run is over
3. TestRail

TestRail, made by Gurock Software GmbH, was the first tool our team used for planning and
testing. The company, founded in 2004, has created a range of test tools, but its most successful
product is TestRail. It is another test case management tool which provides a platform to create
and run test cases. TestRail integrates with a ticket management tool called Gemini and with
many other issue-tracking tools, and provides external links for its test case creation and
execution support.
TestRail Features:

 Create Test Case and Test Suites


 Create Test Plan
 Monitor Test Execution
 Report Metrics
 Milestones facility
 Track Test Results
 Email notifications

Other test management tools:

 qTest
 TestCollab
 TestLodge
 QACoverage
 EasyQA
 QMetry
8. Life Cycle of a Defect

A defect life cycle is the movement of a bug or defect through the different stages of its lifetime,
right from the beginning when it is first identified till the time it is marked as verified and closed.
Depending on the defect management tool used and the organization, we can have different
states as well as different nomenclature for the states in the defect life cycle. A minimal sketch of
these states and their transitions follows the list below.

 New - A bug or defect when detected is in New state


 Assigned - The newly detected bug when assigned to the corresponding developer is in
Assigned state
 Open - When the developer works on the bug, the bug lies in Open state
 Rejected/Not a bug - A bug lies in rejected state in case the developer feels the bug is
not genuine
 Deferred - A deferred bug is one whose fix is deferred for some time (to the next
releases) based on the urgency and criticality of the bug
 Fixed/InTest - When a bug is resolved by the developer it is marked as fixed and
assigned to the tester
 Reopened - If the tester is not satisfied with issue resolution the bug is moved to
Reopened state
 Verified - After the Test phase if the tester feels bug is resolved, it is marked as verified
 Closed - After the bug is verified, it is moved to Closed status.
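
The states and transitions above can be sketched as follows; the set of allowed transitions is an
illustrative assumption, since every defect management tool defines its own workflow.

from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"
    FIXED = "Fixed/InTest"
    REOPENED = "Reopened"
    VERIFIED = "Verified"
    CLOSED = "Closed"

# Assumed workflow for illustration; real tools define their own transitions.
ALLOWED_TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED, DefectState.REJECTED, DefectState.DEFERRED},
    DefectState.ASSIGNED: {DefectState.OPEN},
    DefectState.OPEN: {DefectState.FIXED, DefectState.REJECTED, DefectState.DEFERRED},
    DefectState.FIXED: {DefectState.VERIFIED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.OPEN},
    DefectState.VERIFIED: {DefectState.CLOSED},
}

def can_move(current, target):
    return target in ALLOWED_TRANSITIONS.get(current, set())

print(can_move(DefectState.FIXED, DefectState.REOPENED))   # True
print(can_move(DefectState.NEW, DefectState.CLOSED))       # False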
9. Software Development Life Cycle (SDLC)
SDLC stands for “Software Development Life Cycle”. It describes the various phases involved
in the software development process. The software development life cycle starts with the
requirement gathering phase and after that based on the requirements, design specifications are
created. The high and low level designs created in the design-specification phase lead to the
implementation phase. In the implementation phase, coding is done. This phase leads to a
software product that is required to be tested in order to validate the business requirements of the
product. After successful testing (product with no high priority bugs), the software is deployed to
the client.

9.1. SDLC Stages

The different phases of Software Development Life Cycle are:

 Requirement Gathering
 Design Specification
 Coding/Implementation
 Testing
 Deployment
 Maintenance

Now, let's see the different phases of SDLC in detail.

1. Requirement Gathering

Requirement gathering is one of the most critical phases of the SDLC. This phase forms the basis
of the whole software development process. All the business requirements are gathered from the
client in this phase. A proper document is prepared which states the purpose and the guidelines
for the other phases of the life cycle. For example, if we want to make a website for a restaurant,
the requirement analysis phase will answer questions like:

 What type of website is needed – static or dynamic?


 What kind of functionalities are needed by the client?
 Is ordering online facility required?
 Is online payment functionality required?

2. Design Specification

A software design, or layout, is prepared in this phase according to the requirements specified in
the previous step. In this phase, the requirements are broken down into multiple modules like a
login module, a signup module, menu options and other modules. This design is considered the
input for the next phase, implementation.

3. Implementation

In this phase, the actual development gets started. The developer writes codes using different
programming languages depending upon the need of the product. The main stakeholders in this
phase are the development team.

4. Testing

After the completion of the development phase, testing begins. Here testers test the software and
provide appropriate feedback to the development team. The tester checks whether the developed
software fulfills the desired requirements of the client that were described in the requirement
phase. Functional and non-functional testing is performed here before delivery.

5. Deployment

After testing is completed, the developed product goes live and is handed over to the client. Now
the client can publish it online and can decide about customer access.

6. Maintenance

In this phase, the maintenance of the software product is taken care of, for example making changes
to the software that are required to keep the product functioning as intended over a period of time.
9.2. SDLC Models

There are various models in software development life cycle depending on the requirement,
budget, criticality and various other factors. Some of the widely used SDLC models are:

 Waterfall model
 Iterative model
 Incremental model
 Spiral model
 V model
 Agile model

A. Waterfall Model

Waterfall is the oldest and most straightforward of the structured SDLC methodologies — finish
one phase, then move on to the next. No going back. Each stage relies on information from the
previous stage and has its own project plan. Waterfall is easy to understand and simple to
manage. But early delays can throw off the entire project timeline. And since there is little room
for revisions once a stage is completed, problems can’t be fixed until you get to the maintenance
stage. This model doesn’t work well if flexibility is needed or if the project is long term and
ongoing.

B. Iterative Model

The Iterative model is repetition incarnate. Instead of starting with fully known requirements,
you implement a set of software requirements, then test, evaluate and pinpoint further
requirements. A new version of the software is produced with each phase, or iteration. Rinse and
repeat until the complete system is ready.

One advantage over other SDLC methodologies: This model gives you a working version early
in the process and makes it less expensive to implement changes. One disadvantage: Resources
can quickly be eaten up by repeating the process again and again.

C. Incremental Model

In the incremental model the whole requirement is divided into various builds. Multiple development
cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into
smaller, more easily managed modules. The incremental model is one of several software development
models, alongside the V-model, the Agile model etc.

In this model, each module passes through the requirements, design, implementation and testing
phases. A working version of software is produced during the first module, so you have working
software early on during the software life cycle. Each subsequent release of the module adds
function to the previous release. The process continues till the complete system is achieved.
D. Spiral Model

One of the most flexible SDLC methodologies, the Spiral model takes a cue from the Iterative
model and its repetition; the project passes through four phases over and over in a “spiral” until
completed, allowing for multiple rounds of refinement. This model allows for the building of a
highly customized product, and user feedback can be incorporated from early on in the project.
But the risk you run is creating a never-ending spiral for a project that goes on and on.

E. V-Shaped Model

Also known as the Verification and Validation model, the V-shaped model grew out of Waterfall
and is characterized by a corresponding testing phase for each development stage. Like
Waterfall, each stage begins only after the previous one has ended. This model is useful when
there are no unknown requirements, as it’s still difficult to go back and make changes.

F. Agile Model

By breaking the product into cycles, the Agile model quickly delivers a working product and is
considered a very realistic development approach. The model produces ongoing releases, each
with small, incremental changes from the previous release. At each iteration, the product is
tested.

This model emphasizes interaction, as the customers, developers and testers work together
throughout the project. But since this model depends heavily on customer interaction, the project
can head the wrong way if the customer is not clear on the direction he or she wants to go.
10. Overview of Scrum Agile Development Methodology

Scrum is an agile development methodology for managing and completing projects. It is a way
for teams to work together to achieve a set of common goals.

Scrum is an iterative and incremental approach to software development, meaning that a large
project is split into a series of iterations called “Sprints”, where in each sprint, the goal is to
complete a set of tasks to move the project closer to completion.

Each sprint typically lasts 2 to 4 weeks or a calendar month at most. Building products one small
piece at a time encourages creativity and enables teams to respond to feedback and change and to
build exactly what is needed.

In Scrum, the product is designed, coded and tested within the sprint.

The scrum framework has three components: Roles, Events and Artifacts.

Roles

Product Owner

 Defines features and release plans
 Prioritizes features every iteration as needed
 Accepts or rejects work results

Scrum Master

 Responsible for enacting Scrum values and practices
 Ensures that the team is fully functional and productive
 Enables close cooperation across all roles and functions

Team

 Cross-functional: Programmers, testers, user experience designers, etc.


 Members should be full-time
 Teams are self-organizing

Sprint Events

Daily scrum meetings

 Daily review meeting for 10-15 mins


 Status review and not for problem solving
 All sprint team members participate

Sprint review

 Demo of new features to customer/product owner


 Team presents work accomplished during the sprint
 All major stakeholders participate

Sprint retrospective

 Periodic post mortem to review what’s working and what’s not


 Done after every sprint
 All major stakeholders participate

Artifacts

Product backlog

 A list of all desired work on the project


 Ideally expressed such that each item has value to the users or customers of the product
 Prioritized by the product owner
 Reprioritized at the start of each sprint

Sprint Backlog

 A list of tasks identified by the Scrum team to be completed during the sprint.
 The team selects the items and size of the sprint backlog
Sprint Burndown Charts

 Chart updated every day, shows the work remaining within the sprint
 Gives an indication of the progress and whether some stories need to be removed and
postponed to the next sprint
More info about Agile Methodology and Scrum:

http://www.agilenutshell.com/
https://www.collab.net/sites/default/files/uploads/CollabNet_scrumreferencecard.pdf
11. The Bug Report

11.1. Elements of a Bug Report

Writing a good defect or bug report goes a long way towards identifying and resolving problems
quickly. Here, I list the elements that are normally included in a bug report.

• Defect Identifier, ID - The identifier is very important in being able to refer to the defect
in the reports. If a defect reporting tool is used to log defects, the ID is normally a program
generated unique number which increments per defect log.
• Summary - The summary is an overall high level description of the defect and the
observed failure. This short summary should be a highlight of the defect as this is what the
developers or reviewers first see in the bug report.
• Description - The nature of the defect must be clearly written. If a developer reviewing
the defect cannot understand and cannot follow the details of the defect, then most probably the
report will be bounced back to the tester asking for more explanation and more detail which
causes delays in fixing the issue. The description should explain exactly the steps to take to
reproduce the defect, along with what the expected results were and what the outcome of the test
step was. The report should say at what step the failure was observed and what the actual result
is.
• Severity - The severity of the defect shows how severe the defect is in terms of damage
to other systems, businesses, the environment and the lives of people, depending on the nature of
the application system. The severities are normally ranked and categorized in 4 or 5 levels,
depending on the organization’s definition.
S1 – Critical: This means the defect is a show stopper with high potential damages and has no
workaround to avoid the defect. An example could be the application does not launch at all and
causes the operating system to shut down. This requires immediate attention and action and fix.
S2 – Serious: This means that some major functionalities of the applications are either missing or
do not work and there is no workaround. Example, an image viewing application cannot read
some common image formats.
S3 – Normal: This means that some major functionality does not work, but a workaround exists to
be used as a temporary solution.
S4 – Cosmetic / Enhancement: This means that the failure causes inconvenience and annoyance.
Example can be that there is a pop-up message every 15 minutes, or you always have to click
twice on a GUI button to perform the action.
S5 – Suggestion: This is not normally a defect but a suggestion to improve a functionality. This
can be GUI or viewing preferences.
• Priority - Once the severity is determined, the next step is to decide how to prioritize the
resolution. The priority determines how quickly the defect should be fixed. The priority normally
concerns the business importance, such as the impact on the project and the likely success of the
product in the marketplace. Like severity, priority is also categorized into 4 or 5 levels.
P1 – Urgent: Means extremely urgent and requires immediate resolution
P2 – High: Resolution requirement for next external release
P3 – Medium: Resolution required for the first deployment (rather than all deployments)
P4 – Low: Resolution desired for the first deployment or subsequent future releases
It is important to note that a defect which has a high severity also bears a high priority, i.e. a
severe defect will require a high priority to resolve the issue as quickly as possible. There can
never be a high severity and low priority defect. However, a defect can have a low severity but
have a high priority.
An example might be a company name is misspelled on the splash screen as the application
launches. This does not cause a severe damage to the environment or people’s lives, but can have
potential damages to company’s reputation and can harm business profits.
• Date and time - The date and time that the defect occurred or was reported is also essential.
This is normally useful when you want to search for defects that were identified for a particular
release of software or from when the testing phase started.
• Version and Build of the Software Under Test - This is very important too. In most
cases, there are many versions of software; each version has many fixes and more functionality
and enhancements to the previous versions. Therefore, it is essential to note which version of the
software exhibited the failure that we are reporting. We may always refer to that version of
software to reproduce the failure.
• Reported by - Again, this is important, because we may need to refer back to the person
who raised the defect, and we have to know who to contact.
• Related Requirement - Essentially, all features of a software application can be traced to
respective requirements. Hence, when a failure is observed, we can see which requirements have
been impacted. This can help in reducing duplicate defect reports: if we can identify the source
requirement, then when another defect is logged with the same requirement number, we may not
need to report it again, provided the defects are of a similar nature.
• Attachments/Evidence - Any evidence of the failure should be captured and submitted
with the defect report. This is a visual explanation of the description of the defect and helps the
reviewer or developer better understand the defect (screen-shots, video etc.). A hypothetical
filled-in report follows this list.
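
Pulling the elements above together, a filled-in report might be captured in a defect tool or in a
simple record like the one below; every value here is invented purely for illustration.

# Hypothetical bug report record with the elements described above (sample values only).
bug_report = {
    "id": "BUG-1042",
    "summary": "Application crashes when opening a .png image larger than 10 MB",
    "description": (
        "Steps: 1) Launch the application. 2) Choose File > Open. 3) Select a .png file "
        "larger than 10 MB. Expected: the image opens. Actual: the application closes "
        "with no error message at step 3."
    ),
    "severity": "S2 - Serious",
    "priority": "P2 - High",
    "reported_on": "2019-05-14 10:32",
    "build": "2.3.1 (build 4512)",
    "reported_by": "tester@example.com",
    "related_requirement": "REQ-IMG-007",
    "attachments": ["crash_screenshot.png"],
}

print(bug_report["id"], "-", bug_report["summary"])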

11.2. How to write an effective Bug Report

When a tester testing an application finds a defect, the life cycle of the defect starts, and it
becomes very important to communicate the defect to the developers in order to get it fixed, to
keep track of the current status of the defect, and to find out whether any similar defect was found
in previous rounds of testing. For this purpose, manually created documents were previously used
and circulated to everyone associated with the software project (developers and testers); nowadays
many bug reporting tools are available, which help in tracking and managing bugs in an effective
way.

It’s a good practice to take screen shots of execution of every step during software testing. If any
test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be
reported/logged for the same.

The tester can choose to first report a bug and then fail the test case in the bug-reporting tool or
fail a test case and report a bug. In any case, the Bug ID that is generated for the reported bug
should be attached to the test case that is failed.
At the time of reporting a bug, all the mandatory fields from the contents of bug (such as Project,
Summary, Description, Status, Detected By, Assigned To, Date Detected, Test Lead, Detected in
Version, Closed in Version, Expected Date of Closure, Actual Date of Closure, Severity, Priority
and Bug ID etc.) are filled and detailed description of the bug is given along with the expected
and actual results. The screen-shots taken at the time of execution of test case are attached to the
bug for reference by the developer.

After reporting a bug, a unique Bug ID is generated by the bug-reporting tool, which is then
associated with the failed test case. This Bug ID helps in associating the bug with the failed test
case.

After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug
fixing process progresses.

If more than one tester is testing the software application, it is possible that some other tester
has already reported a bug for the same defect found in the application. In such a situation, it
becomes very important for the tester to find out whether any bug has been reported for a similar
type of defect. If yes, then the test case has to be blocked with the previously raised bug (in this
case, the test case has to be executed once the bug is fixed). And if no such bug was reported
previously, the tester can report a new bug and fail the test case against the newly raised bug.

If no bug-reporting tool is used, the test case is written in a tabular form in a file with four
columns: Test Step No, Test Step Description, Expected Result and Actual Result. The expected and
actual results are recorded for each step, and the test case is failed at the step where it fails.
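For instance, a hypothetical failed login test case recorded in this format might look like the following (the steps and results are purely illustrative):

Test Step No | Test Step Description | Expected Result | Actual Result
1 | Open the login page | Login page is displayed | Login page is displayed
2 | Enter valid credentials and click Login | User is redirected to the home page | An error message is shown - failed at this step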
This file containing the test case, together with the screenshots taken, is sent to the developers
for reference. As the tracking process is not automated, it is important to keep the information
about the raised bug up to date until it is closed.
11.3. Bug Tracking Software
A bug tracking system, also known as a defect tracking system, is a software application that
helps keep track of reported software bugs across software development projects. Bug tracking
tools are a type of issue tracking system; they are also used by application support teams to keep
track of the various issues reported against the software.
Some of the best-known tracking tools in the software industry are:
1. BugZilla - Bugzilla is a bug tracking system supported and developed by the Mozilla Foundation
that allows its users to log and track defects in their product effectively. It is a very mature
and feature-rich application, with features such as advanced search capabilities, bug lists in
multiple formats, scheduled reports, automatic duplicate bug detection, the ability to file/modify
bugs by email, time tracking, a request system, private attachments and comments, a patch viewer,
etc.

2. Mantis BT - Mantis BT is a web-based bug tracking system that not only keeps track of bugs but
also includes a user system, so that multiple users can interact and multiple projects can be
tracked. The application has features such as an integrated wiki, chat, RSS feeds, time tracking,
source code integration, built-in reporting, email notifications, attachments, multi-DBMS support,
support for mobile devices, etc.

3. Trac - Trac is a web-based, open-source issue tracking system developed in Python. It combines
an enhanced wiki with an issue tracker and is used for software development projects. When Trac is
integrated with your SCM, you can use it to browse through the code, view history, view changes,
etc. It supports multiple platforms such as Linux, Unix, Mac OS X and Windows. A timeline shows
all current and past project events in order, while the roadmap highlights the upcoming
milestones.

4. RedMine - Redmine is a popular issue tracking tool built on Ruby on Rails and dating back to
2006. Similar in many regards to Trac, Redmine is capable of managing multiple projects and
integrates with a number of version control systems. In addition to basic issue tracking, Redmine
also offers forums, wikis, time tracking tools, and the ability to generate Gantt charts and
calendars to track progress.

5. JIRA - Thousands of software professionals use JIRA as a bug-tracking tool because of its
easy-to-use framework. JIRA is a commercial product that helps capture and organize team issues,
prioritize them and keep them updated as the project progresses. It integrates directly with code
development environments, making it a perfect fit for developers as well. Because it can track any
kind of issue, it is not restricted to the software industry. It supports agile projects and comes
with many add-ons that make it more powerful than other tools.
More info about JIRA:
https://www.youtube.com/watch?v=9Z5ruL6JOHk
http://www.guru99.com/jira-tutorial-a-complete-guide-for-beginners.html
https://confluence.atlassian.com/jira064/jira-user-s-guide-720416011.html
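Because JIRA also exposes a REST API, bugs can be logged programmatically, which is useful when automated checks fail. The following is a minimal, illustrative Python sketch using the requests library; it assumes a self-hosted Jira instance exposing the /rest/api/2/issue endpoint, and the base URL, credentials, project key and field values are placeholders rather than real data.

import requests

JIRA_URL = "https://jira.example.com"                 # placeholder Jira instance
AUTH = ("tester.name", "api-token-or-password")       # placeholder credentials

bug = {
    "fields": {
        "project": {"key": "SHOP"},                   # hypothetical project key
        "issuetype": {"name": "Bug"},
        "summary": "Checkout: 'Pay now' button unresponsive on Firefox",
        "description": (
            "Steps to reproduce:\n"
            "1. Add any item to the basket\n"
            "2. Go to checkout and click 'Pay now'\n"
            "Expected: payment page opens\n"
            "Actual: nothing happens; a JavaScript error appears in the console"
        ),
        "priority": {"name": "High"},
    }
}

# Create the issue and print the generated Bug ID (key), e.g. SHOP-123
response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=bug, auth=AUTH)
response.raise_for_status()
print("Created bug:", response.json()["key"])

The returned key is the unique Bug ID that can then be attached to the failed test case, as described in section 11.2.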
12. Practical Tips for Software Testers
Below is a list of guidelines and tips for software testers and QA professionals to use when
testing applications. These software testing tips have been collected from many years of
experience in testing web applications in an agile environment.
Some guidelines for QAs when testing a story or bug

 Don’t leave any questions unanswered. The acceptance criteria must be complete so that you fully understand what the feature/story is meant to achieve.
 Ensure you know how to test the feature/story.
 Consider the full end-to-end flows when thinking about test cases.
 Consider all related error scenarios, e.g. web service connection down, invalid inputs, etc.
 Consider different browsers, as per the list of supported browsers.
 Consider mobile impact (mobile web and tablet): should any of the features behave differently when used on a touch device, compared to using a keyboard to navigate?
 Consider the basics of security testing, for example checking that both the URL and the resources of protected areas of the site are served over HTTPS (see the sketch after this list).
 Consider whether the story warrants inclusion in the automation test suite. As a rough guide, automate only scenarios whose failure would result in a P1 or P2 defect in production, plus scenarios with a lot of data to be checked, which would be very repetitive to verify manually.
 When you find bugs related to a story, raise them as bug sub-tasks to ensure the link to the story is kept.
 When signing a story or bug off as testing complete, ensure a comment is added in Jira which includes the test environment and the code version on which the tests were signed off.
 If the story or bug cannot or will not be tested by a QA and will instead be tested by a developer, ensure you review the test approach and add a note in Jira that you approve of the developer’s test approach, ideally with a short description. Ensure the developer records which version is being signed off.
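For the HTTPS point above, a quick automated check is often enough. The sketch below is illustrative only: it assumes the Python requests library and uses a placeholder URL for a protected page, and simply verifies that a request over plain HTTP is redirected to the HTTPS equivalent.

import requests

# Placeholder URL for a protected area of the site under test
resp = requests.get("http://shop.example.com/account", allow_redirects=False)

# The server should answer with a redirect pointing at the HTTPS version of the page
assert resp.status_code in (301, 302, 307, 308), resp.status_code
assert resp.headers["Location"].startswith("https://"), resp.headers.get("Location")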
On Daily Tasks

 Understand the areas of the application being modified by developers
 Find out what unit tests have been written by developers
 Identify the high-priority stories and prioritize work depending on the day of the sprint
 Get clarification on stories that are vague
 Review the automated checks to see if there were any failures
On Sprint Planning

 Estimate the testing effort for each story
 Talk with the PO to resolve any misunderstandings about new stories
 Ensure the stories are testable
 Be very proactive in the meeting by asking questions to get ideas for tests
 Start thinking about high-level test scenarios
On Test Design in collaboration with Dev and PO

 Think of test cases to validate features, applying various test techniques: positive, negative, Boundary Values, Equivalence Partitions, etc. (a boundary-value sketch follows this list)
 Use mind maps to assist with test scenarios and user journeys
 Consider risks: provide more test conditions around a feature of high risk
 Always think about “what if”, “what else” and “how else” when designing test cases
 Think about integration tests: how does this feature affect its nearest-neighbor features?
 Really understand what is going on when interacting with a feature rather than just looking at it from the surface; think about which back-end systems, databases and web services are being touched
 Identify candidates for automation: which test cases are best automated?
 When there are a lot of combinations of data to test, consider how the permutations can be reduced without compromising quality, e.g. by using the pair-wise test technique (a pair-wise sketch also follows this list)
 Hold peer reviews of test conditions, discussing with developers what test cases have been designed
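As an illustration of the boundary-value and equivalence-partition techniques mentioned in the first bullet, here is a small pytest sketch. The validate_age function and its rule (ages 18 to 65 inclusive are valid) are invented purely for the example; the point is that each partition is represented and both sides of every boundary are exercised.

import pytest

def validate_age(age: int) -> bool:
    # Hypothetical business rule used only for illustration: 18-65 inclusive is valid
    return 18 <= age <= 65

@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),  # just below the lower boundary (invalid partition)
        (18, True),   # lower boundary (valid partition)
        (19, True),   # just above the lower boundary
        (40, True),   # representative value from the middle of the valid partition
        (64, True),   # just below the upper boundary
        (65, True),   # upper boundary
        (66, False),  # just above the upper boundary (invalid partition)
    ],
)
def test_validate_age_boundaries(age, expected):
    assert validate_age(age) is expected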
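To illustrate how the pair-wise technique reduces permutations, the sketch below (standard library only; the parameter names and values are made up) compares the full cartesian product of three parameters with a hand-picked pair-wise suite, and includes a small helper that verifies every pair of values is still covered.

from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "macOS", "Linux"],
    "user": ["guest", "registered"],
}

full_suite = list(product(*parameters.values()))      # 2 * 3 * 2 = 12 runs

# A manually chosen pair-wise suite: every (browser, os), (browser, user)
# and (os, user) value pair appears in at least one row
pairwise_suite = [
    ("Chrome",  "Windows", "guest"),
    ("Chrome",  "macOS",   "registered"),
    ("Chrome",  "Linux",   "guest"),
    ("Firefox", "Windows", "registered"),
    ("Firefox", "macOS",   "guest"),
    ("Firefox", "Linux",   "registered"),
]

def covers_all_pairs(suite, params):
    # Check that every pair of values from any two parameters occurs together in some row
    names = list(params)
    for i, j in combinations(range(len(names)), 2):
        for a, b in product(params[names[i]], params[names[j]]):
            if not any(row[i] == a and row[j] == b for row in suite):
                return False
    return True

print(len(full_suite), "exhaustive runs vs", len(pairwise_suite), "pair-wise runs")
print("All pairs covered:", covers_all_pairs(pairwise_suite, parameters))

Dedicated pair-wise generation tools can build such suites automatically for larger parameter sets.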
On Test Execution / Completion

 Set up test environments and prerequisites before execution
 Test as soon as a feature is ready and available
 Ensure quick feedback is provided to the developers
 Review the automated checks to see if there were any failures
 Check whether the newly developed feature makes business sense
 Talk to developers to improve the testability of a feature
 Ensure existing tests are updated if there is a change in the workflow
 Maintain the test packs and ensure all tests are up to date
On Process Improvement / Self Development

 Learn about new developments in software testing
 Identify current issues with the QA process and how they can be solved or improved
 Learn technical skills such as databases, coding and web technologies to get a better understanding of what is happening when testing
 Discuss with the team their thoughts about process improvements