
TESTING IN IT

Presented by
Poonam Kasat 07030141045
Milan Nawandar 07030141050
Mayur Shetty 07030141043
Savita 070301410
Ashok 070301410
Testing:

• Testing is exercising the software to uncover errors and ensure the system meets defined requirements.

• It represents an ultimate review of specification, design and code generation.

• "Testing is the process of establishing confidence that a program or system does what it is supposed to." - Hetzel, 1973
Objectives of Testing

• To determine all expected error conditions.
• To provide procedures for correction of errors.
• To maintain control over errors.
• To demonstrate that a given software product matches its requirement specifications.
• To validate the quality of software testing using minimum cost and effort.
Causes of software bugs
• Specification: sometimes incomplete, constantly changing, or not communicated to the entire team.

• Design: rushed, changed, or not communicated properly.

• Code generation: software complexity, poor documentation, schedule pressure and poor understanding of requirements.

• Cost: bugs found at a later stage increase the cost of fixing them.

Who does software testing?

- Test manager
  - Manages and controls a software test project
  - Supervises test engineers
  - Defines and specifies a test plan

- Software test engineers and testers
  - Define test cases, write test specifications, run tests

- Independent test group

- Development engineers
  - Perform only unit tests and integration tests

- Quality assurance group and engineers
  - Perform system testing
  - Define software testing standards and the quality control process
Testing Principles
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• Testing should begin "in the small" and progress towards "in the large".
• Exhaustive testing is not possible.
• Testing should be conducted by an independent third party.
Verification & Validation

• Software testing is one element of a broader topic that is often referred to as Verification and Validation (V&V).

• Verification refers to the set of activities that ensure that software correctly implements a specific function.

• Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Software Testing Process: V&V Targets

• Unit test -> Code & implementation
• Integration test -> Software design
• System test -> System specification
• Acceptance test -> Requirements
Types of Testing
• White Box (Glass Box) Testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.
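
A minimal sketch of the idea, assuming a made-up apply_discount function (not from the slides): white-box cases are chosen so that every branch of the code executes at least once.

```python
# Hypothetical function under test; its branch structure drives the test cases.
def apply_discount(total):
    if total >= 100:       # branch 1: orders of 100 or more get a flat 10 off
        return total - 10
    return total           # branch 2: smaller orders are unchanged

# White-box cases: one per branch, giving full branch coverage of the 'if'.
assert apply_discount(150) == 140   # exercises the discount branch
assert apply_discount(50) == 50     # exercises the fall-through branch
print("both branches covered")
```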

• Black Box Testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
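
A hedged illustration, assuming an invented requirement ("a valid password is 8 to 20 characters long"): the cases come only from the stated requirement, including its boundary values, with no reference to how the check is implemented.

```python
# Assumed requirement (illustration only): a password is valid if it is 8-20 characters.
def is_valid_password(pw):
    return 8 <= len(pw) <= 20

# Black-box cases derived from the requirement alone, including its boundaries.
cases = [
    ("short", False),      # below the minimum length
    ("exactly8", True),    # lower boundary (8 characters)
    ("a" * 20, True),      # upper boundary (20 characters)
    ("a" * 21, False),     # just over the maximum
]
for pw, expected in cases:
    assert is_valid_password(pw) == expected
print("all requirement-based cases pass")
```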

• Acceptance Testing: each time a new version of the program is received, a short acceptance test checks whether it is stable enough to be tested further. The test is standardized, and copies are given to the programmers so they can run it before submitting a build.
• Alpha Testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. It is typically done by end-users or others, not by programmers or testers.

• Beta Testing: testing when development and testing are essentially complete and final bugs and problems need to be found before the final release. It is typically done by end-users or others, not by programmers or testers.

• Comparison Testing: comparing the software's strengths and weaknesses with those of competing products.

• Compatibility Testing: testing how well the software performs in a particular hardware, software, operating system or network environment.

• Functional Testing: black-box testing geared to the functional requirements of an application; this type of testing should be done by testers. That does not mean the programmers should not check that their code works before releasing it.

• Guerilla Testing: the nastiest test cases that can be thought up, executed "on the fly". The tests may probe boundaries, error handling and challenging module interactions; anything the tester thinks is likely to expose a bug is fair game.
• Integration Testing: testing combined components of an application to determine whether they function together correctly. The components can be code modules, individual applications, client and server applications on a network, and so on.
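
A minimal sketch with two hypothetical components (a data store and a greeting formatter, neither from the slides) exercised together through their real interface rather than in isolation.

```python
# Hypothetical data-access component.
class InMemoryUserStore:
    def __init__(self):
        self._users = {1: "Alice", 2: "Bob"}

    def get_name(self, user_id):
        return self._users[user_id]

# Hypothetical presentation component that depends on the store.
def greeting_for(store, user_id):
    return f"Hello, {store.get_name(user_id)}!"

# Integration test: both components are used together, as in the real application.
store = InMemoryUserStore()
assert greeting_for(store, 1) == "Hello, Alice!"
assert greeting_for(store, 2) == "Hello, Bob!"
print("components work together")
```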

• Load/Performance Testing: testing an application under heavy load, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
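
A toy, in-process sketch of the measurement. A real load test (for example with LoadRunner) would drive a deployed system with concurrent virtual users; handle_request below is only a stand-in invented for this example.

```python
import time

# Stand-in for the system under test; it slows down as the load rises.
def handle_request(concurrent_users):
    time.sleep(0.001 * concurrent_users)

# Drive the stand-in at increasing load levels and record the response time.
for load in (1, 10, 50, 100):
    start = time.perf_counter()
    handle_request(load)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"load={load:>3} users  response={elapsed_ms:.1f} ms")
```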

• Mutation Testing: a method of determining whether a set of test data or test cases is useful, by deliberately introducing code changes (bugs) and retesting with the original test data/cases to see whether the bugs are detected. Proper implementation requires large computational resources.
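
A toy illustration of the idea; the max function and the injected mutant are invented for this sketch, and real mutation tools generate and run such mutants automatically.

```python
# Original code and a "mutant" with a deliberately injected bug ('>' flipped to '<').
def original_max(a, b):
    return a if a > b else b

def mutant_max(a, b):
    return a if a < b else b

# The existing test data; the mutant is "killed" if at least one case fails on it.
test_data = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def suite_kills(func):
    return any(func(*args) != expected for args, expected in test_data)

assert not suite_kills(original_max)   # all cases pass on the real code
assert suite_kills(mutant_max)         # the injected bug is caught
print("the test data is strong enough to kill this mutant")
```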
• Recovery Testing: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

• Regression Testing: retesting after fixes or modifications of the software or its environment. It can be difficult to determine how much retesting is needed, especially near the end of the development cycle.
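
A minimal sketch, assuming a hypothetical parse_price function: cases recorded when earlier defects were fixed stay in the suite and are re-run after every change, so a reappearing bug is caught immediately.

```python
# Hypothetical function that has been patched twice in the past.
def parse_price(text):
    return float(text.strip().replace(",", ""))

# Each entry documents behaviour that must never break again; the suite only grows.
regression_cases = [
    ("19.99", 19.99),        # original behaviour
    (" 19.99 ", 19.99),      # added when a whitespace defect was fixed
    ("1,299.50", 1299.50),   # added when a thousands-separator defect was fixed
]

for raw, expected in regression_cases:
    assert parse_price(raw) == expected, f"regression on input {raw!r}"
print("no regressions detected")
```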

• Security Testing: testing how well the system protects against unauthorized internal or external access, willful damage, and so on.
• System Testing: black-box testing that is based on the overall requirement specifications and covers all combined parts of a system.

• Unit Testing: the smallest scale of testing. It tests the internal workings (functions) of a program, unit or module without regard to how it is invoked.
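
A minimal sketch using Python's built-in unittest module; the add_tax function is hypothetical and exists only so there is a single unit to test in isolation.

```python
import unittest

# The unit under test: returns the amount plus tax, rounded to 2 decimal places.
def add_tax(amount, rate=0.18):
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    def test_default_rate(self):
        self.assertEqual(add_tax(100), 118.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0), 0.0)

    def test_custom_rate(self):
        self.assertEqual(add_tax(200, rate=0.05), 210.0)

if __name__ == "__main__":
    unittest.main()
```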

• Sanity Testing: typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort.
• Usability Testing: testing for "user friendliness" from the user's point of view. This is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used.

• User Acceptance Testing: testing to determine whether the software is satisfactory to an end-user or customer.

• Integrity & Release Testing: anticipates every major criticism that will appear in product reviews and every major complaint from customers. The release is also scanned for viruses.
• Performance Testing: its objective is performance enhancement. This test determines which modules execute most often and how well those modules perform.
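
A hedged sketch using Python's built-in cProfile module to collect exactly this kind of data: how often each function runs and where the time goes. The workload functions are invented for the example.

```python
import cProfile

def cheap_step():
    return sum(range(100))

def expensive_step():
    return sum(i * i for i in range(20000))

def workload():
    # Imaginary application run: many cheap calls, a few expensive ones.
    for _ in range(200):
        cheap_step()
    for _ in range(20):
        expensive_step()

# The profile report lists call counts and cumulative time per function,
# pointing at the modules worth optimizing.
cProfile.run("workload()", sort="cumulative")
```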
Testing Scenario: Benefits Phase-wise

• Feasibility phase
  Advantage: early test estimates help determine overall product feasibility.

• Analysis phase
  Advantage: testing identifies requirements that might not be testable.

• Development phase
  Advantage: early product experience helps finalize test strategies.
Commonly used Testing Tools
• WinRunner: Mercury Interactive's enterprise functional testing tool, used to quickly create and run sophisticated automated tests on an application.

• TestDirector: the industry's first global test management solution; it helps organizations deploy high-quality applications more quickly and effectively.

• LoadRunner: a performance and load testing product by Hewlett-Packard (since it acquired Mercury Interactive in November 2006) for examining system behaviour and performance while generating actual load.
