
TEST PLANNING

Test planning is one of the keys to successful software testing, yet it's frequently
omitted due to time constraints, lack of training, or cultural bias. A survey taken at
a recent STAR conference showed that 81% of the companies participating in the
survey completed test plans. That doesn't sound too bad, but our experience has
shown that many of those 81% are calling the testing schedule the test plan, so the
actual percentage is probably much lower. Testing without a plan is analogous to
developing software without a project plan, and it generally occurs for the same
reason: pressure to begin coding (or in this case, testing) as soon as possible. Many
organizations measure progress in development by modules completed or lines of
code delivered, and in testing by the number of test cases run. While these can be
valuable measures, they don't recognize planning as a worthwhile activity.

Levels (Stages) of Test Planning

Test planning can and should occur at several levels or stages. The first plan to
consider is the Master Test Plan (MTP), which can be a separate document or could
be included as part of the project plan. The purpose of the MTP is to orchestrate
testing at all levels. The IEEE Std. 829-1998 Standard for Software Test
Documentation identifies the following levels of test: Unit, Integration, System, and
Acceptance. Other organizations may use more or fewer than four levels and possibly
use different names. Some other levels (or at least other names) that we frequently
encounter include beta, alpha, customer acceptance, user acceptance, build, string,
and development. In this book, we will use the four levels identified in the IEEE
standard, as illustrated in the accompanying figure.

Test planning CAN'T be separated from project planning. All important test planning
issues are also important project planning issues.

The test manager should think of the Master Test Plan as one of his or her major
communication channels with all project participants. Test planning is a process
that ultimately leads to a document that allows all parties involved in the testing
process to proactively decide what the important issues are in testing and how to
best deal with these issues. The goal of test planning is not to create a long list of
test cases, but rather to deal with the important issues of testing strategy, resource
utilization, responsibilities, risks, and priorities.

In test planning, even though the document is important, the process is ultimately
more important than the document. Discussing issues of what and how to test early
in the project lifecycle can save a lot of time, money, and disagreement later.

In addition to the Master Test Plan, it is often necessary to create detailed or level-
specific test plans. On a larger or more complex project, it's often worthwhile to
create an Acceptance Test Plan, System Test Plan, Integration Test Plan, Unit Test
Plan, and other test plans, depending on the scope of your project. Smaller projects,
that is, projects with a smaller scope, fewer participants, and fewer organizations
involved, may find that they need only one test plan covering all levels of test.
Deciding the number and scope of test plans required should be one of the first
strategy decisions made in test planning. As the complexity of a testing activity
increases, the criticality of having a good Master Test Plan increases exponentially,
as illustrated in the accompanying figure.

Detailed Test Planning. For the most part, the major considerations for detailed test
plans are the same as those for the master test plan, but differ in scope and level of
detail. In fact, it's normally desirable to use the same basic template for the detailed
test plans that you use for the master test plan.

TEST MANAGEMENT

Purpose:

Few people can argue against the need for improved quality in software
development. Users of technology that relies on software have come to expect
various faults and flaws, and, especially in the world of personal computers,
frequent problems are considered completely normal. However, as software
development matures, we are beginning to better understand how to achieve the
necessary improvement in quality. The purpose of this article is to introduce
concepts about, and provide general best practices in the field of, test
management.

What is test management?

An important part of software quality is the process of testing and validating the
software. Test management is the practice of organizing and controlling the process
and artifacts required for the testing effort. Traditional tools used for test
management include:

•Pen and paper

•Word processors

•Spreadsheets

Larger testing efforts may use home-grown software test management solutions,
usually built on spreadsheets or databases, or commercial test management
applications such as IBM® Rational® ClearQuest® Test Manager or Mercury
TestDirector.

The general goal of test management is to allow teams to plan, develop, execute,
and assess all testing activities within the overall software development effort. This
includes coordinating efforts of all those involved in the testing effort, tracking
dependencies and relationships among test assets and, most importantly, defining,
measuring, and tracking quality goals.

Aspects of test management

Test management can be broken into different phases: organization, planning,
authoring, execution, and reporting. These are described in more detail below.

Test artifact and resource organization is a clearly necessary part of test
management. This requires organizing and maintaining an inventory of items to
test, along with the various things used to perform the testing. This addresses how
teams track dependencies and relationships among test assets. The most common
types of test assets that need to be managed are:

•Test scripts

•Test data

•Test software

•Test hardware

Test planning is the overall set of tasks that address the questions of why, what,
where, and when to test. The reason why a given test is created is called a test
motivator (for example, a specific requirement that must be validated). What should
be tested is broken down into many test cases for a project. Where to test is answered
by determining and documenting the needed software and hardware configurations.
When to test is resolved by assigning iterations (or cycles, or time periods) to the
testing.
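
As a minimal sketch of how these four answers might be recorded (the field names
and example values here are illustrative, not taken from any particular tool), a
planned test can be captured as a simple record:

from dataclasses import dataclass

@dataclass
class PlannedTest:
    motivator: str      # why: e.g., the requirement this test validates
    test_case: str      # what: the behavior being tested
    configuration: str  # where: the software/hardware configuration
    iteration: str      # when: the iteration or cycle in which it runs

# Example entry in a test plan inventory
login_test = PlannedTest(
    motivator="REQ-042: users must authenticate before checkout",
    test_case="valid login redirects to the account page",
    configuration="Chrome 120 on Windows 11, staging database",
    iteration="Iteration 3",
)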

Test authoring is a process of capturing the specific steps required to complete a
given test. This addresses the question of how something will be tested. This is
where somewhat abstract test cases are developed into more detailed test steps,
which in turn will become test scripts (either manual or automated).
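
For example, an abstract test case such as "a registered user can log in" can be
authored into concrete steps and then into an automated script. A minimal sketch
using Python's unittest, with a stand-in application object since the real system
under test is not part of this example:

import unittest

class FakeApp:
    """Stand-in for the system under test (illustrative only)."""
    def __init__(self):
        self.page = "/"
    def open(self, path):
        self.page = path
    def submit_login(self, user, password):
        # Accept one hard-coded credential pair for the sketch.
        if self.page == "/login" and (user, password) == ("alice", "secret"):
            self.page = "/account"

class LoginTest(unittest.TestCase):
    def test_valid_login(self):
        app = FakeApp()
        app.open("/login")                      # step 1: open the login page
        app.submit_login("alice", "secret")     # step 2: submit credentials
        self.assertEqual(app.page, "/account")  # step 3: verify destination

if __name__ == "__main__":
    unittest.main()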

Test execution entails running the tests by assembling sequences of test scripts into
a suite of tests. This is a continuation of answering the question of how something
will be tested (more specifically, how the testing will be conducted).
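
A minimal sketch of suite assembly using Python's unittest (the test classes here
are trivial placeholders):

import unittest

class SmokeTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

class RegressionTests(unittest.TestCase):
    def test_string_upper(self):
        self.assertEqual("abc".upper(), "ABC")

# Assemble individual test scripts into one executable suite.
suite = unittest.TestSuite()
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests))
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)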

Test reporting is how the various results of the testing effort are analyzed and
communicated. This is used to determine the current status of project testing, as
well as the overall level of quality of the application or system.

The testing effort will produce a great deal of information. From this information,
metrics can be extracted that define, measure, and track quality goals for the
project. These quality metrics then need to be passed to whatever communication
mechanism is used for the rest of the project metrics.
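
As a small illustration, a quality metric such as pass rate can be extracted from raw
result records and handed off to whatever reporting channel the project uses (the
result data below is invented):

# Raw results as they might come out of a test run (illustrative data).
results = [
    {"test": "login_valid", "status": "pass"},
    {"test": "login_locked_account", "status": "fail"},
    {"test": "checkout_empty_cart", "status": "pass"},
]

total = len(results)
passed = sum(1 for r in results if r["status"] == "pass")
pass_rate = passed / total if total else 0.0

# Hand the metric to the project's communication mechanism (here, stdout).
print(f"Executed: {total}, passed: {passed}, pass rate: {pass_rate:.0%}")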

A very common type of data produced by testing, one which is often a source for
quality metrics, is defects. Defects are not static, but change over time. In addition,
multiple defects are often related to one another. Effective defect tracking is crucial
to both testing and development teams.
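
Because defects change over time and are often related, a defect record typically
carries a status history and links to other defects. A minimal sketch (field names
and IDs are illustrative):

from dataclasses import dataclass, field

@dataclass
class Defect:
    defect_id: str
    summary: str
    status: str = "open"
    history: list = field(default_factory=list)   # status changes over time
    related: list = field(default_factory=list)   # ids of related defects

    def transition(self, new_status):
        self.history.append((self.status, new_status))
        self.status = new_status

bug = Defect("DEF-101", "login fails for locked accounts")
bug.related.append("DEF-097")      # link to a related defect
bug.transition("in_progress")
bug.transition("resolved")
print(bug.status, bug.history)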

Other factors in test management

In addition to the software and hardware test artifacts and resources, the testing
team has to be managed. Test management must coordinate the efforts of all
project team members involved in the testing effort. This requires controlling user
security and permissions for testing members and artifacts. For projects that span
more than one site or team (which is rapidly becoming the norm), this also includes
organizing site and team coordination.

The particular testing process for a project will have an obvious bearing on test
management. For an iterative project, test management will have to provide the
foundation and guide the effort to plan, execute, and evaluate testing iteratively.
The testing strategy, in turn, will have to fit within the test management
framework.

EXECUTION AND REPORTING


Performance test execution is the activity that occurs between developing test
scripts and reporting and analyzing test results. Much of the performance testing–
related training available today treats this activity as little more than starting a test
and monitoring it to ensure that the test appears to be running as expected. In
reality, this activity is significantly more complex than just clicking a button and
monitoring machines. This chapter addresses these complexities based on
numerous real-world project experiences.

The following activities are involved in performance test execution:

•Validate the test environment

•Validate tests

•Run tests

•Baseline and benchmark

•Archive tests

Performance test execution involves activities such as validating test
environments/scripts, running the test, and generating the test results. It can also
include creating baselines and/or benchmarks of the performance characteristics. It
is important to validate the test environment to ensure that the environment truly
represents the production environment.
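
A minimal sketch of the run-and-baseline portion of this flow, timing a stand-in
operation and comparing the average against a previously established baseline (the
operation, baseline value, and tolerance are all illustrative):

import time

def operation_under_test():
    # Stand-in for the real transaction being measured.
    time.sleep(0.01)

def run_performance_test(iterations=50):
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation_under_test()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

BASELINE_SECONDS = 0.015           # established from an earlier baseline run

average = run_performance_test()
print(f"average response: {average:.4f}s")
if average > BASELINE_SECONDS * 1.2:   # allow 20% regression tolerance
    print("WARNING: performance regressed against the baseline")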

In execution and reporting, we perform the following:

Elegant, Simple Test Execution

•Execution of different subsets of tests within projects

•Support for compatibility testing in different test environments

•Guide manual testers in executing tests

•Import results from test automation tools

Comprehensive Reporting

•See in real time which tests have been run and how much remains to do

•Automated regression comparison of new and previous results

•Assess the performance of individual testers

•Review requirements traceability and coverage

•Automatically detect areas of low project test coverage


•Unlimited number of saved report configurations

•Automatic notification of users via email for a variety of trigger events

TestManager is the central console for test activity management, execution and
reporting. Built for extensibility, it supports everything from pure manual test
approaches to various automated paradigms including unit testing, functional
regression testing, and performance testing. TestManager is meant to be accessed
by all members of a project team, ensuring the high visibility of test coverage
information, defect trends, and application readiness.

Test Execution

When it comes time to execute your tests, SQA Design offers flexible and cost-
effective solutions.

Features:

•Tests are assigned to users for execution. These users can be internal, offsite, or
even offshore resources.

•There is 100% transparency in the test execution process. Who's working on what
will never be an unanswered question. Curious what your offshore vendors are
doing? With a click of a button you can find out.

•Real-time execution reporting provides you with detailed information, letting you
know how many test failures there have been and how much testing remains to be
completed.

•Tests are tracked against builds providing an insightful history of your progress.
This data can be used to improve your process.

Reporting

Qualify offers built-in reports to help you better assess your testing efforts.

Features:

•Test Plan Coverage Reports. These reports tell you how much test coverage you're
getting in a given test plan. They also tell you what's missing if you wish to improve
your test coverage.

•Build Status report. This report gives you a breakdown of test completion and test
success by build.

•Printable test plan reports. These reports allow you to print a test plan, giving you
the option to distribute a hard copy to testers.

SOFTWARE TEST AUTOMATION


In business today, poor quality software applications can increase costs, impact
revenue and negatively affect reputation and brand recognition. Software test
automation has long been recognized for its potential to improve the breadth of
testing by reducing redundant, manual testing and maximizing repeatability and
test accuracy. However, high costs and maintenance have plagued these traditional
approaches.

An efficient and cost-effective approach to software test automation helps ensure
that the software applications meet the performance, functional and service-level
expectations of your users, customers and business partners.

Software test automation reduces the business risk associated with releasing
applications of unknown quality, reliability, performance and compliance by
automating three critical software test areas: Continuous Integration and Test,
Automated Functional Testing and Software Performance Testing.

An Automated Functional Testing solution provides robust, maintainable and cost-
effective functional test automation, ensuring reduced business costs and increased
application quality.

Automation is the integration of testing tools into the test environment in such a
fashion that the test execution, logging, and comparison of results are done with
minimal human intervention. Generally, most experienced testers and managers
have learned (in the school of hard knocks) that it's typically not fruitful, and
probably not possible or reasonable, to automate every test. Obviously if you're
trying to test the human/machine interface, you can't automate that process since
the human is a key part of the test.

Test automation is the use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test
preconditions, and other test control and test reporting functions. Commonly, test
automation involves automating a manual process already in place that uses a
formalized testing process.

Although manual tests may find many defects in a software application, it is a
laborious and time-consuming process. In addition, it may not be effective in finding
certain classes of defects. Test automation is a process of writing a computer
program to do testing that would otherwise need to be done manually. Once tests
have been automated, they can be run quickly. This is often the most cost-effective
method for software products that have a long maintenance life, because even
minor patches over the lifetime of the application can cause features that were
working earlier to break.

There are two general approaches to test automation:

Code-driven testing. The (usually public) interfaces to classes, modules, or
libraries are tested with a variety of input arguments to validate that the returned
results are correct.
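
For example, a code-driven test calls a library function directly with a variety of
inputs and validates the returned results (a sketch using Python's unittest; the
function under test is invented for illustration):

import unittest

def clamp(value, low, high):
    """Function under test: restrict value to the range [low, high]."""
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_variety_of_inputs(self):
        # (input, expected) pairs covering below, inside, and above the range
        cases = [(-5, 0), (3, 3), (99, 10)]
        for value, expected in cases:
            self.assertEqual(clamp(value, 0, 10), expected)

if __name__ == "__main__":
    unittest.main()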

Graphical user interface testing. A testing framework generates user interface
events such as keystrokes and mouse clicks, and observes the changes that result
in the user interface, to validate that the observable behavior of the program is
correct.
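
A GUI-level sketch of the same idea using the Selenium WebDriver library (this
assumes the selenium package and a Chrome driver are installed; the URL and
element ids are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # generates real browser events
try:
    driver.get("https://example.com/login")      # hypothetical page
    driver.find_element(By.ID, "user").send_keys("alice")      # keystrokes
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()               # mouse click
    # Observe the resulting change in the UI and validate it.
    assert "Account" in driver.title
finally:
    driver.quit()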

Test automation tools can be expensive, and automation is usually employed in
combination with manual testing. It can be made cost-effective in the longer term,
especially when used repeatedly in regression testing.

One way to generate test cases automatically is model-based testing, which uses a
model of the system for test case generation; research continues into a variety of
alternative methodologies for doing so.
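
As a toy illustration of the model-based idea, test sequences can be generated by
walking the transitions of a state-machine model (the model below is invented):

# A toy model of a login dialog as a state machine.
model = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_tests(model, start, depth):
    """Enumerate event sequences of a given length as candidate test cases."""
    if depth == 0:
        return [[]]
    tests = []
    for event, target in model[start].items():
        for rest in generate_tests(model, target, depth - 1):
            tests.append([event] + rest)
    return tests

for sequence in generate_tests(model, "logged_out", 2):
    print(" -> ".join(sequence))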

What to automate, when to automate, or even whether one really needs automation
are crucial decisions which the testing (or development) team must make. Selecting
the correct features of the product for automation largely determines the success of
the automation. Automating unstable features or features that are undergoing
changes should be avoided.

Testing tools can help automate tasks such as product installation, test data
creation, GUI interaction, problem detection (consider parsing or polling agents
equipped with oracles), defect logging, etc., without necessarily automating tests in
an end-to-end fashion.

DESIGN AND ARCHITECTURE FOR AUTOMATION

In test automation, the code involved in testing is not only test logic, but also a good
deal of supporting code, such as URL concatenation, HTML/XML parsing, and UI
access. Test logic can be buried in this unrelated code, which has nothing to do with
the test logic itself, making test code hard to read and maintain. In this article, a
layered architecture for test automation is presented to solve this problem. In this
layered architecture, the test automation code is divided into three layers: (1) test
cases, focusing on the test logic of the application; (2) the domain layer, modeling
the system under test in domain terms, encapsulating HTTP requests, browser
control, and result-parsing logic, and providing an interface to the test-case layer;
(3) the system under test, on which layer 2 operates directly.
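
A minimal sketch of this layering in Python (the class names and the fake HTTP
client are illustrative): the test case speaks only in domain terms, while URL
building and parsing stay in the layer below.

import unittest

class CustomerSearch:
    """Domain layer: models the system under test in domain terms and hides
    URL building, HTTP calls, and response parsing from the test cases."""

    def __init__(self, client):
        self.client = client                      # injected HTTP stand-in

    def search(self, keyword):
        url = f"/customers?query={keyword}"       # supporting code, not test logic
        response = self.client.get(url)
        return response["names"]                  # parsed into domain terms

class FakeClient:
    """Stand-in for a real HTTP library so the sketch runs offline."""
    def get(self, url):
        return {"names": ["Alice", "Alicia"]}

class SearchTest(unittest.TestCase):
    def test_search_by_keyword(self):
        # Test-case layer: pure test logic, written in domain vocabulary.
        names = CustomerSearch(FakeClient()).search("Ali")
        self.assertIn("Alice", names)

if __name__ == "__main__":
    unittest.main()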

While some tasks, like exploratory testing, require intuition and smarts, others,
such as regression tests, are repetitive and laborious. As more features are added
to the system, the time consumed by regression tests grows longer and longer.

Test automation solves this problem. With test automation, repetitive work like
regression testing is done by the computer, and test cases are translated into
computer programs, so that QA can be freed from the burden of routine test
repetition to focus on more creative work.

As noted above, the code involved in test automation includes a good deal of
supporting code beyond the test logic itself. For example, to test a web service that
carries out operations such as searching by different keywords and returning an
XML document containing certain information (such as customer information), the
test automation code must:

1.Assemble a URL based on the operation under test,

2.Send out an HTTP request with an HTTP library,

3.Interpret the response sent back from the web server and parse the XML,

4.Compare the results returned to the expected results.
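
These four steps map directly onto code. The sketch below uses only the Python
standard library, with the HTTP call replaced by a canned response so the example
is self-contained (the endpoint and XML schema are hypothetical):

import xml.etree.ElementTree as ET

# Step 1: assemble a URL based on the operation under test.
base = "https://api.example.com/search"          # hypothetical endpoint
url = f"{base}?keyword=smith"

# Step 2: send the HTTP request. A real test would use an HTTP library,
# e.g. urllib.request.urlopen(url); here we substitute a canned response.
response_body = "<customers><customer><name>Smith</name></customer></customers>"

# Step 3: interpret the response and parse the XML.
root = ET.fromstring(response_body)
names = [c.findtext("name") for c in root.findall("customer")]

# Step 4: compare the results returned to the expected results.
assert names == ["Smith"], f"unexpected result: {names}"
print("search-by-keyword test passed for", url)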

Objects are often interconnected. It makes for a pretty object model diagram, but
class re-use is all but impossible because of the interdependencies between objects.
This results in slow builds, cascading side effects when code is changed, and
complex testing.

In many designs, objects insufficiently abstract the concepts that they are trying to
represent. This introduces rigidity into an application. The less abstract an object is,
the more difficult it becomes to change its behavior. Applications become inflexible
to requirement changes and new technologies, and become unresponsive to market
changes.

Several problems motivate a framework-based approach:

1.Design, development and testing of large-scale applications is too expensive;

2.The longer a development effort takes, the more likely the requirements will have
changed, either as a result of customer requirement changes, technology changes,
or competition;

3.Design and implementation methodologies vary greatly across a multi-team
project, resulting in code-sharing and maintenance problems. A framework that
promotes a consistent design and implementation philosophy reduces this difficulty;

A brief description of three projects (of many) that currently use the AAL as
implemented in C++/MFC:

1.Automation of Satellite Design: A large-scale, multiple-developer, three-year
project to automate the design of communication relay satellites.

2.Boat Yard Workflow Management: A medium-scale, single-developer, multi-year
project to manage the workflow of boat yard operations: work orders, job tracking,
inventory, payroll and customer billing. The AAL framework allows the support of
custom data representation and workflow processes for different boat yards, while
retaining the same code base.

3.Club Management: A medium-scale, single-developer, multi-year project
managing the income of adult entertainment clubs and entertainers. This system
utilizes complex scheduling, threading and inter-process communication.

GENERIC REQUIREMENTS FOR TEST TOOL FRAMEWORK

We acknowledge the contributions of Dora Lam and Rabi Achrafi to this list. Please
note that the list does not imply a recommendation, nor does omission imply that
we disapprove of the tool. We urge you to carefully consider your requirements for
a tool before looking at any of them. Some of these companies have demonstration
versions available.

Pete Jones of Phonak AG suggests that when reviewing tools, you give vendors
five minutes to sell their tool. That is, the vendor's site should be able to tell you in
that amount of time what the tool can do for you. If the vendor has done his
requirements homework, then he should know that your main interest, and possibly
your only interest at this time, is whether the tool will work for you. You are not
interested in a laborious explanation of every button and menu choice the tool
possesses, nor are you interested in glorious promises, and you most likely do not
want a salesman to call. The way the vendor addresses your review is a guide to
how well the tool will work for you.

Version 4.0 of Accept's industry-leading solution for enterprise product planning
extends Accept 360°'s functionality into Design, as well as adding exciting new
features in Strategic Planning, reports and analysis, and core platform capabilities.
Major highlights include:

•New module, Accept Design, for capturing and defining use cases

•Comprehensive new reporting capabilities for defect tracking using the Accept
Defects module

•Support for a library of product lifecycle models with which to plan and execute
product projects

•New web support center providing case management and knowledge sharing
capabilities to customers and partners

Accompa is an affordable, on-demand (SaaS) requirements management tool. It
simplifies the tasks of gathering, tracking, and managing requirements. Key
features include:

•Capture and define requirements, features and use cases


•Customize the tool right from your web browser to fit your organization's needs

•Easily create custom web forms to capture requirements from internal and
external stakeholders over the web

•Define and track relationships and dependencies; automatically track complete
change history

•Add and track an unlimited number of attachments

•Collaborate with your team using built-in discussion threads, social tags and
automatic alerts

•Use systematic methodology to prioritize your requirements

•30-day, fully-functional free trial is now available

Analyst Pro is a tool for requirements management, tracing and analysis. It is an
affordable, powerful, and easy-to-use specification, requirements tracking and
documentation tool. With Analyst Pro, requirements can be traced throughout the
design and test process. It also provides integrated configuration management to
simplify your development process. Analyst Pro is a very versatile tool that can be
used for many software, systems and product development projects. It can be used
with any process, such as Agile, Incremental, Waterfall, Spiral, etc. This easy-to-
install and easy-to-configure tool allows effective collaboration of dispersed teams.
Analyst Pro incorporates the following features:

•Requirements Specification and Tracking - Analyst Pro combines the power of a
word processor and a spreadsheet to enable you to control and manage projects.

•Repository (for Non-Requirements Objects) - Analyst Pro provides a repository for
non-requirements objects. UML and other models created by external tools can be
saved to the repository for sharing, collaboration, and configuration management,
and for linking them to requirements and specifications.

•Traceability - Analyst Pro features a unique and powerful traceability matrix for
impact analysis and effective testing of your product.

•Configuration Management - Analyst Pro simplifies the development process by
providing integrated configuration management for project artifacts. Analyst Pro
allows you to baseline and lock your project artifacts.

•Importing and Exporting - Analyst Pro provides robust import/export features for
interfacing with your external systems, and for offline editing and reconciliation of
requirements.
•Other Features include: Reusability of project settings and specification templates
using project templates; Control of access by creating user groups with different
privileges; Ability to assign a requirement or other task to team members and
review their progress; Built-in diagramming editors for creating project diagrams;
Easy generation of system documentation and change history reports, baseline
comparison, traceability reports, status reports, etc.

ARCWAY Cockpit is a tool for managing requirements. It supports ARCWAY's
concept of Visual Requirements Engineering (VRE). In VRE, requirements are linked
to visual high-level models (called landscapes) of the system under design.

Requirements specified in ARCWAY Cockpit can be imported from and exported to
MS Excel. A fully customizable MS Word, HTML and DocBook report interface allows
for ad hoc reports of specific requirements or complete specification documents.

BambooRM allows project managers and teams to collaborate on projects more
easily with one repository of project requirements.

BambooRM simplifies requirements management by integrating use cases, business
and functional requirements -- all in one place. Managers, business analysts,
consultants, and team leaders can view, manage and trace project requirements
online in their browsers, including:

•Creating business requirements for any product or service.

•Defining and associating sub-level business requirements.

•Associating use cases with multiple associated business requirements.

•Associating functional requirements with each business requirement.

•Prioritizing business and functional requirements.

•Tracing the project requirements.

•Creating and printing requirements with easy-to-read summaries.

BambooRM features include:

1.Task Management -- allowing users to see allocated tasks, including those tasks
assigned to themselves.

2.Versioning -- to allow users to maintain different versions of a project.

3.Outline View -- a hierarchical view of the use cases, business and functional
requirements, along with a representation of the parent-child relationship between
them.

4.Shared Documents -- between and among team members.


5.Traceability -- to document the life cycle of both requirements and functions.

6.Move Release to Release -- enabling the migration of data from one release to
another release.

7.Download PDF/Word -- supporting the creation of documents in both PDF and
Microsoft® Word format.

8.Workflow -- designed for use case, business, and functional requirement reviews.

9.Bulk Import Wizard -- allowing users to import data from various data sources.

10.Discussion Items -- to enable collaborative discussions among team members.

TEST TOOL SELECTION

What are the testing requirements of the application/system being tested: does it
demand functional testing, performance testing, or both, or more? The choice of
tools depends initially on that answer, then other factors come into play: budget,
knowledge of staff, preferences for in-house versus third-party test tool
development, etc.

Once you decide on the scope and level of test automation, you can start looking for
automation solutions. Here are some guidelines:

• “Scriptless” representation of automated tests: testers should be able to visualize
each step of the business process and view and edit test cases intuitively

• Integrated data tables: testers should have the ability to pump large volumes of
data through the system quickly, manipulate the data sets, perform calculations,
and quickly create hundreds of test iterations and permutations with minimal effort

• Clear, concise reporting: reports should provide specifics about where application
failures occurred and what test data was used; provide application screen shots for
every step to highlight any discrepancies; and provide detailed explanations of each
verification point's pass or failure

• Integration with requirements coverage and defect management tools

One more thing to consider: getting good results from using test tools still depends
on having effective and consistent QA processes in place.

The requirement is based on many factors, but obviously it has to be raised by the
QA head and approved by management for procurement. The justification of the
requirement is based on its need, usefulness and utilization. The defining factors
could be:

Average number of running/upcoming projects at any moment of time

Number of testers in the team

Testing Strategy, Plan and Scope

A higher number of projects and a smaller number of testers, with a definite
strategy, plan, and scope in place, increase the likelihood that a tool is needed.

But keep in mind that one tool does not fit all testing needs.

Similarly, one tool does not fit all organizations/projects.

General - Tools

•4Test Map: information on Segue's 4Test language

•Automated Test Tool Comparison and another comparison

•Bug tracking and defect tracking resource: quality assurance/testing links, with
bug tracking and defect tracking tools, articles, sites, books and forums

•Call Center, Bug Tracking and Project Management Tools for Linux

•Extreme Programming Test Tool Downloads

•HTML Conformance Testing info from W3C

•Java GUI testing with JUnit.

•Load Test Tools Evaluation. Opinions expressed are those of the authors and not of
ApTest or its employees.

•Microsoft Web Server Stress Tools: with these tools you can stress test your Web
server to see how it reacts when several hundred users access your application at
peak times.

•Overview of Load Test Tools. Reviews the pros and cons of several commercial and
open source tools.

•Another Price Comparison of bug tracking tools.

•Problem Management Tools Summary

•Segue and Mercury™ products

•Technical issues for bug tracking tools.

•WinRunner Tips: bits of advice on WinRunner.

TESTING IN OBJECT ORIENTED SYSTEMS


What is mainly required in the OO life cycle is that there should be iterations
between phases. This is very important. One such model that explains the
importance of these iterations is the Fountain Model.

This Fountain Model was proposed by Henderson-Sellers and Edwards in 1990.


Various phases in the Fountain Model are as follows:

Requirements Phase

Object-Oriented Analysis Phase

Object-Design Phase

Implementation Phase

Implementation and Integration Phase

Operations Mode

Maintenance

Various phases in the Fountain Model overlap:

Requirements and Object Oriented Analysis Phase

Object-Oriented Design, Implementation and Integration Phase

Operations Mode and Maintenance Phases.

Also, each phase will have its own iterations. Experience with object-oriented
application development has shown that software maintenance effort drops
considerably. The software is easy to manage, and adding new functionality or
removing old functionality is well within control. This can be achieved without
disturbing the overall functionality or other objects, which helps reduce the time
spent in software maintenance.

My aim is not to provide information on object-oriented application development,
but to provide information and techniques on how to go about testing object-
oriented systems.

Testing of object-oriented systems is the same as usual testing when you follow
conventional black-box testing (of course, there will be differences depending on
your test strategy).

But otherwise, while testing object-oriented systems, we tend to adopt different
test strategies. This is because the development cycle is different from the usual
cycle(s). Why? This is an interesting question. Why is testing of object-oriented
systems different? First, let us cover the basics of object-oriented programming.

The Object Oriented Methodology

Let us look at the Object Modeling Technique (OMT) methodology:

Analysis: Starting from the statement of the problem, the analyst builds a model of
the real-world situation showing its important properties.

System Design: The system designer makes high-level decisions about the overall
architecture. During system design, the target system is organized into subsystems
based on both the analysis structure and the proposed architecture.

Object Design: The object designer builds a design model based on the analysis
model but containing implementation details. The designer adds details to the
design model in accordance with the strategy established during system design.

Implementation: The object classes and relationships developed during object
design are finally translated into a particular programming language, database, or
hardware implementation.

The Three Models

The OMT methodology uses three kinds of models to describe the Object Oriented
System:

1. Object Model, describing the objects in the system and their relationships, which
are static structures.

2. Dynamic Model, describing the interactions among objects in the system, which
change over time.

3. Functional Model, describing the data transformations of the system.

Object Oriented Themes

1. Abstraction.

2. Encapsulation.

3. Polymorphism.

The object-oriented language features of inheritance and polymorphism introduce
new abilities for designers and programmers, but also new problems for testers. As
of this writing, it is still not clear how best to test these language features or what
criteria are appropriate. This text introduces the current state of knowledge; the
interested reader is encouraged to keep up with the literature for continuing results
and techniques for testing OO software. The bibliographic notes give some current
references, and further ideas are discussed in Chapter 7. The most obvious graph to
create for testing these features (which we collectively call “the OO language
features”) is the inheritance hierarchy. Figure 2.23 represents a small inheritance
hierarchy with four classes. Classes C and D inherit from B, and B in turn inherits
from A.

The coverage criteria from Section 2.2.1 can be applied to inheritance hierarchies in
ways that are superficially simple, but have some subtle problems. In OO
programming, classes are not directly tested because they are not executable. In
fact, the edges in the inheritance hierarchy do not represent execution flow at all,
but rather inheritance dependencies. To apply any type of coverage, we first need a
model for what coverage means. The first step is to require that objects be
instantiated for some or all of the classes. Figure 2.24 shows the inheritance
hierarchy from Figure 2.23 with one object instantiated for each class.

The most obvious interpretation of node coverage for this graph is to require that at
least one object be created for each class. However, this seems weak because it
says nothing about execution. The logical extension is to require that for each
object of each class, the call graph must be covered according to the call coverage
criterion above. Thus, the OO call coverage criterion can be called an “aggregation
criterion” because it requires call coverage to be applied on at least one object for
each class.

An extension of this is the all object call criterion, which requires that call coverage
be satisfied for every object that is instantiated for every class.
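
A sketch of a four-class hierarchy like the one described, with the aggregation
criterion applied: one object is instantiated per class and the same call is driven
through each, exercising whichever inherited or overriding version polymorphism
dispatches to (class and method names are illustrative):

class A:
    def describe(self):
        return "A"

class B(A):
    def describe(self):
        return "B overrides " + super().describe()

class C(B):
    pass                      # inherits describe() from B

class D(B):
    def describe(self):
        return "D overrides " + super().describe()

# Aggregation criterion: at least one instantiated object for each class,
# with the same call applied to every object, so that the inherited and
# overriding versions are all executed.
for obj in (A(), B(), C(), D()):
    print(type(obj).__name__, "->", obj.describe())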
