
What is Testing?

A defect can be caused by a flaw in the application software or by a flaw in the application specification. For
example, unexpected (incorrect) results can come from errors made during the construction phase, or from an
algorithm incorrectly defined in the specification. Testing is commonly assumed to mean executing software
and finding errors. This type of testing is known as dynamic testing, and while valid, it is not the most effective
way of testing. Static testing, the review, inspection, and validation of development requirements, is the most
effective and cost-efficient way of testing. A structured approach to testing should use both dynamic and static
testing techniques.

Testing and Quality Assurance

What is the relationship between testing and Software Quality Assurance (SQA)? An application that fully meets
its requirements can be said to exhibit quality. Quality is not based on a subjective assessment but rather on
a clearly demonstrable, and measurable, basis. Quality Assurance and Quality Control are not the same.
Quality Control is a process directed at validating that a specific deliverable meets standards, is error free, and
is the best deliverable that can be produced. It is a responsibility internal to the team. QA, on the other hand, is
a review with a goal of improving the process as well as the deliverable, and it is often an external process. QA
is an effective approach to producing a high-quality product. One aspect is the process of objectively reviewing
project deliverables and the processes that produce them (including testing) to identify defects, and then
making recommendations for improvement based on the reviews. The end result is assurance that the
system and application are of high quality, and that the process is working. The achievement of quality goals is
well within reach when organizational strategies are used in the testing process. From the client's perspective,
an application's quality is high if it meets their expectations.

What is the difference between a bug, a defect, and an error?


According to the British standard BS 7925-1, "bug" is a generic term for a fault, failure, error, or human action
that produces an incorrect result.

Error: a mistake made in the program code.

Bug: a deviation from the expected result.
Defect: a problem in an algorithm that leads to failure.
Failure: the result of any of the above.

Compare those to these arbitrary definitions:

Error: when we get the wrong output, e.g. a syntax error or a logical error.
Fault: when everything is correct but we are not able to get a result.
Failure: when we are not able to insert any input.

How to Write a Fully Effective Bug Report

To write a fully effective report you must:


- Explain how to reproduce the problem.
- Analyze the error so you can describe it in a minimum number of steps.
- Write a report that is complete and easy to understand.

Write bug reports immediately; the longer you wait between finding the problem and reporting it, the more
likely it is the description will be incomplete, the problem not reproducible, or simply forgotten.

Writing a one-line report summary (the bug report's title) is an art. You must master it. Summaries help everyone
quickly review outstanding problems and find individual reports. The summary line is the most frequently and
carefully read part of the report. When a summary makes a problem sound less severe than it is, managers are
more likely to defer it. Alternatively, if your summaries make problems sound more severe than they are, you
will gain a reputation for alarmism. Don't use the same summary for two different reports, even if they are
similar. The summary line should describe only the problem, not the replication steps. Don't run the summary
into the description (Steps to reproduce) as they will usually be printed independently of each other in reports.

Ideally you should be able to write this clearly enough for a developer to reproduce and fix the problem, and
another QA engineer to verify the fix without them having to go back to you, the author, for more information.
It is much better to over-communicate in this field than to say too little. Of course it is ideal if the problem is
reproducible and you can write down those steps. But if you can't reproduce a bug, and try and try and still
can't reproduce it, admit it and write the report anyway. A good programmer can often track down an
irreproducible problem from a careful description. For a good discussion on analyzing problems and making
them reproducible, see Chapter 5 of Testing Computer Software by Cem Kaner.

The most controversial item in a bug report is often the bug impact: Low, Medium, High, or Urgent. The report
should show the priority that you, the bug submitter, believe to be appropriate, and that assessment should not
be changed by others.

Software Testing: 10 Rules


1. Test early and test often.

2. Integrate the application development and testing life cycles. You'll get better results and you won't have
to mediate between two armed camps in your IT shop.

3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.

4. Develop a comprehensive test plan; it forms the basis for the testing methodology.

5. Use both static and dynamic testing.

6. Define your expected results.

7. Understand the business reason behind the application. You'll write a better application and better
testing scripts.

8. Use multiple levels and types of testing (regression, systems, integration, stress and load).

9. Review and inspect the work; it will lower costs.

10. Don't let your programmers check their own work; they'll miss their own errors.

Bug Impacts
Low impact
This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal
use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive
way.

Medium impact
This is a problem that: a) Affects a more isolated piece of functionality. b) Occurs only at certain boundary
conditions. c) Has a workaround (where "don't do that" might be an acceptable answer to the user). d) Occurs
only at one or two customers; or e) Is very intermittent.

High impact
This should be used only for serious problems affecting many sites, with no workaround. Frequent or
reproducible crashes/core dumps/GPFs would fall into this category, as would major functionality that does not
work.

Urgent impact
This should be reserved for only the most catastrophic problems: data corruption, a complete inability to use
the product at almost any site, and so on. For a released product, an urgent bug implies that shipping of the
product should stop immediately until the problem is resolved.

Bug Report Components


Report number: a unique number given to the bug.
Program / module being tested: the name of the program or module that is being tested.
Version & release number: the version of the product that you are testing.
Problem summary: a one-line data entry field stating precisely what the problem is.
Report type: describes the type of problem found; for example, it could be a software or a hardware bug.
Severity: normally, how you view the bug. There are various levels of severity: Low - Medium - High - Urgent.
Environment: the environment in which the bug is found.
Detailed description: a detailed description of the bug that was found.
How to reproduce: a detailed description of how to reproduce the bug.
Reported by: the name of the person who writes the report.
Assigned to developer: the name of the developer who is assigned to fix the bug.

Status:
Open: the status of the bug when it is entered.
Fixed / feedback: the status of the bug when it is fixed.
Closed: the status of the bug when the fix is verified.
(A bug can only be closed by a QA person. Usually, the problem is closed by the QA manager.)
Deferred: the status of the bug when it is postponed.
User error: the status of the bug when the user made an error.
Not a bug: the status of the bug when it turns out not to be a bug.

Priority: assigned by the project manager, who asks the programmers to fix bugs in priority order.
Resolution: defines the current status of the problem. There are four types of resolution: deferred, not a
problem, will not fix, and as designed.
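
For illustration, the components above map naturally onto a simple record structure. The following is a
minimal sketch in Python; the field names and enum values mirror the list but are otherwise assumptions, not
the schema of any particular defect-tracking tool.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Severity(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"
    URGENT = "Urgent"


class Status(Enum):
    OPEN = "Open"
    FIXED = "Fixed / feedback"
    CLOSED = "Closed"          # may be set only by a QA person
    DEFERRED = "Deferred"
    USER_ERROR = "User error"
    NOT_A_BUG = "Not a bug"


@dataclass
class BugReport:
    report_number: int          # unique number given to the bug
    module: str                 # program / module being tested
    version: str                # version & release number
    summary: str                # one precise line; kept separate from the steps
    report_type: str            # e.g. "software" or "hardware"
    severity: Severity
    environment: str
    description: str            # detailed description of the bug
    how_to_reproduce: str
    reported_by: str
    assigned_to: str
    status: Status = Status.OPEN
    priority: Optional[int] = None    # assigned later by the project manager
    resolution: Optional[str] = None  # deferred / not a problem / will not fix / as designed
```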

Defects Severity and Priority

Question: One question on the defects that we raise. We are supposed to give a
severity and a priority to each one. The severity can be Major, Minor, or
Trivial, and the priority can be 1, 2, or 3 (with 1 being a high-priority
defect).
My question is: why do we need two parameters, severity and priority, for a defect? Can't we do with only one?

Answer (1): It depends entirely on the size of the company. Severity tells us how bad the defect is. Priority tells
us how soon it is desired to fix the problem.
In some companies, the defect reporter sets the severity and the triage team or product management sets the
priority. In a small company or project (or product), particularly where there aren't many defects to track, you
may find you don't really need both, since a high-severity defect is also a high-priority defect. But in a large
company, and particularly where there are many defects, using both is a form of risk management.

Major would be 1 and Trivial would be 3. You can add or multiply the two values together (there is only a
small difference in the outcome) and then use the resulting risk value to determine how you should address the
problem. The lower values must be addressed first; the higher values can wait. A sketch of this calculation
follows.
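
As a rough sketch of this scoring scheme (assuming, as above, Major = 1 through Trivial = 3 and priority 1
through 3; the function name is illustrative):

```python
# Combine severity and priority into a single risk value; lower
# values are addressed first. Assumes Major=1, Minor=2, Trivial=3.
SEVERITY_SCORE = {"Major": 1, "Minor": 2, "Trivial": 3}

def risk_value(severity: str, priority: int, multiply: bool = False) -> int:
    score = SEVERITY_SCORE[severity]
    return score * priority if multiply else score + priority

print(risk_value("Major", 1))                   # 2 -> address first
print(risk_value("Trivial", 3, multiply=True))  # 9 -> can wait
```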

I discovered a new method for risk assessment. It is based on a military standard, MIL-STD-882. If you want a
copy of the current version, search for MIL-STD-882D using Google or Yahoo! The main area of interest is
section A.4.4.3 and its children, where they describe the assessment of mishap risk.
They use a four-point severity rating (rather than three): Catastrophic, Critical, Marginal, Negligible. They then
use a five-point (rather than three-point) probability rating: Frequent, Probable, Occasional, Remote, Improbable.
Then, rather than using a mathematical calculation to determine a risk level, they use a predefined chart.
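
A predefined chart of this kind is easy to encode as a lookup table. The sketch below follows the MIL-STD-882
severity and probability categories, but the risk level assigned to each cell is an illustrative placeholder rather
than a quotation from the standard:

```python
# Mishap risk as a chart lookup rather than a calculation.
# Cell values are placeholders; consult MIL-STD-882D section A.4.4.3
# for the actual assignments.
RISK_CHART = {
    "Frequent":   {"Catastrophic": "High",    "Critical": "High",
                   "Marginal": "Serious",     "Negligible": "Medium"},
    "Probable":   {"Catastrophic": "High",    "Critical": "High",
                   "Marginal": "Serious",     "Negligible": "Medium"},
    "Occasional": {"Catastrophic": "High",    "Critical": "Serious",
                   "Marginal": "Medium",      "Negligible": "Low"},
    "Remote":     {"Catastrophic": "Serious", "Critical": "Medium",
                   "Marginal": "Medium",      "Negligible": "Low"},
    "Improbable": {"Catastrophic": "Medium",  "Critical": "Medium",
                   "Marginal": "Low",         "Negligible": "Low"},
}

def mishap_risk(probability: str, severity: str) -> str:
    return RISK_CHART[probability][severity]

print(mishap_risk("Occasional", "Critical"))  # "Serious"
```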

Blocker: This bug prevents developers from testing or developing the software.
Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy workaround.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for new feature or enhancement.

Answer (2): Severity levels can be defined as follows. S1 - Urgent/Showstopper. For example, a system crash or
an error message forcing the window to close. The tester's ability to operate the system is either totally (system
down) or almost totally affected. A major area of the user's system is affected by the incident, and it is
significant to business processes. S2 - Medium/Workaround. For example, a problem exists in something the
specs require, but the tester can go on with testing. The incident affects an area of functionality, but there is a
workaround which negates the impact to business processes. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.
S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur
in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any
substantive way. These are incidents that are cosmetic in nature and have no or very low impact on business
processes.

Documentation Tips
Important areas of "black box" testing to cover when a QA engineer writes test plans:
* Acceptance test (acceptance of a build into testing)
* Data flow and integrity
* Configuration and compatibility
* Stress test
* Regressions
* Performance
* Potential bugs
* Beta tests
* Release tests
* Utility
* User interfaces

It is a good practice to have developers review test cases after they have been written. Knowing the design of
the specific feature, or of the product as a whole, developers can give you valuable feedback on what is missing
from the test cases and should be added, what areas need more attention while testing, and even how to apply
it. Test cases should be updated based on the feedback received.

How to create a Test Plan without docs?


1. Try to break up your huge application into modules that are functionally independent.
2. Within each module, start with the functions one by one.
3. For a simple function, write all possible test cases that need to be tested, using the application itself since
there are no specs.
4. In this way you can complete one function and, in turn, the whole application.
5. To prepare the test cases or plan, make use of an Excel sheet, where each sheet defines one function within
the module. This is the best way to organize the test cases; a minimal sketch of this layout follows.
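
A minimal sketch of this "one sheet per function" layout, using plain CSV files in place of Excel worksheets;
the module and function names are hypothetical examples:

```python
# One "sheet" (CSV file) per function within each module, as suggested
# above. Module and function names are made-up examples.
import csv
from pathlib import Path

MODULES = {
    "login":  ["authenticate", "reset_password"],
    "search": ["basic_search", "advanced_search"],
}

plan_dir = Path("test_plan")
plan_dir.mkdir(exist_ok=True)

for module, functions in MODULES.items():
    for function in functions:
        path = plan_dir / f"{module}_{function}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Case ID", "Priority", "Steps",
                             "Expected Result", "Actual Result", "Status"])
```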

Test Plan Sample


1. Introduction
Description of this Document
This document is a Test Plan for the -Project name-, produced by Quality Assurance. It describes the testing
strategy and the approach to testing that QA will use to validate the quality of this product prior to release. It
also identifies the various resources required for the successful completion of this project.

The focus of the -Project name- is to support those new features that will allow easier development, deployment
and maintenance of solutions built upon the -Project name-. Those features include:
[List of the features]
This release of the -Project name- will also include legacy bug fixes and the redesign or inclusion of
functionality missing from the previous release:
[List of the features]
The following implementations were made:
[List and description of implementations made]
Related Documents
[List of related documents such as: Functional Specifications, Design Specifications]
Schedule and Milestones
[Schedule information QA testing estimates]

2. Resource Requirements
Hardware
[List of hardware requirements]
Software
[List of software requirements: primary and secondary OS]
Test Tools
Apart from manual tests, the following tools will be used:
-

Staffing
Responsibilities
[List of QA team members and their responsibilities]
Training
[List of required training]

3. Features To Be Tested / Test Approach


[List of the features to be tested]
Media Verification
[The process will include installing all possible products from the media and subjecting them to basic sanity
testing.]

4. Features Not To Be Tested


[List of the features not to be tested]

5. Test Deliverables
[List of the test cases/matrices or their location]
[List of the features to be automated]

6. Dependencies/Risks
Dependencies
Risks

7. Milestone Criteria

Key QA Documents
I. PRAD

The Product Requirement Analysis Document is prepared and reviewed by marketing, sales, and
technical product managers. This document defines the requirements for the product, the "What". It is
used by the developer to build his/her functional specification and used by QA as a reference for the
first draft of the Test Strategy.

II. Functional Specification

The functional specification is the "How" of the product. The functional specification identifies how new
features will be implemented. This document includes items such as what database tables a particular
search will query. This document is critical to QA because it is used to build the Test Plan.

QA is often involved in reviewing the functional specification for clarity and helping to define the
business rules.

III. Test Strategy

The Test Strategy is the first document QA should prepare for any project. This is a living document that
should be maintained/updated throughout the project. The first draft should be completed upon
approval of the PRAD and sent to the developer and technical product manager for review.

The Test Strategy is a high-level document that details the approach QA will follow in testing the given
product. This document can vary based on the project, but all strategies should include the following
criteria:
· Project Overview - What the project is.
· Project Scope - The core components of the product to be tested.
· Testing - This section defines the test methodology to be used, the types of testing to be executed
(GUI, Functional, etc.), how testing will be prioritized, testing that will and will not be done, and the
associated risks. This section should also outline the system configurations that will be tested and the
tester assignments for the project.
· Completion Criteria - The objective criteria upon which the team will decide the product is
ready for release.
· Schedule - This should define the schedule for the project and include completion dates for the PRAD,
Functional Spec, and Test Strategy etc. The schedule section should include build delivery dates,
release dates and the dates for the Readiness Review, QA Process Review, and Release Board
Meetings.
· Materials Consulted - Identify the documents used to prepare the test strategy
· Test Setup - This section should identify all hardware/software, personnel pre-requisites for testing.
This section should also identify any areas that will not be tested (such as 3rd party application
compatibility.)

IV. Test Matrix (Test Plan)

The Test Matrix is the Excel template that identifies the test types (GUI, Functional etc.), the test suites
within each type, and the test categories to be tested. This matrix also prioritizes test categories and
provides reporting on test coverage.
· Test Summary report
· Test Suite Risk Coverage report

Upon completion of the functional specification and test strategy, QA begins building the master test
matrix. This is a living document and can change over the course of the project as testers create new
test categories or remove non-relevant areas. Ideally, a master matrix need only be adjusted to include
new feature areas or enhancements from release to release on a given product line.

V. Test Cases

As testers build the Master Matrix, they also build their individual test cases. These are the specific
functions testers must verify within each test category to qualify the feature. A test case is identified by
ID number and prioritized. Each test case has the following criteria:
· Purpose - Reason for the test case
· Steps - A logical sequence of steps the tester must follow to execute the test case
· Expected Results - The expected result of the test case
· Actual Result - What actually happened when the test case was executed
· Status - Identifies whether the test case was passed, failed, blocked or skipped.
· Pass - Actual result matched expected result
· Failed - Bug discovered that represents a failure of the feature
· Blocked - Tester could not execute the test case because of a bug
· Skipped - Test case was not executed this round
· Bug ID - If the test case was failed, identify the bug number of the resulting bug.
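
The criteria above translate directly into a record per test case. A hedged sketch in Python follows; the field
names mirror the list, while everything else is an assumption for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class CaseStatus(Enum):
    PASS = "Pass"        # actual result matched expected result
    FAILED = "Failed"    # bug discovered; a failure of the feature
    BLOCKED = "Blocked"  # could not execute because of a bug
    SKIPPED = "Skipped"  # not executed this round


@dataclass
class TestCase:
    case_id: str                      # ID number
    priority: int
    purpose: str                      # reason for the test case
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: CaseStatus = CaseStatus.SKIPPED
    bug_id: Optional[str] = None      # required when status is FAILED
```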

VI. Test Results by Build

Once QA begins testing, it is incumbent upon them to provide results on a consistent basis to
developers and the technical product manager. This is done in two ways: A completed Test Matrix for
each build and a Results Summary document.

For each test cycle, testers should fill in a copy of the project's Master Matrix. This will create the
associated Test Coverage reports automatically (Test Coverage by Type and Test Coverage by
Risk/Priority). This should be posted in a place where the necessary individuals can access the information.
Since the full Matrix is large and not easily read, it is also recommended that you create a short Results
Summary that highlights key information. A Results Summary should include the following:
· Build Number
· Database Version Number
· Install Paths (If applicable)
· Testers
· Scheduled Build Delivery Date
· Actual Build Delivery Date
· Test Start Date
· Scope - What type of testing was planned for this build? For example, was it a partial build? A
full-regression build? Scope should identify areas tested and areas not tested.
· Issues - This section should identify any problems that hampered testing, represent a trend toward a
specific problem area, or are causing the project to slip. For example, in this section you would note if
the build was delivered late and why and what its impact was on testing.
· Statistics - In this section, you can note things such as the number of bugs found during the cycle, the
number of bugs closed during the cycle, etc.; these counts can be derived from the completed matrix, as
sketched below.
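
As an illustration, the statistics section can often be generated mechanically from the filled-in matrix. A
minimal sketch, assuming each executed case records a status string like those listed earlier:

```python
# Tally case outcomes for the Statistics section of a Results Summary.
from collections import Counter

def results_statistics(cases):
    counts = Counter(case["status"] for case in cases)
    return {s: counts.get(s, 0) for s in ("Pass", "Failed", "Blocked", "Skipped")}

cycle = [
    {"id": "TC-1", "status": "Pass"},
    {"id": "TC-2", "status": "Failed"},
    {"id": "TC-3", "status": "Blocked"},
]
print(results_statistics(cycle))
# {'Pass': 1, 'Failed': 1, 'Blocked': 1, 'Skipped': 0}
```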
VII. Release Package

The Release Package is the final document QA prepares. This is the compilation of all previous
documents and a release recommendation. Each release package will vary by team and project, but
they should all include the following information:
· Project Overview - This is a synopsis of the project, its scope, any problems encountered during the
testing cycle and QA's recommendation to release or not release. The overview should be a "response"
to the test strategy and note areas where the strategy was successful, areas where the strategy had to
be revised etc.
The project overview is also the place for QA to call out any suggestions for process improvements in
the next project cycle.
Think of the Test Strategy and the Project Overview as "Project bookends".
· Project PRAD - This is the Product Requirements Analysis Document, which defines what functionality
was approved for inclusion in the project. If there was no PRAD for the project, it should be clearly
noted in the Project Overview. The consequences of an absent PRAD should also be noted.
· Functional Specification - The document that defines how functionality will be implemented. If there
was no functional specification, it should be clearly noted in the Project Overview. The consequences of
an absent Functional Specification should also be noted.
· Test Strategy - The document outlining QA's process for testing the application.
· Results Summaries - The results summaries identify the results of each round of testing. These should
be accompanied in the Release Package by the corresponding reports for Test Coverage by Test Type
and Test Coverage by Risk Type/Priority from the corresponding completed Test Matrix for each build.
In addition, it is recommended that you include the full Test Matrix results from the test cycle designated
as Full Regression.
· Known Issues Document - This document is primarily for Technical Support. This document identifies
workarounds, issues development is aware of but has chosen not to correct, and potential problem
areas for clients.
· Installation Instructions - If your product must be installed at the client site, it is recommended to
include the Installation Guide and any related documentation as part of the release package.
· Open Defects - The list of defects remaining in the defect tracking system with a status of Open.
Technical Support has access to the system, so a report noting the defect ID, the problem area, and
title should be sufficient.
· Deferred Defects - The list of defects remaining in the defect tracking system with a status of deferred.
Deferred means the technical product manager has decided not to address the issue with the current
release.
· Pending Defects - The list of defects remaining in the defect tracking system with a status of pending.
Pending refers to any defect waiting on a decision from a technical product manager before a developer
addresses the problem.
· Fixed Defects - The list of defects waiting for verification by QA.
· Closed Defects - The list of defects verified as fixed by QA during the project cycle.
The Release Package is compiled in anticipation of the Readiness Review meeting. It is reviewed by
the QA Process Manager during the QA Process Review Meeting and is provided to the Release Board
and Technical Support.
· Readiness Review Meeting: The Readiness Review meeting is a team meeting between the
technical product manager, project developers and QA. This is the meeting in which the team assesses
the readiness of the product for release.
This meeting should occur prior to the delivery of the Gold Candidate build. The exact timing will vary by
team and project, but the discussion must be held far enough in advance of the scheduled release date
so that there is sufficient time to warn executive management of a potential delay in the release.
The technical product manager or lead QA may schedule this meeting.

· QA Process Review Meeting: The QA Process Review Meeting is a meeting between the QA Process
Manager and the QA staff on the given project. The intent of this meeting is to review how well, or how
poorly, the process was followed during the project cycle.
This is the opportunity for QA to discuss any problems encountered during the cycle that impacted their
ability to test effectively. It is also the opportunity to review the process as a whole and discuss areas
for improvement.
After this meeting, the QA Process Manager will give a recommendation as to whether enough of the
process was followed to ensure a quality product and thus allow a release.
This meeting should take place after the Readiness Review meeting. It should be scheduled by the lead
QA on the project.

· Release Board Meeting: This meeting is for the technical product manager and senior executives to
discuss the status of the product and the team's release recommendation. If the results of the
Readiness meeting and QA Process Review meeting are positive, this meeting may be waived.
The technical product manager is responsible for scheduling this meeting.
This meeting is the final check before a product is released.
Due to rapid product development cycles, it is rare that QA receives completed PRADs and Functional
Specifications before they begin working on the Test Strategy, Test Matrix, and Test Cases. This work is
usually done in parallel.
Testers may begin working on the Test Strategy based on partial PRADs or confirmation from the
technical product manager as to what is expected to be in the next release. This is usually enough to
draft out a high-level strategy outlining immediate resource needs, potential problem areas, and a
tentative schedule.
The Test Strategy is then updated once the PRAD is approved, and again when the functional
specifications are complete enough to provide management with a committed schedule. All drafts of the
test strategy should be provided to the technical product manager and it is QA's responsibility to ensure
that information provided in the document (such as potential resource problems) is clearly understood.
If the anticipated release does not represent a new product line, testers can begin the Master Test
Matrix and test cases at the same time the project's PRAD is being finalized. Testers can build and/or
refine test cases for the new functionality as the functional specification is defined. Testers often
contribute to and are expected to be involved in reviewing the functional specification.
The results summary document should be prepared at the end of each test cycle and distributed to
developers and the technical product manager. It is designed to inform interested parties of the
status of testing and any possible impact on the overall project cycle.

The release package is prepared during the last test cycle for the readiness review meeting.
Test Strategy Template

QA Test Strategy: [Product and Version]

[Document Version history in format MM-DD-YYYY]

1.0 PROJECT OVERVIEW

[Brief description of project]

1.1 PROJECT SCOPE

[More detailed description of project detailing functionality to be included]

2.0 MATERIALS CONSULTED

[Identify all documentation used to build the test strategy]

3.0 TESTING

· CRITICAL FOCUS AREAS

[Areas identified by developers as potential problems above and beyond specific feature enhancements
or new functionality already given priority 1 status by QA]

· INSTALLATION:

[Installation paths to be qualified by QA. Not all products require installation testing. However, those that
do often have myriad installation paths. Due to time and resource constraints, QA must prioritize.
Decisions on which installation paths to test should be made in cooperation with the technical product
manager. Paths not slated for testing should also be identified here.]

· GUI

[Define what if any specific GUI testing will be done]

· FUNCTIONAL

[Define the functionality to be tested and how it will be prioritized]

· INTEGRATION

[Define the potential points of integration with other MediaMap products and how they will be prioritized
and tested]
· SECURITY

[Define how security issues will be tested and prioritized]

· PERFORMANCE

[Define what if any performance testing will be done and its priority]

· FAILURE RECOVERY

[Define what if any failure recovery testing will be done and its priority]

3.1 TECHNIQUE

· [Technique used for testing. Automation vs. Manual]

3.2 METHODOLOGY

[Define how testers will go about testing the product. This is where you outline your core strategy.
Include in this section anything from tester assignments to tables showing the operating systems and
browsers the team will qualify. It is also important to identify any testing limitations and risks]

4.0 TEST SET-UP

4.1 TEST PRE-REQUISITES

[Any non-software or hardware related item QA needs to test the product. For example, this section
should identify contact and test account information for 3rd party vendors]

4.2 HARDWARE

QA has the following machines available for testing:

Workstations:
[Include processor, chip, memory, and disk space]

Servers:
[Include processor, chip, memory, and disk space]

Other:
[Identify any other hardware needed such as modems etc.]

4.3 SOFTWARE

[Identify all those software applications QA will qualify with the product and those QA will not qualify. For
example, this is where you would list the browsers to be qualified. It is also important to identify what
will not be qualified (for example, not testing with Windows 2000)]

4.4 PERSONNEL

[Identify which testers are assigned to the project and who will test what. It is also important to identify
who is responsible for the creation of the test strategy, test plan, test cases, release package,
documentation review etc.]

5.0 COMPLETION CRITERIA

[Identify how you will measure whether the product is ready for release. For example, what is the
acceptable level of defects in terms of severity, priority, and volume?]

6.0 SCHEDULE

6.1 Project Schedule


· PRD Review completed by [MM-DD-YYYY] - [STATUS]
· Functional Specification completed [MM-DD-YYYY] - [STATUS]
· Release Date approved by [MM-DD-YYYY] - [STATUS]
· Test Strategy completed by [MM-DD-YYYY] - [STATUS]
· Core Test Plan (functional) completed by [MM-DD-YYYY] - [STATUS]
· Readiness Meeting - [STATUS]
· QA Process Review Meeting - [STATUS]
· Release Board Meeting - [STATUS]
· Release on [MM-DD-YYYY] - [STATUS]

6.2 Build Schedule


· Receive first build on [MM-DD-YYYY] - [STATUS]
· Receive second build on [MM-DD-YYYY] - [STATUS]
· Receive third build on [MM-DD-YYYY] - [STATUS]
· Receive fourth build on [MM-DD-YYYY] - [STATUS]
· Receive Code Freeze Build on [MM-DD-YYYY] - [STATUS]
· Receive Full Regression Build on [MM-DD-YYYY] - [STATUS]
· Receive Gold Candidate Build on [MM-DD-YYYY] - [STATUS]
· Final Release on [MM-DD-YYYY] - [STATUS]

7.0 QA Test Matrix and Test Cases:

What are Test Cases?


Question: What are Test Cases, Test Suites, Test Scripts, and Test Scenarios?

Answer: A test case is a specific set of steps with an expected result, along with various additional pieces of
information. These optional fields are a test case ID, test step or order-of-execution number, related
requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been
automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should
also contain a place for the actual result. These steps can be stored in a word processor document,
spreadsheet, database, or other common repository. In a database system, you may also be able to see past
test results, who generated the results, and the system configuration used to generate those results.

The most common term for a collection of test cases is a test suite. The test suite often also contains more
detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester
identifies the system configuration used during testing. A group of test cases may also contain prerequisite
states or steps, and descriptions of the following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or
even a test scenario. A test plan is the approach that will be used to test the system, not the individual tests.
Most companies that use automated testing refer to the code they use as their test scripts.

A scenario test is a test based on a hypothetical story used to help a person think through a complex problem
or system. It can be as simple as a diagram for a testing environment, or it could be a description written in
prose. The ideal scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c) credible,
(d) complex, and (e) easy to evaluate. Scenarios usually differ from test cases in that test cases are single
steps while scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete
system tests. See
http://www.kaner.com/pdfs/ScenarioIntroVer4.pdf and
http://www.kaner.com/pdfs/ScenarioSTQE.pdf
