
XXX

TEST PLAN
Version 1.0
7/12/2013

VERSION HISTORY

Version #   Implemented By   Revision Date   Approved By   Approval Date   Reason
1.0         ABC              7/12/2014

Table of Contents

VERSION HISTORY
SECTION 1: PURPOSE AND SCOPE
  1.1 Purpose
  1.2 Background
  1.3 Scope
    1.3.1 Unit Testing
    1.3.2 Functional Testing
    1.3.3 Load/Volume Testing
    1.3.4 User Acceptance Testing
    1.3.5 Section 508 Compliance Testing
    1.3.6 Regression Testing
    1.3.7 Operations Acceptance Testing
SECTION 2: REFERENCE DOCUMENTS
SECTION 3: RESOURCE REQUIREMENTS
  3.1 Testing Environment
    3.1.1 Server
    3.1.2 Browser
    3.1.3 Other Software/Hardware
  3.2 Other Testing Components
  3.3 Personnel
  3.4 Staff Training
SECTION 4: ASSUMPTIONS, CONSTRAINTS AND RISKS
  4.1 Testing Assumptions
  4.2 Testing Constraints
  4.3 Testing Risks
SECTION 5.0: TESTING APPROACH
  5.1 Methodology
    5.1.1 Unit Testing
    5.1.2 Functional Testing
    5.1.3 Load/Volume Testing
    5.1.4 User Acceptance Testing
    5.1.5 Section 508 Compliance Testing
    5.1.6 Regression Testing
    5.1.7 Operations Acceptance Testing
  5.2 Test Progression/Order
  5.3 Test Milestones
  5.4 Test Data
    5.4.1 Content and Documents
  5.5 Recording Results
  5.6 Analyzing Results
    5.6.1 Pass/Fail
    5.6.2 Number of Tests
  5.7 Defect and Bug Resolution
    5.7.1 Bug Tracking
    5.7.2 Bug Attributes
  5.8 Entrance Criteria
    5.8.1 Complete
    5.8.2 Unit Testing
    5.8.3 Test Plan and Script
    5.8.4 Human Resources
    5.8.5 Test Environment
  5.9 Exit Criteria
    5.9.1 Requirements
    5.9.2 High-Priority Bugs
    5.9.3 Medium or Low-Priority Bugs
    5.9.4 Test Cases
    5.9.5 User Acceptance Testing
  5.10 Test Deliverables
SECTION 6.0: UNIT TESTING
  6.1.1 Entrance Criteria
  6.1.2 Exit Criteria
  6.1.3 Test Execution
SECTION 7.0: FUNCTIONAL TESTING
  7.1 Items to be Tested/Not Tested
  7.2 Test Approach(es)
  7.3 Regulatory/Mandate Criteria
  7.4 Test Pass/Fail Criteria
  7.5 Test Suspension/Resumption Criteria
SECTION 8.0: LOAD/VOLUME TESTING
  8.1 Test Risks/Issues
  8.2 Items to be Tested/Not Tested
  8.3 Test Approach(es)
  8.4 Regulatory/Mandate Criteria
  8.5 Test Pass/Fail Criteria
  8.6 Test Deliverables
  8.7 Test Suspension/Resumption Criteria
SECTION 9.0: ACCEPTANCE TESTING
  9.1 Test Risks/Issues
  9.2 Items to be Tested/Not Tested
  9.3 Test Approach(es)
  9.4 Regulatory/Mandate Criteria
  9.5 Test Pass/Fail Criteria
  9.6 Test Deliverables
  9.7 Test Suspension/Resumption Criteria
  9.8 Test Environment/Staffing/Training Needs
SECTION 10.0: SECTION 508 COMPLIANCE TESTING
  10.1 Test Risks/Issues
  10.2 Items to be Tested/Not Tested
  10.3 Test Approaches
  10.4 Regulatory/Mandate Criteria
  10.5 Test Pass/Fail Criteria
  10.6 Test Deliverables
  10.7 Test Suspension/Resumption Criteria
  10.8 Test Environment
SECTION 11.0: PROGRESS REPORTING
SECTION 12.0: APPENDICES
  APPENDIX A: TEST PLAN APPROVAL
  APPENDIX B: REFERENCES
  APPENDIX C: KEY TERMS
  APPENDIX D: TEST SUMMARY REPORT
  APPENDIX E: BUG REPORT / DEFECT LOG


Section 1: Purpose and Scope


1.1 Purpose

The purpose of this document is to support the following objectives:

- Describe the test approach for the NICHD Travel Concept System.
- Define the scope of the NICHD Travel Concept testing effort.
- Describe the testing process, test types, test data generation, test execution, testing tools and test reporting.
- Describe the defect management process, test deliverables and other information pertinent to this testing effort.
- List the hardware/software environment in which testing will be conducted.
- List the test entrance and exit criteria.
- Define roles and responsibilities pertaining to test, development and business involvement.
- Identify the assumptions upon which the test activities were planned.
- List the risks for the project and the contingencies that mitigate them.

The intended audience for this document includes the project manager, project team, testing team and NICHD stakeholders.

1.2 Background

The NICHD Travel Concept System will enable senior managers to review travel requests (aka concepts) for planning purposes and approve requests before they are entered into the official travel system. It will provide a centralized, web-based workflow system that enables requestors to enter key information about a proposed trip and submit it for review by the appropriate reviewers, determined by the requestor's organization affiliation.

1.3 Scope

The scope of this plan is limited to the following tasks performed by Woodburn Solution as part of the NICHD Travel Concept effort. Outlined below are the main test types that will be performed for this release. All system test plans and conditions will be developed from the functional specification and the requirements catalogue.

1.3.1 Unit Testing

This testing, performed by the development team, isolates small pieces of testable software in the application and determines whether they behave as expected.

1.3.2 Functional Testing


The objective of this test is to ensure that each element of the application meets the functional
requirements of the business as outlined in the Requirements Document, Design Document and
other functional documents produced during the course of the project, such as issue resolutions
and changes made in response to client feedback.


1.3.3 Load/Volume Testing

Multi-user testing and high data-transfer load testing will verify that an acceptable number of users can work with the system concurrently.

1.3.4 User Acceptance Testing

This test, planned and executed by NICHD stakeholders and Project Management staff in the Information Resources Management Branch (IRMB), ensures that the system operates as expected and that any supporting artifacts, such as forms, reports and printouts, are accurate and suit the intended purposes. It is high-level testing intended to ensure that there are no gaps in functionality.

1.3.5 Section 508 Compliance Testing


This test will ensure that the system is Section 508 compliant.

1.3.6 Regression Testing

Regression testing will be performed after the release of each major build to ensure that the changes have had no impact on previously tested software and that the software remains stable.

1.3.7 Operations Acceptance Testing

This phase of testing will be performed by the NIH installation and support group prior to implementing the live system. The NIH installation and support group will define their own criteria and carry out the tests.

The testing scope ensures that requirements are verified and validated, covering both technical and non-technical requirements. All test artifacts will be stored in MS SharePoint, and the testing team will be responsible for maintaining the Requirements Traceability Matrix (RTM).


Section 2: Reference Documents

The following is a list of all reference documents that support this test plan. Refer to the actual version/release number of each document as stored in MS SharePoint.

Reference Document      Version/Date          Location
Requirement Document    3.3 / July 1, 2013    SharePoint: EPMS
Test Scripts                                  SharePoint: EPMS


Section 3: Resource Requirements


The people, hardware, software, and test tools necessary to complete the test should be
included in the test plan for resource estimation, test build guidance, and historical recording
purposes. It is very important to accurately document the exact hardware and software versions
of the components that will be tested, as even small variations in hardware or software versions
can produce different results with certain test scenarios. This information will provide a valuable
baseline should operational issues occur further down the road.

3.1 Testing Environment

The testing environment comprises the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. The following lists cover the software and hardware platforms that make up the test environment needed to run the tests.

3.1.1 Server

- Microsoft Windows Server 2003
- IIS 6
- SQL

3.1.2 Browser

- IE version 7+ on PC
- Safari version 3+ on Mac
- Firefox version 3+ on Mac and PC
- 1024 x 768 monitor

3.1.3 Other Software/Hardware

- Printer
- Adobe Acrobat Reader
- SnagIt screenshot software

3.2 Other Testing Components


None


3.3 Personnel

The list below identifies personnel roles and their responsibilities in the testing effort.

Project Manager: Monitor and update project plan test activities; facilitate the development of the functional test plan and test cases.

Test Lead: Develop and document the test design; create and document test cases based on the test design; coordinate testing activities; communicate test updates to test coordinators and the project manager.

Testers: Run test cases during designated test periods; document test results in Jira; work with developers in troubleshooting problems; re-test fixes; conduct regression tests; communicate test updates to the test lead.

Database Administrator: Configure the database; help troubleshoot issues relating to interaction between public- and private-facing sites and data transactions.

Security Representative: Conduct a security audit of the application.

User Testers: Conduct user acceptance testing.

3.4 Staff Training

The testing team needs to be provided with user guide documentation on the NICHD Travel Concept workflow and on the overall use of any test management tools.


Section 4: Assumptions, Constraints and Risks

An assumption is a supposition about things that may or may not happen during the testing phase, whereas constraints are limitations that must be worked around to manage the test effort.

4.1 Testing Assumptions

In the course of preparing this test plan, the following assumptions have been made:

- All requirements have been reviewed.
- Software will be delivered on time.
- Required resources from the Business, Development, CM and Database teams are available during test execution.
- All show-stopper bugs will be given immediate attention by the development team.
- All bugs found in a version of the software will be fixed and unit tested by the development team before the next version is released.
- Functional specifications and design documents will be signed off by the stakeholders.

4.2 Testing Constraints

The following constraints were identified during the course of preparing this test plan:

- Issues with an incorrect baseline being deployed.
- Interface connectivity issues.
- Delays in support for test environment setup or technical issue resolution.
- Test environment stability and accessibility.
- Lack of available test data.
- Missing features.

4.3 Testing Risks

i. Incomplete or changing requirements
   Mitigation: All testing will be done against the requirements and design documents. Any new or changed requirements will be logged and considered post-launch for a version 2 of the system.

ii. Introduction of third-party software
   Mitigation: All third-party software for the system has been tested and fully integrated with the product. No new third-party software will be introduced.

iii. Lack of customer involvement
   Mitigation: The customer will be given Web access to the system while it is still undergoing internal testing. Customer stakeholders and representatives will be able to review the software on their own schedule and from their offices.

iv. Not enough time to test
   Mitigation: Prioritize testing to ensure application functionality works for critical components and major processes.

Section 5.0: Testing Approach


The test approach presents an overview of the strategy for developing and executing tests,
communicating the test results as well as managing the incidents and defects identified during
testing.
Once the code is deployed and unit tested, it is packaged and released into a development environment that replicates the production environment. Testers will interact with all components of the system via a browser to ensure that the test environment matches that of the production end users. All bugs will be described, categorized, assigned and tracked through an online bug-tracking system.

5.1 Methodology
5.1.1 Unit Testing
Unit testing is conducted by members of the development team before the code they are working on is integrated into the system code base. Each developer takes the smallest piece of testable software he or she is working on, isolates it from the remainder of the code, and determines whether it behaves as expected. Failures during unit testing are not captured in the bug-tracking software; rather, code is not integrated into the code base until it passes all unit tests.
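For illustration only, since the plan does not prescribe a unit-testing framework (the project's stack is Microsoft-based, so the real tests may use a different toolchain), the following Python sketch shows the shape of such a test; the TravelConcept class and its completeness rule are hypothetical stand-ins for a real unit under test.

    import unittest

    class TravelConcept:
        """Hypothetical unit under test: a travel request awaiting review."""
        def __init__(self, traveler, destination, estimated_cost):
            self.traveler = traveler
            self.destination = destination
            self.estimated_cost = estimated_cost

        def is_complete(self):
            # Submittable only when all key fields are present and the
            # cost estimate is a positive amount.
            return (bool(self.traveler) and bool(self.destination)
                    and self.estimated_cost > 0)

    class TestTravelConcept(unittest.TestCase):
        def test_complete_request_passes(self):
            concept = TravelConcept("J. Smith", "Bethesda, MD", 1200.00)
            self.assertTrue(concept.is_complete())

        def test_missing_destination_fails(self):
            concept = TravelConcept("J. Smith", "", 1200.00)
            self.assertFalse(concept.is_complete())

    if __name__ == "__main__":
        unittest.main()

The point is the isolation: the test exercises one small unit directly, with no dependence on the rest of the system, so a failure simply blocks integration rather than generating a bug report.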

5.1.2 Functional Testing


The objective of functional testing is to measure the quality of the functional (business)
components of the system. These tests will verify that the system behaves correctly from the
user/business perspective and functions according to the requirements, models, storyboards or
any other design paradigm used to specify the application. For functional testing, testers will
access the application via a browser and interact with the application as would a typical user,
but will follow testing scripts.
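As a sketch of what one scripted browser step might look like: the plan does not name a browser-automation tool, so Selenium WebDriver, the URL, the field IDs and the expected confirmation text below are illustrative assumptions, not the project's actual harness.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # One scripted functional step: fill in and submit the travel concept
    # request form, then verify the confirmation message the script expects.
    driver = webdriver.Firefox()  # Firefox is on the supported browser list
    try:
        driver.get("http://dev-server/travel-concept/request")  # hypothetical URL
        driver.find_element(By.ID, "traveler").send_keys("J. Smith")       # hypothetical IDs
        driver.find_element(By.ID, "destination").send_keys("Bethesda, MD")
        driver.find_element(By.ID, "submit").click()
        confirmation = driver.find_element(By.ID, "status").text
        # The tester records this step's outcome in the test script.
        print("PASS" if "submitted for review" in confirmation else "FAIL")
    finally:
        driver.quit()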

5.1.3 Load/Volume Testing


Load testing will be conducted using the Microsoft Visual Studio load testing tools. The objective
of this test is to ensure that latency for each type of request in the system (content page, form
processing, and certificate generation) is within acceptable ranges.

5.1.4 User Acceptance Testing


After the developer completes functional testing, stakeholders at NICHD will be given full access
to the system on the development server and will do black-box testing to ensure that the
application meets all business and functional requirements.

5.1.5 Section 508 Compliance Testing


An automated tool will be used to determine Section 508 compliance. All instances of noncompliance identified by the tool will be examined and, where appropriate, remediated by
development staff.
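The plan does not identify the automated tool at this point, so purely as an assumption-level illustration, the sketch below implements one representative check such tools perform: flagging images that lack the text equivalents required under Section 508 (36 CFR 1194.22(a)).

    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        """Flags <img> tags whose alt attribute is missing or empty --
        one of the text-equivalent checks behind Section 508 scanning."""
        def __init__(self):
            super().__init__()
            self.violations = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attr_map = dict(attrs)
                alt = attr_map.get("alt")
                if alt is None or not alt.strip():
                    self.violations.append(attr_map.get("src", "<unknown>"))

    checker = AltTextChecker()
    checker.feed('<img src="logo.gif"><img src="chart.png" alt="Travel cost chart">')
    print(checker.violations)  # ['logo.gif'] -- first image has no alt text

A real scanner covers many more provisions (forms, tables, scripting, color), which is why its findings still receive a manual review before remediation.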


5.1.6 Regression Testing


If any bugs are discovered that require significant rewriting of code, testers will repeat all
relevant functional and load testing to ensure that changes made in the new releases do not
adversely affect areas of existing code.

5.1.7 Operations Acceptance Testing


After all internal testing is complete, the application will be deployed to NICHD servers, where
data-center personnel will run appropriate tests to ensure that the final implementation meets
NICHD performance standards.

5.2 Test Progression/Order

The table below provides a high-level description of the test progression/order associated with this project.

Phase 1 - Internal Testing:
- Unit testing.
- Functional testing/Section 508 compliance testing and remediation.
- Regression testing and remediation.
- Load testing and remediation.

Phase 2 - User Acceptance Testing:
- Functional testing and remediation.
- Regression testing and remediation.

Phase 3 - Operations Acceptance Testing:
- Production set-up and operational testing.
- Operation remediation (if necessary).

5.3 Test Milestones

The table below provides a high-level description of the test milestones associated with this project.

Milestone: Phase 1 - Internal Testing
Testing Type: Unit testing during development; functional testing/Section 508 compliance testing and remediation; regression testing and remediation; load testing and remediation.
Start Date:
End Date:

Milestone: Phase 2 - User Acceptance Testing
Testing Type: Functional testing and remediation; regression testing and remediation.
Start Date:
End Date:

Milestone: Phase 3 - Operations Acceptance Testing
Testing Type: Production set-up and operational testing; operation remediation (if necessary).
Start Date:
End Date:

5.4 Test Data


5.4.1 Content and Documents

Testers will use sample zip files and MS Office documents to test the upload features of the application. Protocol and study center data will be populated from existing study participants.

5.5 Recording Results


Test scripts are maintained in a central repository. As testers work through the test scripts, they indicate whether each step passed or failed. All failed tests are entered online in the Jira bug-tracking system, which is described more fully in section 5.7.

5.6 Analyzing Results


5.6.1 Pass/Fail

All pass/fail criteria are specified in the test scripts. During test execution, if the actual results do not match the expected results, the development team shall be consulted to determine whether the finding is actually a defect.

If the finding is confirmed as a defect, execution of that particular test case shall be suspended with a status of Failed. If any other test case depends on the failed test case, its status is marked as Blocked. The Jira bug-tracking system allows testers and developers to associate comments with bugs to ensure that both quantitative and qualitative results are captured.
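The Failed/Blocked rule above is mechanical enough to state in code. In the sketch below, the test-case IDs and the dependency map are hypothetical; it simply shows how dependents of a failed case would be marked.

    # Minimal sketch of the Failed/Blocked status rule described above.
    dependencies = {
        "TC-02": ["TC-01"],           # TC-02 can only run if TC-01 passed
        "TC-03": ["TC-01", "TC-02"],
        "TC-04": [],
    }

    def assign_statuses(executed_results):
        """executed_results maps case ID -> 'Passed' or 'Failed'."""
        statuses = dict(executed_results)
        for case, prereqs in dependencies.items():
            if any(statuses.get(p) in ("Failed", "Blocked") for p in prereqs):
                statuses[case] = "Blocked"  # suspended, not run
        return statuses

    print(assign_statuses({"TC-01": "Failed", "TC-04": "Passed"}))
    # {'TC-01': 'Failed', 'TC-04': 'Passed', 'TC-02': 'Blocked', 'TC-03': 'Blocked'}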

5.6.2 Number of Tests


All areas of the system that calculate results or insert database rows will be exercised multiple times to ensure accurate calculations and data integrity.

5.7 Defect and Bug Resolution


A defect is a non-conformance of the application under test to its requirements. This includes any flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

5.7.1 Bug Tracking


All bug reporting and remediation monitoring is done using Jira, a Web-accessible bug-tracking application. Each bug is given a unique ID to facilitate tracking throughout its lifecycle. When a bug is entered, the tester must assign it to a specific developer. Once the bug is entered, the system notifies the tester and the developer that the bug has been created. When a developer remediates a bug, he or she changes its status in the application to "ready for testing," and an email is automatically sent to the tester who entered the bug. The bug can then be marked as resolved or returned to the developer for further remediation.
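As a minimal sketch of how bug entry could be scripted against Jira's REST API (v2, as found on Jira Server installations): the server URL, credentials, project key and assignee below are all placeholders, and in practice testers would normally use the Jira web form itself.

    import requests  # third-party HTTP library

    JIRA_URL = "https://jira.example.org"   # placeholder server
    AUTH = ("tester", "password")           # placeholder credentials

    def open_bug(summary, description, assignee):
        """File a bug in Jira and return its generated key, e.g. 'TCS-123'."""
        issue = {
            "fields": {
                "project": {"key": "TCS"},       # hypothetical project key
                "issuetype": {"name": "Bug"},
                "summary": summary,
                "description": description,
                "assignee": {"name": assignee},  # tester assigns a developer
            }
        }
        resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                             json=issue, auth=AUTH)
        resp.raise_for_status()
        return resp.json()["key"]

    bug_key = open_bug("Report totals wrong for multi-leg trips",
                       "Steps to reproduce: ...", "dev.jones")
    print("Created", bug_key)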


5.7.2 Bug Attributes

Within Jira, all bugs may be tagged and sorted by the following attributes:

- Name
- Description
- Priority:
  - Critical - the effect of the problem is seen throughout the site functionality and/or structure.
  - High - the problem makes further testing of the subject function impossible.
  - Medium - the problem does not prevent further testing of the subject function.
  - Low - a display or graphic issue that does not affect site function.
- Severity:
  - Severe - the program ceases meaningful operation.
  - Major - a functional or computational error, but the program continues to operate.
  - Minor - unexpected or inconsistent results.
  - Small - a cosmetic or design issue.
- Due Date
- Assignee
- Client reviewable
- Remediation time required
- Level of effort

5.8 Entrance Criteria

The following conditions are the minimum entrance criteria that must be met before testing can begin:

5.8.1 Complete
The project is code complete and there are no missing features, content or media.
5.8.2 Unit Testing
All developed code has been unit tested.
5.8.3 Test Plan and Script
The test plan and test scripts have been reviewed and signed off by stakeholders.

5.8.4 Human Resources


Testers have been assigned and are in place.

5.8.5 Test Environment


The test environment described in section 3.1 is in place.

5.9 Exit Criteria

The following conditions are the minimum exit criteria for testing:

5.9.1 Requirements
The System meets all requirements in the functional specification and traceability matrix.

5.9.2 High-Priority Bugs


All high-priority bugs are closed.

5.9.3 Medium or Low-Priority Bugs


If any medium or low-priority bugs exist, the implementation risk must be assessed and signed
off by stakeholders.

5.9.4 Test Cases


All test cases have been executed.
5.9.5 User Acceptance Testing
All user acceptance testing is complete and all issues have been addressed.

5.10 Test Deliverables

The testing team shall produce the following work products in addition to the formal testing deliverables:

- Test Plan.
- Test Scripts.
- Bug and Remediation Reports.
- Metric reports from automated testing.


Section 6.0: Unit Testing


Unit testing occurs during code development. Unit tests are carried out each time code is written or updated. Each developer will unit test his or her code before integrating it into the code base. During unit testing, developers design and perform tests for each function to ensure that the code they have written performs according to the application's design. Any code function that defies testing is documented by the developer.

6.1.1 Entrance Criteria


The entrance criteria for unit testing are:

- New or updated software code is ready.
- Detailed Requirements Document.
- Approved Requirements Traceability Matrix.
- Unit test environment.
- Completion of the review of change requests received since the last baseline of the requirements and design documents, and of artifacts from the previous test phase.

6.1.2 Exit Criteria


The following conditions are the minimum exit criteria for unit testing:

- All unit tests pass.
- All urgent and high-priority defects identified during unit testing are resolved.
- All artifacts, reports, and metrics are updated by the development team to reflect the current state of the components tested.
- The PM has reviewed and approved all artifacts.

6.1.3 Test Execution


Unit tests are executed each time the code is compiled. Once the code is checked into a version-controlled repository, the unit tests are executed each time a new build is created.
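As an illustration of such a build hook (the unit-test framework and directory layout are assumptions, since the build tooling is not specified here), a build step could run the whole suite and fail the build on any failure:

    # Minimal sketch of a build-step hook: discover and run all unit tests,
    # exiting non-zero (failing the build) if any test fails.
    import sys
    import unittest

    suite = unittest.defaultTestLoader.discover("tests")  # hypothetical test directory
    result = unittest.TextTestRunner().run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)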


Section 7.0: Functional Testing


Functional testing is a black-box test based on the NICHD Travel Concept requirements, version 1.0. The purpose of this testing is to validate the requirements when the application is deployed. To achieve this, the code release will be deployed into a set of production-like environments that will all undergo functional processing and validation. Functional testing will thereby certify that the code release and its components behave as a system per the underlying requirement specifications.

7.1 Items to be Tested/Not Tested

The following features of the NICHD Travel Concept application shall be tested:

- Workflow of the NICHD Travel Concept.
- Concept Request Form of the NICHD Travel Concept.
- Reports generated by the request form.
- Work list of the NICHD Travel Concept Request form.

The following shall not be tested:

- System hardware.
- COTS products.
- White-box testing.

7.2 Test Approach(es)

The test approach presents an overview of the strategy for developing and executing tests, communicating the test results, and managing the incidents and defects identified during testing. The approach for functional testing is as follows:

- Break the product down into distinct areas and identify the features of the product that are to be tested.
- Specify the procedures to be used for testing sign-off and product release.
- Indicate the tools used to test the product.
- List the resource and scheduling plans.
- Indicate the contact persons responsible for various areas of the project.
- Identify risks and contingency plans that may impact testing of the product.
- Specify bug management procedures for the project.
- Specify criteria for acceptance of development drops (builds) to testing.

7.3 Regulatory/Mandate Criteria


None


7.4 Test Pass/Fail Criteria

An individual test case will be deemed to have failed if it produces anything other than the expected result, provided the test case itself has been verified as correct. Failures will be recorded as defects, and an entry referencing the defect number will be made in the Test Case Execution Report.

7.5 Test Suspension/Resumption Criteria

- Needed hardware or software is not available.
- The software contains serious defects that will prevent or limit further meaningful testing.
- Assigned test resources are not available when needed by the testing team.


Section 8.0: Load/Volume Testing

Load tests put the application under heavy loads, such as testing a Web site under a range of loads to determine at what point the system's response time degrades or fails; automated testing tools are used to conduct this type of test. Volume tests, by contrast, subject a system to a high volume of data in order to determine how many transactions the system can support.

8.1 Test Risks/Issues

The following test risks/issues apply to load/volume testing:

- Load testing cannot emulate all possible user environments.
- Testing assumptions regarding a typical number of concurrent users may be inaccurate.

8.2 Items to be Tested/Not Tested

The following higher-risk items will be tested:

- Tier-based business logic.
- User-based access permissions.
- Dynamic content display.

The following lower-risk item will not be tested:

- Static content display.

8.3 Test Approach(es)

The test approach for load/volume testing is as follows:

- The load test will use the MSDN load-testing module.
- The test will take place in the development environment.
- The system will need to perform adequately with 20 concurrent users.
- The QA team will create scripts that emulate a user working through the entire workflow functionality, and will use the MSDN load-test software to measure performance degradation with 20 concurrent users.

8.4 Regulatory/Mandate Criteria

None

8.5 Test Pass/Fail Criteria

With 20 concurrent users, no screen should take more than 3 seconds from submission to feedback display.
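The plan designates the Microsoft load-testing tools for execution; the Python sketch below is only an assumption-level illustration of the criterion itself, timing 20 concurrent simulated submissions against a hypothetical endpoint and checking the 3-second threshold.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://dev-server/travel-concept/request"  # hypothetical endpoint
    CONCURRENT_USERS = 20
    THRESHOLD_SECONDS = 3.0

    def timed_request(_):
        """Submit one request and return its response time in seconds."""
        start = time.perf_counter()
        urllib.request.urlopen(URL).read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        latencies = list(pool.map(timed_request, range(CONCURRENT_USERS)))

    worst = max(latencies)
    print(f"worst latency: {worst:.2f}s ->",
          "PASS" if worst <= THRESHOLD_SECONDS else "FAIL")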

8.6 Test Deliverables


Load test summary report.


8.7 Test Suspension/Resumption Criteria

- Needed hardware or software is not available.
- The software contains serious defects that will prevent or limit further meaningful testing.
- Assigned test resources are not available when needed by the testing team.

Section 9.0: Acceptance Testing

Acceptance testing is performed by the client or sponsor to confirm that the system meets all requirements and is ready for operational use. A few users, with the help of the testing team, will execute the main flows.

9.1 Test Risks/Issues

In addition to the test risks/issues described in Section 4.3, the following apply:

1. User stakeholders will not have sufficient time to test the application.
   Mitigation: The application will not be launched until stakeholders have completed testing.
2. Critical defects found in the UAT phase.
3. Unavailability of the environment.

9.2 Items to be Tested/Not Tested

1. Users will be given access to all attached test scripts, and the flows will be tested based on priority.
2. Load testing will not be performed as part of UAT.

9.3 Test Approach(es)

1. The users will be given a knowledge transfer (KT) session on the application.
2. They will be provided with test scripts.
3. They will run the test scripts with the help of a few members of the testing team.

9.4 Regulatory/Mandate Criteria

None

9.5 Test Pass/Fail Criteria

Pass/fail criteria will be as described in the attached test scripts.

9.6 Test Deliverables


All bugs, issues or changes uncovered through user acceptance testing will be captured and
tracked via Jira, which is more fully described in section 5.7.

9.7 Test Suspension/Resumption Criteria

- The software contains serious defects that will prevent or limit further meaningful testing.
- Assigned test resources are not available when needed by the testing team.

9.8 Test Environment/Staffing/Training Needs

A separate UAT environment needs to be created, loaded with the software builds that have been passed by the testing team.

Section 10.0: Section 508 Compliance Testing

This phase verifies that the system complies with Section 508 of the Rehabilitation Act, as introduced in Sections 1.3.5 and 5.1.5.

10.1 Test Risks/Issues

1. Automated scanning may not identify every instance of non-compliance, so findings require manual review.

10.2 Items to be Tested/Not Tested

1. All user-facing pages and functions of the application will be scanned for compliance.
2. Functional testing will not be done as part of this phase.

10.3 Test Approaches

1. Run the automated compliance tool against the application in the staging environment.
2. Examine all instances of non-compliance identified by the tool.
3. Remediate findings, where appropriate, through development staff, and re-run the tool to confirm the fixes.

10.4 Regulatory/Mandate Criteria

Subpart B of Section 508 of the Rehabilitation Act (29 U.S.C. 794d).

10.5 Test Pass/Fail Criteria

The application must pass the Accenture Digital Diagnostic Engine 508 validation.

10.6 Test Deliverables


Accenture Digital Diagnostic Engine analytics report.

10.7 Test Suspension/Resumption Criteria

- The software contains serious defects that will prevent or limit further meaningful testing.
- Assigned test resources are not available when needed by the testing team.


10.8 Test Environment


The vendor will run the Accenture Digital Diagnostic Engine analytics engine on the system in
the staging environment.

Section 11.0: Progress Reporting


All testing progress will be captured in Jira, as fully described in Section 5.7. Incremental progress will be reported to the project manager at weekly intervals via an Excel export from Jira.
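The weekly report itself comes from Jira's Excel export; purely as an assumption-labeled sketch of an equivalent automated pull, the code below queries Jira's REST search endpoint and writes a CSV file that Excel can open (server URL, credentials and project key are placeholders).

    import csv
    import requests  # third-party HTTP library

    JIRA_URL = "https://jira.example.org"   # placeholder server
    AUTH = ("tester", "password")           # placeholder credentials

    # Fetch open issues for the hypothetical TCS project via Jira's search API.
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": "project = TCS AND status != Closed", "maxResults": 1000},
        auth=AUTH,
    )
    resp.raise_for_status()

    # Write a simple weekly progress sheet: key, summary, status, assignee.
    with open("weekly_progress.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Key", "Summary", "Status", "Assignee"])
        for issue in resp.json()["issues"]:
            fields = issue["fields"]
            writer.writerow([
                issue["key"],
                fields["summary"],
                fields["status"]["name"],
                (fields.get("assignee") or {}).get("name", "unassigned"),
            ])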


Section 12.0: Appendices


APPENDIX A: TEST PLAN APPROVAL

The undersigned acknowledge that they have reviewed the <Project Name> Test Plan and agree with the information presented within this document. Changes to this Test Plan will be coordinated with, and approved by, the undersigned or their designated representatives.

Signature:                                        Date:
Print Name:
Title:
Role:        Project Manager

APPENDIX B: REFERENCES

The following table summarizes the documents referenced in this document.

Document Name                         Description                                  Location
Requirements Document, Version 3.3    Contains business requirements for the       MS SharePoint
                                      application.
Test Scripts                          Scripts required to run functional tests.    MS SharePoint


APPENDIX C: KEY TERMS


The following table provides definitions and explanations for terms and acronyms relevant to the
content presented within this document.
Term                Definition

APPENDIX D: TEST SUMMARY REPORT

Project Name:
Test Manager/Lead:

Reference Number [1]    Test Case ID    Description of Test Objectives    Results: Pass/Fail

[1] The Reference Number is used to quickly identify when a particular test will be executed within the testing cycle. It is suggested that a decimal numbering scheme be used, as outlined in the table below:

Position    Explanation
X.x.x       Identifies the group in which the tests will be run: 1 = first test group (unit/functional tests); 2 = second test group (initial integration); 3 = third test group (additional integration); 4 = regression or negative testing; etc.
x.X.x       The order in which the test case/script/scenario will be executed within the overall test group.
x.x.X       Indicates the run number, i.e. how many times this test has been or will be executed.

For example, reference number 1.3.2 denotes the second run of the third test case or script in the first (unit/functional) test group.

APPENDIX E: BUG REPORT / DEFECT LOG

Project Name:
Test Lead:
Tester Completing Form:

ID#    Source    Bug/Defect Description    Severity    Priority    Resolution Strategy    Associated Change Request    Date Resolved

Severity scale:            Priority scale:
1 - Urgent                 1 - Critical
2 - High                   2 - High
3 - Medium                 3 - Moderate
4 - Low                    4 - Low
