Version: 3.0
Prepared for
XXXX (XXXX)
Document Sign-Off
Title Signature
IT Director
Sr. App Engineer
Sr. Systems Analyst
App Engineer
QA Engineer
4.1 SCHEDULE
4.2 TECHNICAL
4.3 MANAGEMENT
4.4 PERSONNEL
4.5 REQUIREMENTS
5 TEST APPROACH
1 Introduction
This document identifies the XXXX Software Quality Assurance Department’s methodology as
implemented across all projects. This test approach describes the high-level strategies and
methodologies used to plan, organize, and manage testing of software projects within XXXX. This
test approach also includes descriptions of XXXX Software Quality Assurance Department’s role at
various phases of the project development cycle. It also establishes the goals, processes, and
responsibilities required to implement effective quality assurance functions across all XXXX
software development and release projects.
The details outlined in this document provide the framework necessary to ensure a consistent
approach to software quality assurance throughout the project life cycle. It defines the approach
that will be used by the Quality Assurance (QA) personnel to monitor and assess software
development processes and products to provide objective insight into the maturity and quality of
the software. XXXX software products, processes, and services will be systematically
monitored and evaluated to ensure they meet requirements and comply with XXXX policies,
standards, and procedures, as well as applicable Institute of Electrical and Electronics
Engineers (IEEE) standards.
The overall purpose of this test approach strategy is to gather all of the information necessary to
plan and control the test effort for testing XXXX applications. It describes the approach to testing
the software, and will be the top-level plan used by testers to direct the test effort.
The approach is designed to create clear and precise documentation of the test methods and
processes that XXXX will use throughout the course of system verification testing.
The strategy covers SQA activities throughout the formulation and implementation phases of the
application mission. SQA activities will continue through operations and maintenance of the
system.
This documentation of test methods and processes will serve as the basis for ensuring that all
major milestones and activities required for effective verification testing can be accomplished
efficiently and successfully. This plan may be modified and enhanced as required throughout
future verification testing engagements.
2 Quality Objectives
2.1 Test Approach Objectives
This Test Approach supports the following objectives:
Outlines and defines the overall test approach that will be used;
Identifies hardware, software, and tools to be used to support the testing efforts;
Defines the types of tests to be performed;
Defines the types of data required for effective testing;
Defines the types of security threats and vulnerabilities against which each system will
be tested;
Identifies and establishes traceability from the Requirements Matrix to test cases and
from test cases to the Requirements Matrix;
Serves as a foundation for the development of Test Plans and Test Cases;
Defines the process for recording and reporting test results;
Defines the process for regression testing and closure of discrepancies;
Identifies the items that should be targeted by the tests;
Identifies the motivation for and ideas behind the test areas to be covered;
Identifies the required resources and provides an estimate of the test efforts;
Lists the deliverable elements of the test activities;
Defines the activities required to prepare for and conduct System, Beta, and User
Acceptance testing;
Communicates the System Test strategy to all responsible parties;
Defines deliverables and responsible parties;
Communicates the various dependencies and risks to all responsible parties; and
Defines the overall scope of testing.
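The two-way traceability objective above (Requirements Matrix to test cases, and back) can be sketched as a simple mapping; the requirement and test-case IDs below are invented for illustration and are not from the actual Requirements Matrix:

```python
# Hypothetical sketch of two-way Requirements Matrix <-> test case
# traceability; the IDs are invented for illustration.
req_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

# Invert the mapping so each test case can be traced back to the
# requirement(s) it covers.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

print(test_to_reqs["TC-103"])  # ['REQ-002']
```

Maintaining both directions of the mapping makes it easy to confirm that every requirement has at least one test case and that no test case is orphaned.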
As an objective, this requires careful and methodical testing of the application to first ensure all
areas of the system are scrutinized and, consequently, all issues (bugs) found are dealt with
appropriately.
Assure that the system meets the full requirements of our customer(s);
Maintain the quality of the product; and
Remain within the cost range established at the project outset.
At the end of the project development cycle, the user should find that the project has met or
exceeded all of their expectations as detailed in the requirements. Any changes, additions, or
deletions to the Requirements document, Functional Specification, or Design Specification will
be documented and tested at the highest level of quality allowed within the remaining time of
the project and within the ability of the test team.
This test plan describes the unit, subsystem integration, and system level tests that will be
performed on components of the applications. It is assumed that prior to testing, each
subsystem to be tested will have undergone an informal peer review and only code that has
successfully passed a peer review will be tested.
Unit tests will initially be performed by the software design agency (e.g., Eureka, Avectra)
and subsequently by the XXXX Development Department, which performs secondary unit testing,
boundary checking, and basic black-box testing.
Overview
There are four major milestones in the Development Cycle: Planning Phase, Design Phase,
Development Phase, and Stabilization Phase. The Planning Phase culminates in the completion
of the Planning Docs Milestone (Requirements plus Functional Specs). The Design Phase
culminates in the completion of the Design Specs and Test Plan/Test Specs. The Development
Phase culminates in the Code Complete Milestone. The Stabilization Phase culminates in the
Release Milestone.
During the first two phases, QA Testing plays a supporting role, providing ideas and limited
testing of the planning and design documents. Throughout the final two stages, QA Testing
plays a key role in the project.
During the Design Phase, QA Testing may participate in the design reviews (with Development)
and will have access to the Design Spec under construction. This will help QA Testing to better
prepare its Test Plan, Test Spec, and Test Cases. The Test Plan defines much of the detailed
strategy and specific testing information that will be used for testing the application.
Divide the Design Spec into testable areas and sub-areas. This should not be confused
with the more detailed test specs. The plan will also identify and include areas that are to be
omitted (not tested);
Prepared by: André J. Jackson, Software QA Engineer Proprietary and Confidential
Standard Test Approach January 2011
Version: DRAFT 3 Page 9 of 41
Define testing strategies for each area and sub-area;
Define bug-tracking procedures;
Define release and drop criteria;
Define list of configurations for testing;
Identify testing risks;
Identify required resources and related information; and
Provide a testing schedule.
Development’s internal releases during this phase ultimately drive toward a static Alpha build
(recognized as code-complete). While working on the code for an interim milestone (builds
generated as features are completed), QA Testing writes the test specification and test cases for that
feature set. During this phase, Development will also be conducting their Unit Testing (White
Box Testing) prior to every internal release to QA Testing.
The following areas of the project must be unit-tested and signed-off before being passed on to
QA Testing:
A build must pass the following Build Acceptance Test before moving into internal release
testing:
Passing this milestone indicates that Alpha Testing is ready to commence. Failure of the
Build Acceptance Test requires that the drop be rejected back to Development. This would only
occur in exceptional circumstances.
Throughout the course of Alpha Testing, a brief daily report should be submitted by the QA
Engineer to key senior management personnel indicating testing results to date. Also, frequent
and regular triage meetings shall be held with the project team (more frequent than in previous
phases of the development cycle). QA Testing shall present their bug findings as recorded
within the Test Manager and Team Foundation Server. The objective of the triage meeting is to
determine priorities for bug fixes.
The objective of this phase is to arrive at the Release Milestone with a robust release candidate
(build). There is still an emphasis on testing to continue to identify issues (bugs) and
regression test the bug fixes. All project members may become involved in this process during
this phase.
Application SETUP.EXE;
Installation instructions; and
All documentation (beta test scripts, manuals or training guides, etc.).
Throughout the beta test cycle, bug fixes will be focused on minor and trivial bugs (severity 3
and 4). QA Testing will continue its process of verifying the stability of the application through
Regression Testing (existing known bugs, as well as existing test cases). QA Testing will also
assist with confirmation feedback to beta test results (yes it is a bug; yes Development is
working on a fix, etc.).
The milestone target of this phase is to establish that the Application-Under-Test (AUT) has
reached a level of stability such that it can be released to the client users. For the future
web-based version of the product, testing will ensure that the system operates appropriately
for its expected usage (transaction response times, HTTP hits per second, throughput, number
of simultaneous users, etc.). BAT usually involves 1 - 2 weeks of focused testing for an average
project and 5 - 8 weeks for a major version release.
QA Testing will ensure that the Final Release Candidate (RC) Set passes the following test cases:
Most Performance and Stress Test Cases are classic examples of Suggested Test Cases (although
some should be considered standard test cases). Other examples of Suggested Test Cases
include WAN, LAN, Network, and Load Testing.
It is recommended that a separate cycle of Regression Testing occur at the end of each phase to
confirm the resolution of Severity 1 and 2 bugs. The scope of this last cycle should be
determined by the test point person and product management.
The QA Engineer, Sr. Systems Analyst, Sr. Application Engineer, Application Engineer, and IT
Director should all be involved in these triage meetings. The QA Engineer will provide required
documentation and reports on bugs for all attendees. The purpose of the triage is to determine
the type of resolution for each bug and to prioritize and determine a schedule for all “To Be
Fixed” bugs. QA Testing will then assign the bugs to Development for fixing and report the
resolution of each bug back into Test Manager/TFS. The QA Engineer will be responsible for
tracking and reporting on the status of all bug resolutions.
Note that some discretion is in order here on the part of the QA Engineer. To suspend testing,
the bug must be reproducible, clearly defined, and significant.
4.1 Schedule
The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in
one of the other phases could result in a subsequent slip in the test phase. Close project
management is crucial to meeting the forecasted completion date.
4.2 Technical
Since these are new XXXX systems, in the event of a failure, the old system can be used. We will
run our test in parallel with the production system so that there is no downtime of the current
system.
4.3 Management
Management support is required so when the project falls behind, the test schedule does not get
squeezed to make up for the delay. Management can reduce the risk of delays by supporting the
test team throughout the testing phase and assigning people to this project with the required
skills set.
4.4 Personnel
Due to the aggressive schedule, it is very important to have experienced testers on this project.
Unexpected turnovers can impact the schedule. If attrition does happen, all efforts must be
made to replace the experienced individual. This may pose a challenge, since most of our
testers are XXXX employees with primary responsibilities in other business units.
4.5 Requirements
The test plan and test schedule are based on the current Requirements Document. Any changes
to the requirements could affect the test schedule and will need to be approved by senior
management.
The overall testing approach of the project will address and encompass the following tools and
processes:
Visual Studio Test Professional 2010 - An integrated testing toolset included with
Microsoft Visual Studio 2010 Ultimate that delivers a complete plan-test-track
workflow for in-context collaboration between testers and developers. This tool
includes Test Manager 2010 and Test Runner and is used by the QA Engineer to
facilitate manual/automated testing and traceability.
Microsoft Visual Studio Team Foundation Server 2010 (TFS) - The collaboration
platform at the core of our application lifecycle management (ALM) process. Team
Foundation Server 2010 automates the software delivery process and enables everyone
on our team to collaborate more effectively, be more agile, and deliver better quality
software while building and sharing institutional knowledge. Project artifacts such as
requirements, tasks, bugs, source code, and build and test results are stored in a data
warehouse. The tool also provides reporting, historical trending, full traceability,
and real-time visibility into quality and progress.
The objective of our test metrics is to capture the planned and actual quantities of effort,
time, and resources required to complete all phases of testing of the XXXX application
improvement projects.
The SQA Engineer will use the test metrics as a mechanism to measure the effectiveness of
testing quantitatively. The metrics provide a feedback mechanism for improving the current
testing process and will be used to track actual testing progress against the plan, so that
early indications of testing activity falling behind can be acted on proactively. The SQA
Engineer has created a test metric as a standard means of measuring
different attributes of the software testing process. The metrics are a means of establishing test
progress against the test schedule and may be an indicator of expected future results. Our
metrics will be produced in two forms – Base Metrics and Derived Metrics as outlined below:
Base Metrics
Number of Test Cases
Number of New Test Cases
Number of Test Cases Executed
Number of Test Cases Unexecuted
Number of Test Cases Re-executed
Number of Passes
Number of Fails
Number of Test Cases Under Investigation
Number of Test Cases Blocked
Number of 1st Run Fails
Number of Testers
Test Case Execution Time
Derived Metrics
Percentage of Test Cases Complete
Percentage of Test Cases Passed
Percentage of Test Cases Failed
Percentage of Test Cases Blocked
Percentage of Test Defects Corrected
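As an illustration only (the field names and sample counts below are invented, not Test Manager output), the derived metrics can be computed from the base counts like this:

```python
# Illustrative sketch: deriving the percentage metrics from the base
# test-case counts listed above. Field names are assumptions.

def derived_metrics(base):
    """Compute the derived percentage metrics from base test-case counts."""
    total = base["test_cases"]
    executed = base["executed"]
    return {
        "pct_complete": 100.0 * executed / total,
        "pct_passed": 100.0 * base["passes"] / executed,
        "pct_failed": 100.0 * base["fails"] / executed,
        "pct_blocked": 100.0 * base["blocked"] / total,
        "pct_defects_corrected": 100.0 * base["defects_corrected"] / base["defects_found"],
    }

base = {
    "test_cases": 200, "executed": 150, "passes": 120,
    "fails": 30, "blocked": 10,
    "defects_found": 40, "defects_corrected": 25,
}
print(derived_metrics(base)["pct_complete"])  # 75.0
```

Keeping the derived metrics as pure functions of the base counts ensures the two sets can never drift out of agreement.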
Lab Manager can fully provision and ready multiple environments for testing so that build
scripts can explicitly target a particular lab configuration at build time. Lab Management stores
the environments as virtual machine images in a library of pre-built images using System
Center Virtual Machine Manager (SCVMM) to ensure teams always begin their testing from a
known configuration.
The following operating systems and browsers will be used in multiple combinations for testing
the XXXX applications:
Operating Systems
Windows XP, Service Pack 3 or greater
Windows Vista
Windows 7
Browsers
Firefox 3.0
Internet Explorer 7.0
Internet Explorer 8.0
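A small sketch (not part of the Lab Manager tooling) of how the operating systems and browsers above can be enumerated into a flat configuration matrix for test planning:

```python
from itertools import product

# Enumerate every OS/browser pairing from the lists above into a
# flat test-configuration matrix.
operating_systems = ["Windows XP SP3", "Windows Vista", "Windows 7"]
browsers = ["Firefox 3.0", "Internet Explorer 7.0", "Internet Explorer 8.0"]

configurations = list(product(operating_systems, browsers))
print(len(configurations))  # 9
```

Three operating systems against three browsers yields nine configurations; in practice a subset may be selected if full coverage is not feasible within the schedule.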
6 Test Strategy
The test strategy consists of a series of different tests that will fully exercise the applications. The
primary purpose of these tests is to uncover the system's limitations and measure its full
capabilities. The list of the various planned tests and a brief explanation are:
All developed code must be unit tested. Unit and Link Testing must be completed and
signed off by development team;
System Test plans must be signed off by Sr. App Engineer and Sr. Systems Analyst;
All human resources must be assigned and in place;
All test hardware and environments must be in place, and free for System test use; and
The Acceptance Tests must be completed, with a pass rate of not less than 80%.
The Exit Criteria detailed below must be achieved before the Round 1 software can be
recommended for promotion to Acceptance status. Furthermore, I recommend a minimum
one-day Final Integration testing effort AFTER the final fix/change has been retested.
All High Priority errors from System Test must be fixed and tested;
If any medium or low-priority errors are outstanding - the implementation risk must be
signed off as acceptable by Sr. App Engineer and Sr. Systems Analyst;
Project Integration Test must be signed off by Sr. App Engineer and Sr. Systems Analyst;
and
Business Acceptance Test must be signed off by Business Experts.
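The entry and exit gates above can be sketched as simple checks; the 80% threshold comes from the entry criteria, while the function and field names are assumptions for illustration:

```python
# Illustrative sketch of the gates above: the entry criteria require an
# acceptance-test pass rate of not less than 80%, and the exit criteria
# require that no High Priority errors remain open.

def meets_entry_criteria(passed, executed, threshold=0.80):
    """Entry gate: acceptance-test pass rate must be at least 80%."""
    return executed > 0 and passed / executed >= threshold

def meets_exit_criteria(open_bugs):
    """Exit gate: no open High Priority (priority 1) errors remain."""
    return all(bug["priority"] != 1 for bug in open_bugs)

print(meets_entry_criteria(passed=85, executed=100))      # True
print(meets_exit_criteria([{"id": 101, "priority": 2}]))  # True
print(meets_exit_criteria([{"id": 102, "priority": 1}]))  # False
```

Outstanding medium- or low-priority errors would still need the sign-off described above; the check here covers only the hard High Priority gate.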
8 Test Deliverables
[Diagram: progression of test deliverables - Daily Status Reports, Documents, Reports]
As the diagram above shows, there is a progression from one deliverable to the next. Each
deliverable has its own dependencies, without which it is not possible to fully complete the
deliverable.
8.2 Documents
When this document is completed, the QA Engineer will distribute it to the IT Director, Sr.
Application Engineer/Architect, Sr. Systems Analyst, Application Engineer, and others as
needed for review and sign-off.
Specify the approach that QA Testing will use to test the product, and the
deliverables (extracted from the Test Approach);
Break the product down into distinct areas and identify features of the product that
are to be tested;
Specify the procedures to be used for testing sign-off and product release;
Indicate the tools used to test the product;
List the resource and scheduling plans;
Indicate the contact persons responsible for various areas of the project;
Identify risks and contingency plans that may impact the testing of the product;
Specify bug management procedures for the project; and
Specify criteria for acceptance of development drops to QA Testing (of builds).
Helps you track the team's progress toward resolving bugs and shows the number of
bugs in each state over time, a breakdown of bugs by priority or severity, and the
number of bugs that are assigned to each team member.
Helps you track the rate at which the team is discovering and resolving bugs. Shows a
moving average of bugs discovered and resolved over time.
Reactivation
Helps you track how effectively the team is resolving bugs and shows the number of
bugs that the team resolved over time in relation to the number of bugs that the team
resolved and later reactivated.
8.4 Reports
The QA Engineer will be responsible for writing and disseminating the following reports to the
appropriate XXXX project personnel as required:
9.5.2 Software
In addition to the application and any other customer specified software, the following list of
software should be considered a minimum:
Human Resources

Responsibilities include:
identify test ideas
define test details
determine test results
document change requests
evaluate product quality

QA Engineer (1) - Defines the technical approach to the implementation of the test effort.
Responsibilities include:
define test approach
define test automation architecture
verify test techniques
define testability elements
structure test implementation

Business Unit Testers (6) - Implement and execute the tests.
Responsibilities include:
implement tests and test suites
execute test suites
log results
analyze and recover from test failures
document incidents

Database Administrator, Database Manager (1) - Ensures the test data (database) environment
and assets are managed and maintained.
Responsibilities include:
support the administration of test data and test beds (database)

Sr. Application Engineer, Application Engineer, and Sr. Systems Analyst (2) - Identify and
define the operations, attributes, and associations of the test classes.
Responsibilities include:
define the test classes required to support testability requirements as defined by the
test team
TCID
This field is automatically generated by the Test Manager application. It becomes the Test
Case Number and you cannot alter it.
Status
o Assigned To: The person currently working on the test case.
o State: The current state of the test case:
Closed - The test case is no longer required for future iterations of this team project
Design - The test case is being designed and has not yet been reviewed and approved
Ready - The test case has been reviewed and approved and is ready to be run
o Priority: Importance of the test case to the business goals of the product
o Automation Status: Identifies the test case as manual or automated; useful if you
plan to automate in the future
Classification
o Area: The area of the product with which the test case is associated and maps to the
feature areas in the application under development
o Iteration: The phase within which the bug will be fixed
Steps
o You can write each step and its expected result in the Steps section.
o You can write a common step, or set of common steps, by creating "Shared Steps".
Shared steps can be added to other tests. You may want to write common actions
(such as launching the application under test, logging in to the application, or
closing the application) as shared steps that you know you will need in many other tests.
o You can also attach a file to a step, such as a document for reference or a screen
shot.
Summary
This is where you would add a detailed description of the test case
All Links
Bugs can be attached/linked to the test case by using “All links” tab
Attachments
This is where you can add any file to the test case. For example, you could add a video
recording file, a screen shot file or a log file
Associated Automation
If you have an automated test case, you can link it to the automated test method
from the Associated Automation tab
Parameter Values
These are the variable values set to replace data on each iteration of the test
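As a hypothetical illustration (this is a plain record for exposition, not the actual Test Manager API), the fields described above can be modeled as:

```python
from dataclasses import dataclass, field

# Hypothetical model of the Test Manager test case fields described
# above; field names mirror the prose and are assumptions.

@dataclass
class TestCase:
    tcid: int                                  # auto-generated Test Case Number
    title: str
    assigned_to: str
    state: str = "Design"                      # Design -> Ready -> Closed
    priority: int = 2                          # importance to business goals
    automation_status: str = "Not Automated"   # manual vs. automated
    area: str = ""                             # product area the case maps to
    iteration: str = ""
    steps: list = field(default_factory=list)  # (action, expected result) pairs
    parameters: dict = field(default_factory=dict)  # per-iteration data values

tc = TestCase(tcid=1, title="Login succeeds with valid credentials",
              assigned_to="QA Engineer")
tc.steps.append(("Launch the application under test", "Login page is displayed"))
print(tc.state)  # Design
```

Modeling each step as an (action, expected result) pair mirrors the Steps section described above, where every action is written alongside its expected outcome.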
12 Bug Management
Every bug entered into TFS's tracking system will have an associated Test Manager test case
number. Where a bug does not fit into an existing test case, a new test case
will be created and its number referenced in the Test Manager/TFS bug entry. The new test case
will be categorized or listed as a Smoke Test item in the Test Plan. Older cases may be updated
to extend their potential for catching the new bug as long as it doesn’t significantly alter the
objective of the original test case. In the event that a bug is found by a person outside of QA
Testing, the bug should be reported to the QA Engineer who will then assure that further testing
is completed to verify the existence of the bug, refine the repro steps, incorporate it into Test
Manager, and add it to TFS for bug tracking.
The QA Engineer, Sr. App Engineer, App Engineer, Sr. Systems Analyst and IT Director will
participate in bug review meetings to assign the priority of all currently active bugs. This
meeting will be known as “Bug Triage Meetings”. The QA Engineer is responsible for setting up
these meetings on a routine basis to address the current set of new and existing but unresolved
bugs.
2 - Should Fix: These are important problems that should be fixed as soon as
possible. It would be an embarrassment to the company if this bug shipped.
3 - Fix When Have Time: The problem should be fixed within the time available. If the bug
does not delay the shipping date, then fix it.
4 - Low Priority: It is not important (at this time) that these bugs be addressed. Fix
these bugs after all other bugs have been fixed.
Status
o Assigned To - Select the person the bug is being assigned to.
o State - Select whether the bug is “Active” (default) or “Inactive” in the test cycle.
o Reason - Select whether the bug is related to a “New” defect or a “Build Failure”.
o Resolved Reason - Enter a short description of the resolution.
Classification
o Area - Select the appropriate area in the team project for this bug.
o Iteration - Select the appropriate iteration for this bug.
Planning
o Stack Rank - The stack rank is used as a way to prioritize your work. The lower the
stack rank, the higher the priority of the work item.
o Priority - Select a priority rating from 1 to 4, with 1 representing the most urgent.
o Severity - Select the severity of the bug from 1 to 4, with 1 representing the most critical.
Details
The Details display the test steps and detailed actions that were automatically added to a
specific test step, such as input data, expected and actual results, comments, and
attachments.
System Info
The System Info displays detailed information about the configuration of the computer used
during the test.
Test Cases
Displays additional test cases related to the bug.
All Links
Displays test result attachments that are added as links. This includes diagnostic trace data.
Attachments
A set of files attached to help provide additional information to support the issue.
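A brief sketch of how the Planning fields described above can order a triage queue (lower Stack Rank first, then Priority, then Severity); the bug records here are invented for illustration:

```python
# Order a triage queue by the Planning fields: lower Stack Rank first,
# then Priority (1 = most urgent), then Severity (1 = most critical).
bugs = [
    {"id": 7, "stack_rank": 20, "priority": 1, "severity": 2},
    {"id": 3, "stack_rank": 10, "priority": 3, "severity": 4},
    {"id": 9, "stack_rank": 10, "priority": 2, "severity": 1},
]

triage_order = sorted(bugs, key=lambda b: (b["stack_rank"], b["priority"], b["severity"]))
print([b["id"] for b in triage_order])  # [9, 3, 7]
```

Sorting on a tuple key keeps the tie-breaking order explicit: bugs with equal stack rank fall back to priority, then severity.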
[Flowchart: bug verification workflow. Decision points: "Is it really a bug?" and "Is it
really fixed?". The developer receives email confirmation from the Eureka or Avectra vendor
acknowledging the bug fix.]
13 Documentation
The following documentation will be available at the end of the test phase:
Standard Approach Document
Test Cases
Test Case review
Requirements Validation Matrix
Defect reports
Final Test Summary Report