
Quality:

What is Quality?

Quality is conformance to requirements.

For a piece of work to be considered quality work, it must conform to the agreed-upon requirements. Establishing these requirements is done through a dialog with the customer, first understanding the customer’s expectations, and then translating these expectations into specific, measurable requirements for the product or service to be delivered.

A shorter definition is that a high-quality software system is one that’s delivered to the users on time, costs no more than was projected, and, most importantly, works properly. “Working properly” implies that the software must be as nearly bug-free as possible.

Software QA:

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Who is responsible for Quality?

Management, engineers - in fact, everybody in the organization is responsible for quality.

Approach towards Quality: Technical and procedural; to ensure that the approach is procedural, training is important.

The goal of QA should be continuous improvement in the process/quality of the product, and delivering a product with Zero Defects.

What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view
of the customer, a strong desire for quality, and an attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers, and
an ability to communicate with both technical (developers) and non-technical
(customers, management) people is useful. Previous software development
experience can be helpful as it provides a deeper understanding of the software
development process, gives the tester an appreciation for the developers' point of
view, and reduces the learning curve in automated test tool programming. Judgment
skills are needed to assess high-risk areas of an application on which to focus testing
efforts when time is limited.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they
must be able to understand the entire software development process and how it can
fit into the business approach and goals of the organization. Communication skills
and the ability to understand various sides of issues are important. In organizations
in the early stages of implementing QA processes, patience and diplomacy are
especially needed. An ability to find problems as well as to see 'what's missing' is
important for inspections and reviews.

What makes a good QA or Test manager?

A good QA, test, or QA/Test (combined) manager should:

• be familiar with the software development process


• be able to maintain enthusiasm of their team and promote a positive
atmosphere, despite what is a somewhat 'negative' process (e.g., looking for
or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when
quality is insufficient or QA processes are not being adhered to
• have people judgment skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers, managers, and customers
• be able to run meetings and keep them focused

Software Product:

What is a software Product?


The complete set, or any of the individual items of the set, of computer programs,
procedures, and associated documentation and data designated for delivery to a
customer or end user.

Products of ValueLabs:

1. Mobile Data Suite
2. Enterprise Mining Solution

Services:
1. QA
2. Development
3. CSR

Software Development Life Cycle:

What is SDLC?

The phases a software product goes through between when it is conceived and when
it is no longer available for use. The software life-cycle typically includes the
following: requirements analysis, design, construction, testing (validation),
installation, operation, maintenance, and retirement.

The development process tends to run iteratively through these phases rather than
linearly; several models (spiral, waterfall etc.) have been proposed to describe this
process.

Importance:

It is the base upon which we start developing a product. It allows us to follow a process towards attaining the goal of a "software product". It defines the time in which the process has to be finished.

Types Of SDLC:

1. Waterfall model
• Preliminary project plan: during feasibility study
• Estimation and planning: at beginning of analysis
• Monitoring and control: throughout the project
• Plans and estimates are quite fixed

2. Prototyping model
• Tentative, initial estimation: at very beginning
• Revised, more accurate estimate: after last prototype, before development
• Project planning: two phases

1. Before prototyping: plan prototyping phase, and tentative plan for main
development phase

2. After prototyping: detailed, more accurate plan for main development phase
• Monitoring and control: throughout the project

3. Fourth-generation model
• Initial estimate and project plan: at very beginning
• Revised estimates and plans: after each iteration (if any)
• Monitoring and control: throughout the project
• Strong initial planning, yet flexible

4. Spiral model
• Initial estimate and project plan: at very beginning
• Estimates and plans for current cycle: at beginning of each cycle
• Monitoring and control: throughout the project, and particularly at end-of-
cycle formal reviews

What are some common problems in the software development process?

• Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
• Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
• Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
• Featuritis - requests to pile on new features after development is underway; extremely common.
• Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

Responsibilities of QA:

Reviews

Project review
A formal review of a project’s scope and application. QA’s role is to assess the project and influence the establishment of initial requirements with respect to reliability and testing.

Architecture review

A formal review of a project’s architecture, network interfaces, and data flow. QA’s role is to influence the architecture towards reliable design, and to gain understanding of it in order to formulate the test design strategy.

Functional review

A formal review of a project’s functional design. QA’s role is to assure the project’s requirements have been satisfied, and to gain further understanding of it in order to block out key testing requirements.

Design review

A detailed design review is presented by the developer of a specific aspect of the project’s overall design, and allows for the opportunity to assess the design with respect to the input requirements. QA’s role is to assure requirements have been addressed, that the design will produce a reliable implementation, and to further detail the developing test procedure.

Design documentation review

There are a number of documents that are produced during a project. QA’s role in reviewing these documents is to help assure clarity, consistency, and completeness. Typical documents include the Software Requirements Specification (SRS), architectural spec, functional spec, design spec, and comments inserted into the actual code.

Code reviews

A code review is a detailed investigation into the developing, or developed, code generated from the SRS. QA’s role is to help assure the code meets acceptable coding standards, and that the implementation of the requirements has been robustly met.

Test specification review

A test spec review is presented by the QA analyst for the purpose of reviewing it against the project requirements and design specification. Key goals are to show adequate test scope, coverage, and methodologies.

Test documentation review

Test documentation reviews are presented by the QA analyst for the purpose of reviewing the test plan and procedures to show they are in line with project deliverables.

User documentation review

A review of the user documentation helps to assure that information being sent to the customer is correct, accurate, clear, and concise.

Training documentation review

A review of the training documentation helps to assure that information being sent to the trainers and trainees is correct, accurate, clear, and concise.

Need and Importance of testing:

No application will be bug free unless it is tested thoroughly. Though an application is developed after architecture, design, and other reviews, there is every chance of an issue surfacing in the developed application. To deliver a bug free product to the customer, it is important that the application is tested well. Hence testing is an important phase in the SDLC. An organization's reputation is based upon the quality of the product delivered to the client.

Where does the role of a tester begin in the SDLC?

The role of a tester begins once the design, functionality, and data flow are finalized. At that point the tester starts preparing test plans; once a test plan is written, it has to be reviewed and finalized.

Software Testing Life Cycle:

The phases a software product goes through between when its specifications are given and when it is delivered to the customer.

The Software Testing Life Cycle consists of seven (generic) phases:

1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation, and 7) Post Implementation.

1. Planning (Product Definition Phase)

1.1. High Level Test Plan (includes multiple test cycles)
1.2. Quality Assurance Plan (quality goals, beta criteria, etc.)
1.3. Identify when reviews will be held.
1.4. Problem Reporting Procedures
1.5. Identify Problem Classification.
1.6. Identify Acceptance Criteria - for QA and Users.
1.7. Identify application testing databases
1.8. Identify measurement criteria, i.e. defect quantities/severity level and defect origin (to name a few).
1.9. Identify metrics for the project
1.10. Begin overall testing project schedule (time, resources, etc.)
1.11. Requisite: Review Product Definition Document
     1.11.1. QA input to document as part of the Process Improvement Project
     1.11.2. Help determine scope issues based on features of the product
     1.11.3. 5 - 10 hours / month approximately
1.12. Plan to manage all test cases in a database, both manual and automated.

2. Analysis (External Document Phase)

2.1. Develop Functional Validation matrix based on Business Requirements.
2.2. Develop Test Case format - time estimates and priority assignments.
2.3. Develop Test Cycles matrices and time lines.
2.4. Begin writing Test Cases based on the Functional Validation matrix.
2.5. Map baseline data to test cases to business requirements.
2.6. Identify test cases to automate.
2.7. Automation team: begin to set up variable files and high level scripts in Auto Tester.
2.8. Set up TRACK and Auto Adviser for tracking components of the automated system.
2.9. Define areas for Stress and Performance testing.
2.10. Begin development of the Baseline Database as per test case data requirements.
2.11. Define procedures for Baseline Data maintenance, i.e. backup, restore, validate.
2.12. Begin planning the number of test cycles required for the project, and Regression Testing.
2.13. Begin review of documentation, i.e. Functional Design, Business Requirements, Product Specifications, Product Externals, etc.
2.14. Review test environments and lab, both Front End and Back End.
2.15. Prepare for using the McCabe tool to support development in white box testing and code complexity analysis.
2.16. Set up Requisite and start inputting documents.
2.17. Requisite: Review Externals Document
     2.17.1. QA input to document as part of the Process Improvement Project
     2.17.2. Start to write test cases from Action Response Pair Groups
     2.17.3. Start to develop metrics based on the estimated number of test cases, the time to execute each case, and whether it is automatable.
     2.17.4. Define baseline data for each test case
     2.17.5. 25 hours / month approximately

3. Design (Architecture Document Phase)

3.1. Revise Test Plan based on changes.
3.2. Revise Test Cycle matrices and timelines.
3.3. Verify that Test Plan and cases are in a database or Requisite.
3.4. Revise Functional Matrix.
3.5. Continue to write test cases and add new ones based on changes.
3.6. Develop Risk Assessment Criteria.
3.7. Formalize details for automated testing and multi-user testing.
3.8. Select the set of test cases to automate and begin scripting them.
3.9. Formalize details for Stress and Performance testing.
3.10. Finalize test cycles (number of test cases per cycle based on time estimates per test case and priority).
3.11. Finalize the Test Plan.
3.12. (Estimate resources to support development in unit testing.)
3.13. Requisite: Review Architecture Document
     3.13.1. QA input to document as part of the Process Improvement Project
     3.13.2. Actual components or modules that development will code.
     3.13.3. Unit testing standard defined here: pass/fail criteria, etc.
     3.13.4. Unit testing reports, what they will look like, for both white and black box testing, including inputs/outputs and all decision points.
     3.13.5. List of modules that will be unit tested.


4. Construction (Unit Testing Phase)

4.1. Complete all plans.
4.2. Complete Test Cycle matrices and timelines.
4.3. Complete all test cases (manual).
4.4. Complete Auto Tester scripting of the first set of automated test cases.
4.5. Complete plans for Stress and Performance testing.
4.6. Begin Stress and Performance testing.
4.7. McCabe tool support - supply metrics.
4.8. Test the automated testing system and fix bugs.
4.9. (Support development in unit testing.)
4.10. Run the QA Acceptance test suite to certify the software is ready to turn over to QA.

5. Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase)

5.1. Test Cycle 1: run first set of test cases (front and back end).
5.2. Report bugs.
5.3. Bug verification - ongoing activity.
5.4. Revise test cases as required.
5.5. Add test cases as required.
5.6. Test Cycle II.
5.7. Test Cycle III.

6. Final Testing and Implementation (Code Freeze Phase)

6.1. Execution of all front end test cases - manual and automated.
6.2. Execution of all back end test cases - manual and automated.
6.3. Execute all Stress and Performance tests.
6.4. Provide on-going defect tracking metrics.
6.5. Provide on-going complexity and design metrics.
6.6. Update estimates for test cases and test plans.
6.7. Document test cycles, regression testing, and update accordingly.

7. Post Implementation

7.1. Post implementation evaluation meeting to review the entire project (lessons learned).
7.2. Prepare final Defect Report and associated metrics.
7.3. Identify strategies to prevent similar problems in future projects.
7.4. Create a plan with goals and milestones for how to improve processes.
7.5. McCabe tools - produce final reports and analysis.
7.6. Automation team - 1) Review test cases to evaluate other cases to be automated for regression testing, 2) Clean up automated test cases and variables, and 3) Review the process of integrating results from automated testing with results from manual testing.
7.7. Test Lab and testing environment - clean up the test environment, tag and archive tests and data for that release, restore test machines to baseline, etc.

Testing Methodology

The following is an overview of the quality practices of the Software Quality Assurance team:

-The iterative approach to software development presents a significant challenge for SQA. The iterative, rapid deployment process is characterized by a lack of strict adherence to a traditional waterfall development methodology (marketing first specs the feature set, then engineering refines the marketing requests into more detailed specifications and a schedule, then engineering starts building to specification and SQA starts building tests, then a formal testing cycle, and finally product release). Here is a variant of that development process:
Here is a variant of development:

-As progress is made toward a release, the first priority features are done to a significant level of completion before much progress is made on the second priority features. A similar approach is taken for the 'hopefully' and third priority features. The first priority feature list is all that has to be completed before a product is feature complete, even though there has been time built into the schedule to complete the second priority as well.

-Other than the initial OK from the executive team that they want a particular
product built, there is not a rigorous set of phases that each feature must pass.

-Developers (designers, coders, testers, writers, managers) are expected to interact aggressively and exchange ideas and status.

-By not going heavily into complete specifications, the impact of a better idea along
the way need not invalidate a great deal of work.

-One prototype is worth a pound of specification. However, this does not mean that large-scale changes should not be specified in writing. Often, the effort to do paper-based design is significantly cheaper than investing in a working prototype. The right balance is sought here.

Complementing the strategy of iterative software development, the SQA testing assessment is accomplished through personal interaction between SQA engineers and Development engineers. Lead SQA engineers meet with the development team to assess the scope of the project, whether new features for an existing product, or the development of a new product. Feature, function, GUI, and cross-tool interaction are defined to the level of known attributes. When development documentation is provided, the understanding of the SQA engineer is greatly enhanced. The lead SQA engineer then meets with the test team to scope the level and complexity of testing required. An estimate of test cases and testing time is arrived at and published, based upon the previous discussions.

-Working with the development team, the SQA team takes the builds, from the first
functioning integration, and works with the features as they mature, to determine
their interaction and the level of testing required to validate the functionality
throughout the product.

-The SQA engineers, working with existing test plans and development notations on
new functionality, as well as their notes on how new features function, develop
significant guidelines for actual test cases and strategies to be employed in the
testing. The SQA engineers actively seek the input of the development engineers in
definition and review of these tests.

-Testing is composed of intertwined layers of manual ad hoc and structured testing, supplemented by automated regression testing which is enhanced as the product matures.

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.

Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include the following; a small scoring sketch follows the list:

• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development
cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought
out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
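As a minimal illustration of the risk analysis described above, the sketch below ranks features by a simple likelihood-times-impact score so that testing effort goes to the highest-risk areas first. The feature names and scores are invented for the example.

# Illustrative only: feature names and 1-5 scores are assumptions, not real data.
features = [
    # (feature, likelihood of failure 1-5, impact of failure 1-5)
    ("checkout payment", 4, 5),
    ("profile photo upload", 2, 2),
    ("order history report", 3, 3),
]

# Risk score = likelihood x impact; test the highest-risk areas first.
for name, likelihood, impact in sorted(features, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{name}: risk score = {likelihood * impact}")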

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below; a small exit-criteria sketch follows the list:

• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
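One way to make such stop factors explicit is to encode them as exit criteria and check the current metrics against them, as in the small sketch below; the metric names and threshold values are assumptions for illustration, not recommended figures.

def ready_to_stop(metrics):
    # Each rule mirrors one of the common stop factors listed above.
    return (
        metrics["test_cases_passed_pct"] >= 95           # agreed pass-rate reached
        and metrics["requirements_coverage_pct"] >= 100  # coverage target reached
        and metrics["open_critical_bugs"] == 0           # bug rate below agreed level
        and metrics["days_to_deadline"] >= 0             # release deadline not yet passed
    )

print(ready_to_stop({
    "test_cases_passed_pct": 97,
    "requirements_coverage_pct": 100,
    "open_critical_bugs": 0,
    "days_to_deadline": 3,
}))  # True in this invented example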

What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the same
considerations as described previously in 'What if there isn't enough time for
thorough testing?' apply. The tester might then do ad hoc testing, or write up a
limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?

A common problem and a major headache.

• Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
• It's helpful if the application's initial design allows for some adaptability so
that later changes do not require redoing the application from scratch.
• If the code is well-commented and well-documented this makes changes
easier for the developers.
• Use rapid prototyping whenever possible to help customers feel sure of their
requirements and minimize changes.
• The project's initial schedule should allow for some extra time commensurate
with the possibility of changes.
• Try to move new requirements to a 'Phase 2' version of an application, while
using the original requirements for the 'Phase 1' version.
• Negotiate to allow only easily-implemented new requirements into the
project, while moving more difficult new requirements into future versions of
the application.
• Be sure that customers and management understand the scheduling impacts,
inherent risks, and costs of significant requirements changes. Then let
management or the customers (not the developers or testers) decide if the
changes are warranted - after all, that's their job.
• Balance the effort put into setting up automated testing with the expected
effort required to re-do them to deal with changes.
• Try to design some flexibility into automated test scripts.
• Focus initial automated testing on application aspects that are most likely to
remain unchanged.
• Devote appropriate effort to risk analysis of changes to minimize regression
testing needs.
• Design some flexibility into test cases (this is not easily done; the best bet
might be to minimize the detail in the test cases, or set up only higher-level
generic-type test plans)
• Focus less on detailed test plans and test cases and more on ad hoc testing
(with an understanding of the added risk that this entails).

Test Cycle:

A test cycle will consist of the following tasks: regression, execution of functionality and workflow tests, and documentation review. It is estimated that a test cycle can be completed in five working days. This does not mean that all regression, or all functionality and workflow testing, will be performed for every cycle. It is the responsibility of the QA Lead and team to ensure that the appropriate test coverage is provided in order to meet all milestones. Testing software usually requires a cycle of tests on each new build of the app-under-test.

Test Plan:

Description: The testing strategy and approach that QA will use to validate the quality of the product prior to release.

A test plan provides the following information:

• General description of the project, its objectives, and the QA test schedule.
• Resource requirements including hardware, software and staff responsibilities.
• Features to be tested, as well as features not to be tested.
• Details for the test approach.
• Lists of test deliverables such as test cases and test scripts.
• References to related test plans (which focus on specific topics) and project documentation.
• Dependencies and/or risks.
• Descriptions of how bugs will be tracked.
• Milestone criteria.
• Lists of required reviewers who must provide approval of the test plan.
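As a rough sketch only, the same checklist can be captured as a simple data structure so that an incomplete plan is easy to spot during review; the field names below are assumptions, not a ValueLabs template.

test_plan = {
    "project_description": "",
    "objectives_and_schedule": "",
    "resources": {"hardware": [], "software": [], "staff": []},
    "features_to_test": [],
    "features_not_to_test": [],
    "test_approach": "",
    "deliverables": [],
    "related_documents": [],
    "dependencies_and_risks": [],
    "bug_tracking": "",
    "milestone_criteria": [],
    "required_reviewers": [],
}

# Flag the sections that have not been filled in yet.
missing = [name for name, value in test_plan.items() if value in ("", [], {})]
print("Sections still to be completed:", missing)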

Test Plan Components

Test requirements based on new features or functions.

Specific testing based on the features defined as Development Priority 1. There must
be a plan in place for these features and they must be scheduled for testing. A
product release date will be slipped in order to complete adequate testing of the
Priority 1 features.

Specific testing based on new features or functions defined as Development Priority 2. There must be a plan in place for these features and they must be scheduled for testing. If testing of the Priority 1 features impacts adequate testing of these, they may be dropped from the product.

Specific testing based on new features or functions defined as Development Priority 3. Software Quality Assurance will not schedule or plan for these features. However, Priority 3 features completed prior to Functional Freeze will be added to the SQA Priority 2 list for testing, and appropriate risk assessment will be taken with respect to their inclusion in the released product.

SQA has its own set of Priority 1, Priority 2, Priority 3, which include not only the
Development activities, but also testing required as due diligence for product
verification prior to shipment.

-Priority 1 features include the testing of new features and functions, but also a defined set of base installations, program and data integrity checks, regression testing, documentation (printed, HTML and on-line Help) review, and final "confidence" checks (high level manual or automated tests exercising the most frequently used features of the product) on all media to be released to the public. Products being distributed over the Web also have their Web download and installation verified.

-Priority 2 items include a greater spectrum of installation combinations, boundary checking, advanced test creation and more in-depth "creative" ad hoc testing.

-Priority 3 items usually reflect attempts to bring greater organization to the SQA effort in documentation of test scripts, creation of Flashboards for metric tracking, or expanded load testing.

Importance of Test Plan:

1. To keep a documented set of scenarios and other strategies. This will be useful when approaching other, similar projects.
2. It serves as a reference when an existing team member leaves the team for any reason.
3. It is useful in estimating the testing effort required.

Approach towards writing a Test Plan:

1. Come up with positive test cases, i.e. test cases related to the expected functionality.
2. The approach should also be from the end user's point of view.
3. Come up with negative test cases as well.

In brief:

If you are testing a website, then you must check for:

• Interface
• Headers and tags
• Secured pages
• Field validations and boundaries (a boundary-check sketch follows this list)
• Mandatory fields
• Links, logos, buttons, background color, text fields and combo boxes
• Dynamic links, objects or buttons
• Page navigation
• Browser compatibility
• How CGI programs, applets, JavaScript, ActiveX components, etc. are to be maintained, tracked, controlled, and tested
• Load, performance and stress testing
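For the "field validations and boundaries" item above, a boundary-value check typically exercises values just below, on, and just above each limit. The sketch below is illustrative only: the age field (accepting 18-99) and the validate_age helper are hypothetical, not part of any real page.

def validate_age(value):
    # Hypothetical validation rule: a whole number from 18 to 99.
    return value.isdigit() and 18 <= int(value) <= 99

# Boundary and invalid values with the result each should produce.
cases = {"17": False, "18": True, "99": True, "100": False, "abc": False, "": False}

for value, expected in cases.items():
    actual = validate_age(value)
    verdict = "OK" if actual == expected else "FAIL"
    print(f"input={value!r:7} expected={expected} actual={actual} {verdict}")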

If you are testing a desktop application:

• Check for proper installation.
• Check for appropriate uninstallation.
• Basic functionality.
• Interface.
• Field validations and boundaries.
• Logos, buttons, background color.
• Compatibility with different operating systems.

If you are testing an Application:

• Interface (if provided)
• Data validation
• Check for handling of improper data
• Check for performance
• Check for load
• Check for data flow
• Check for database and queue (Q) connection fluctuations
• Check stored procedures, initial scripts, table design
• Check the complete functionality
• Check for error handling and error logging
• Have to do white box testing
• Have to go for integration testing
• Have to do end-to-end testing (if required)
• Have to check compatibility with 3rd party applications

Test Case

Description: A specific set of steps and data, along with expected results, for a particular test objective. A test case should only test one limited subset of a feature or functionality. A test case does the following:

• Details the test setup.
• Details the test procedure (steps).
• Describes the testing environment.
• Specifies the pass/fail criteria.
• References results of the test.

A good test case is one that has a high probability of finding an error.
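To show that structure in code form, here is a sketch of one such test case using Python's unittest module. The login_user function, the credentials, and the expected message are invented stand-ins; only the shape of the test case (setup, steps, expected result, pass/fail) follows the description above.

import unittest

def login_user(username, password):
    # Stand-in for the feature under test; a real test would drive the product itself.
    if username == "demo" and password == "secret":
        return "Welcome, demo"
    return "Invalid credentials"

class TestLoginWithValidCredentials(unittest.TestCase):
    """Objective: a registered user can log in - one limited piece of functionality."""

    def setUp(self):
        # Test setup: a known good account exists in the test environment.
        self.username, self.password = "demo", "secret"

    def test_valid_login_shows_welcome_message(self):
        # Step: submit valid credentials. Expected result: the welcome message appears.
        self.assertEqual(login_user(self.username, self.password), "Welcome, demo")

if __name__ == "__main__":
    unittest.main()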

What are the different levels of testing?

1. Unit testing

Unit testing is done by the developer to ensure that each individual module works correctly on its own, before the modules are integrated.

2. Integration testing

a) Assembly testing

During assembly testing, the integration of the software components is tested.

b) System integration testing

During system integration testing, the communication with external systems is tested.

The objective of integration testing is to test the integration of and communication between components. Additionally, it may include testing the integration of subsystems or communication with external systems. Integration testing may be done by the programmer, but it may also be done by the build captain, the team lead, the project manager, or even a configuration management group.

On some projects, integration testing may be divided into two levels: assembly
testing and system integration testing. During assembly testing, the integration of
the software components is tested. During system integration testing, the
communication with external systems is tested. For example, on a project to develop
a set of EJBs for use by external applications, assembly testing could be done to test
the integration of the EJBs and the components from which they are built, and
system integration could be done to test communication between the EJBs and the
external applications.
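A rough sketch of the same two levels, using invented components: the assembly-level check below verifies that an OrderService and a TaxCalculator work together, while the external payment gateway is replaced by a stub so the test does not depend on a real external system (that communication would be covered later by system integration testing).

class TaxCalculator:
    def tax(self, amount):
        return round(amount * 0.10, 2)   # assumed flat 10% rate for the example

class StubPaymentGateway:
    # Stands in for the external system during assembly testing.
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderService:
    def __init__(self, calculator, gateway):
        self.calculator, self.gateway = calculator, gateway

    def place_order(self, subtotal):
        total = subtotal + self.calculator.tax(subtotal)
        return self.gateway.charge(total)

# Assembly-level integration check: the two internal components cooperate correctly.
service = OrderService(TaxCalculator(), StubPaymentGateway())
result = service.place_order(100.00)
assert result == {"status": "approved", "amount": 110.0}
print("assembly integration check passed:", result)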

3. System testing

The objectives of system testing are to find defects that are attributable to the
behavior of the system as a whole, rather than the behavior of individual
components, and to test that the software functions as a complete system. This level
of testing is different from integration testing in that the tests are concerned with the
entire system, not just the interactions between components. Other than system
functionality and behavior, system testing may include testing configuration,
throughput, security, resource utilization, and performance.

Types of Testing

• Black box testing: Not based on any knowledge of internal design or code.
Tests are based on requirements and functionality.
• White box testing: Based on knowledge of the internal logic of an
application's code. Tests are based on coverage of code statements, branches,
paths, conditions.

• Unit testing: the most 'micro' scale of testing; to test particular functions or
code modules. Typically done by the programmer and not by testers, as it
requires detailed knowledge of the internal program design and code. Not always
easily done unless the application has a well-designed architecture with tight
code; may require developing test driver modules or test harnesses.

• Incremental integration testing : continuous testing of an application as
new functionality is added; requires that various aspects of an application's
functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by
programmers or by testers.
• Integration testing : Testing of combined parts of an application to
determine if they function together correctly. The 'parts' can be code modules,
individual applications, client and server applications on a network, etc. This type
of testing is especially relevant to client/server and distributed systems.
• Functional testing : Black-box type testing geared to functional
requirements of an application; this type of testing should be done by testers.
This doesn't mean that the programmers shouldn't check that their code works
before releasing it (which of course applies to any stage of testing.)
• System testing : black-box type testing that is based on overall
requirements specifications; covers all combined parts of a system.
• End-to-end testing : similar to system testing; the 'macro' end of the test
scale; involves testing of a complete application environment in a situation that
mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if
appropriate.
• Sanity testing : typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort.
For example, if the new software is crashing systems every 5 minutes, bogging
down systems to a crawl, or destroying databases, the software may not be in a
'sane' enough condition to warrant further testing in its current state.
• Regression testing : re-testing after fixes or modifications of the software
or its environment. It can be difficult to determine how much re-testing is
needed, especially near the end of the development cycle. Automated testing
tools can be especially useful for this type of testing.
• Acceptance testing : final testing based on specifications of the end-user or
customer, or based on use by end-users/customers over some limited period of
time.
• Load testing : testing an application under heavy loads, such as testing of a
web site under a range of loads to determine at what point the system's response
time degrades or fails.
• Stress testing : term often used interchangeably with 'load' and
'performance' testing. Also used to describe such tests as system functional
testing while under unusually heavy loads, heavy repetition of certain actions or
inputs, input of large numerical values, large complex queries to a database
system, etc.

• Performance testing : term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
• Usability testing : Testing for 'user-friendliness'. Clearly this is subjective,
and will depend on the targeted end-user or customer. User interviews, surveys,

video recording of user sessions, and other techniques can be used.
Programmers and testers are usually not appropriate as usability testers.

• Install/uninstall testing: Testing of full, partial, or upgrade install/uninstall processes.
• Recovery testing: Testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems.
• Security testing: Testing how well the system protects against unauthorized
internal or external access, willful damage, etc; may require sophisticated testing
techniques.
• Compatibility testing : Testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
• Exploratory testing : Often taken to mean a creative, informal software test
that is not based on formal test plans or test cases; testers may be learning the
software as they test it.
• Ad-hoc testing: Similar to exploratory testing, but often taken to mean that
the testers have significant understanding of the software before testing it.
• User acceptance testing: Determining if software is satisfactory to an end-
user or customer.
• Comparison testing: Comparing software weaknesses and strengths to
competing products.
• Alpha testing: Testing of an application when development is nearing
completion; minor design changes may still be made as a result of such testing.
Typically done by end-users or others, not by programmers or testers.

• Beta testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
• Mutation testing: A method for determining if a set of test data or test
cases is useful, by deliberately introducing various code changes ('bugs') and
retesting with the original test data/cases to determine if the 'bugs' are detected.
Proper implementation requires large computational resources.
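The toy sketch below illustrates the idea behind mutation testing with an invented function: a deliberate bug (a 'mutant') is introduced, and the existing test data is considered useful only if it catches the mutant.

def original_max(a, b):
    return a if a >= b else b

def mutant_max(a, b):
    return a if a <= b else b   # deliberately mutated comparison operator

# Existing test data: (arguments, expected result).
test_data = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def suite_passes(func):
    return all(func(*args) == expected for args, expected in test_data)

assert suite_passes(original_max)                        # the real code passes
print("mutant detected:", not suite_passes(mutant_max))  # True means the tests are useful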

Manual Testing & Automation Testing:

Manual Testing

-GUI - SQA team members, upon receipt of the Development builds, walk through the GUI and either update the existing hard copy of the product Roadmaps, or create new hard copy. This is then passed on to the Tools engineer to automate for new builds and regression testing. Defects are entered into the bug tracking database for investigation and resolution. Questions about GUI content are communicated to the Development team for clarification and resolution. The team works to arrive at a GUI appearance and function which is "customer oriented" and appropriate for the platform (Web, UNIX, Windows, Macintosh). Automated GUI regression tests are run against the product at the Alpha and Beta "Hand off to QA" (HQA) to validate that the GUI remains consistent throughout the development process. During the Alpha and Beta periods, selected customers validate the customer orientation of the GUI.

-Features & Functions - SQA test engineers, relying on the team definition, exercise
the product features and functions accordingly. Defects in feature/function capability
are entered into the defect tracking system and are communicated to the team.
Features are expected to perform as expected and their functionality should be
oriented toward ease of use and clarity of objective. Tests are planned around new
features and regression tests are exercised to validate existing features and
functions are enabled and performing in a manner consistent with prior releases.
SQA using the exploratory testing method, manually tests and then plans more
exhaustive testing and automation. Regression tests are exercised which consist of
using developed test cases against the product to validate field input, boundary
conditions and so on... Automated tests developed for prior releases are also used
for regression testing.

-Installation - The product is installed on each of the supported operating systems, in either the default flat file configuration or with one of the supported databases. Every operating system and database supported by the product is tested, though not in all possible combinations. SQA is committed to executing, during the development life cycle, the combinations most frequently used by the customers. Clean and upgrade installations are the minimum requirements.

-Documentation - All documentation, which is reviewed by Development prior to Alpha, is reviewed by the SQA team prior to Beta. On-line help and context sensitive Help are considered documentation, as well as manuals, HTML documentation and Release Notes. SQA not only verifies technical accuracy, clarity and completeness, they also provide editorial input on consistency, style and typographical errors.

Automated Testing

-GUI - Automated GUI tests are run against the product at Alpha and Beta "Hand off
to QA" (HQA) to validate that the GUI has remained consistent within the product
throughout the development process. The automated Roadmaps, walk through the
client tool windows and functions, validating that each is there and that it functions.

-Data Driven - Data driven scripts developed using the automation tools and auto
driver scripts are exercised for both UNIX and Windows platforms to provide
repeatable, verifiable actions and results of core functions of the product. Currently
these are a subset of all functionality. These are used to validate new builds prior to
extensive manual testing, thus assuring both Development and SQA of the
robustness of the code.
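As a minimal sketch of the data-driven approach described above (not ValueLabs' actual scripts or tools), the test steps below stay fixed while the input rows vary; the CSV columns and the discount rule are assumptions for illustration.

import csv, io

def discount(order_total):
    # Hypothetical rule under test: 5% discount on orders of 100 or more.
    return 0.05 if order_total >= 100 else 0.0

# In a real data-driven suite this table would live in an external data file.
rows = io.StringIO("order_total,expected_discount\n99.99,0.0\n100.00,0.05\n250.00,0.05\n")

for row in csv.DictReader(rows):
    actual = discount(float(row["order_total"]))
    expected = float(row["expected_discount"])
    status = "PASS" if actual == expected else "FAIL"
    print(f"order_total={row['order_total']}: expected {expected}, got {actual} -> {status}")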

-Future - Utilization of automated tools will increase as our QA product groups become more proficient at the creation of automated tests. Complete functionality testing is a goal, which will be implemented feature by feature.

Different Testing Tools used at ValueLabs:

1. WinRunner
2. Silk
3. WebLoad

Bug:

An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.

The status and resolution fields define and track the life cycle of a bug.

STATUS

The status field indicates the general health of a bug. Only certain status transitions are allowed.

UNCONFIRMED - This bug has recently been added to the database. Nobody has validated that this bug is true. Users who have the "can confirm" permission set may confirm this bug, changing its state to NEW. Or, it may be directly resolved and marked RESOLVED.

NEW - This bug has recently been added to the assignee's list of bugs and must be processed. Bugs in this state may be accepted and become ASSIGNED, passed on to someone else and remain NEW, or resolved and marked RESOLVED.

ASSIGNED - This bug is not yet resolved, but is assigned to the proper person. From here bugs can be given to another person and become NEW, or resolved and become RESOLVED.

NEEDINFO - More information from the reporter is needed to proceed further in fixing this bug.

REOPENED - This bug was once resolved, but the resolution was deemed incorrect. For example, a WORKSFORME bug is REOPENED when more information shows up and the bug is now reproducible. From here bugs are either marked ASSIGNED or RESOLVED.

RESOLVED - A resolution has been taken, and it is awaiting verification by QA. From here bugs are either re-opened and become REOPENED, are marked VERIFIED, or are closed for good and marked CLOSED.

VERIFIED - QA has looked at the bug and the resolution and agrees that the appropriate resolution has been taken. Bugs remain in this state until the product they were reported against actually ships, at which point they become CLOSED.

CLOSED - The bug is considered dead, the resolution is correct. Any zombie bugs who choose to walk the earth again must do so by becoming REOPENED.

RESOLUTION

The resolution field indicates what happened to this bug. All bugs which are in one of the "open" states have the resolution set to blank; all other bugs will be marked with one of the following resolutions.

FIXED - A fix for this bug is checked into the tree and tested.

WONTFIX - The problem described is a bug which will never be fixed. This should be reserved for "unfixable" things; otherwise use NOTGNOME or NOTABUG.

LATER - The problem described is a bug which will not be fixed in this version of the product.

REMIND - The problem described is a bug which will probably not be fixed in this version of the product, but might still be.

DUPLICATE - The problem is a duplicate of an existing bug. Marking a bug duplicate requires the bug# of the duplicating bug and will at least put that bug number in the description field.

INCOMPLETE - All attempts at reproducing this bug were futile, or not enough information was available to reproduce the bug. Reading the code produces no clues as to why this behavior would occur. If more information appears later, please reopen the bug.

NOTGNOME - The bug report describes a problem in software not produced by the GNOME project. It should be reported elsewhere. This is also a suitable resolution for bugs that appear to have been introduced by someone creating a distribution of GNOME.

NOTABUG - The bug report describes behavior which is the correct behavior of the software or was reported in error.
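To make the "only certain status transitions are allowed" rule concrete, the sketch below encodes one reading of the states listed above as a transition map (NEEDINFO is omitted for brevity); it is illustrative, not the exact configuration of any particular bug tracker.

ALLOWED = {
    "UNCONFIRMED": {"NEW", "RESOLVED"},
    "NEW": {"ASSIGNED", "NEW", "RESOLVED"},
    "ASSIGNED": {"NEW", "RESOLVED"},
    "REOPENED": {"ASSIGNED", "RESOLVED"},
    "RESOLVED": {"REOPENED", "VERIFIED", "CLOSED"},
    "VERIFIED": {"CLOSED"},
    "CLOSED": {"REOPENED"},
}

def can_move(current, new):
    return new in ALLOWED.get(current, set())

print(can_move("RESOLVED", "VERIFIED"))  # True
print(can_move("CLOSED", "ASSIGNED"))    # False - a closed bug must be REOPENED first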

Other Fields

Severity

This field describes the impact of a bug on a user. If a bug occurs with great frequency, it can be moved up in severity even if it doesn't meet the other criteria in that category.

Blocker - We should fix and push an update immediately. This will mostly be used for security fixes.

Critical - Crashes, loss of data, severe memory leak.

Major - A major part of the component is nonfunctional.

Normal - A minor part of the component is nonfunctional.

Minor - The component mostly works, but causes some irritation to users. A workaround should usually exist.

Trivial - The component works with 100% functionality, but has visible typos or other cosmetic problems.

Enhancement - Generally a feature request for functionality the program doesn't already have. These can be useful as guides for future product improvements.

Priority

This field describes the importance and order in which a bug should be fixed. This field is used by maintainers and release coordinators to prioritize work that still needs to be done. The available priorities are:

Immediate - This bug blocks development or testing work and should be fixed ASAP, or is a security issue in a released version of the software.

Urgent - This bug blocks usability of a large portion of the product, and really should be fixed before the next planned release.

High - Seriously broken, but not as high impact. Should be fixed before the next major release. Can include cosmetic bugs of particularly high visibility, regressions from functionality provided in previous releases, and more minor bugs that are frequently reported.

Normal - Either a fairly straightforward workaround exists or the functionality is not very important and/or not frequently used.

Low - Just not all that important; fix when time permits.

How to write a Bug Report?

"If you want and expect a program to work, you will be more likely to see a working
program - you will miss failures. If you are expect it to fail, you'll be more likely to see
the problems."

One of the most important (and most common) things an SQA Engineer does is to write
"bug reports". How well you report a bug directly affects how likely the programmer is
to fix it. You should spend a minimum of time needed to describe a problem in a way
that maximizes the probability that it will be fixed. The content and manner of your
reports affect that probability.

To write a fully effective report you must:

- Explain how to reproduce the problem.
- Analyze the error so you can describe it in a minimum number of steps.
- Write a report that is complete and easy to understand.
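For illustration only (the product, build number, and steps below are invented), a report that follows these points might look like:

Summary: Crash when saving a contact whose Name field is left empty
Build / environment: 2.1.0 (build 4512) on Windows XP SP2
Steps to reproduce:
  1. Open the Contacts window and click New.
  2. Leave the Name field empty and enter any phone number.
  3. Click Save.
Expected result: a validation message asking for a name.
Actual result: the application closes with an unhandled exception.
Reproducibility: 3 out of 3 attempts.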

Write bug reports immediately; the longer you wait between finding the problem and
reporting it, the more likely it is the description will be incomplete, the problem not
reproducible, or simply forgotten.

Writing a one-line report summary (the bug report's title) is an art. You must master it. Summaries help everyone quickly review outstanding problems and find individual reports. The summary line is the most frequently and carefully read part of the report. When a summary makes a problem sound less severe than it is, managers are more
