
CSTE Review

Skill Categories
6-10

Skill Categories 6-10
• Test Reporting Process (321)
• User Acceptance Testing (381)
• Testing Software Developed by Contractors (395)
• Testing Internal Control (419)
• Testing New Technologies (471)

6 – Test Reporting Process (321-379)
• Prerequisites to Test Reporting (321)
• Test Tools Used to Build Test Reports (331)
• Test Tools Used to Enhance Test Reporting (351)
• Reporting Test Results (356)

Prerequisites to Test Reporting
• Define and collect test status data (322)
• Define the test metrics used in reporting (323)
• Define effective test metrics (325); see the example below
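An illustrative example of an effective test metric (mine, not from the deck): defect removal efficiency (DRE) = defects found before release / (defects found before release + defects found after release), expressed as a percentage. If testing finds 90 defects and users later report 10 more, DRE = 90 / (90 + 10) = 90%.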

Test Tools Used to Build Test Reports
• Pareto Charts (331, Fig 50-52); a small code sketch follows this list
• Pareto Voting (334)
• Cause and Effect Diagrams (335, Fig 54, 55)
• Check Sheets (337)
• Histograms (340, Fig 58)
• Run Charts (342, Fig 59 [not labeled!])
• Scatter Plot Diagrams (343, Fig 60, 61)
• Regression Analysis (347, Fig 62)
• Multivariate Analysis (348, Fig 63)
• Control Charts (349, Fig 64)
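A minimal sketch of the computation behind a Pareto chart (mine, not from the CBOK): tally defects by category, sort categories in descending order, and accumulate the percentage of total defects. The category names and counts below are hypothetical.

from collections import Counter

# Hypothetical defect log: one category label per recorded defect.
defects = [
    "interface", "logic", "interface", "data", "interface",
    "logic", "documentation", "interface", "data", "logic",
]

counts = Counter(defects)      # tally of defects per category
total = sum(counts.values())

cumulative = 0.0
print(f"{'Category':<15}{'Count':>6}{'Cum %':>8}")
for category, count in counts.most_common():  # descending order, as Pareto bars
    cumulative += 100.0 * count / total
    print(f"{category:<15}{count:>6}{cumulative:>7.1f}%")

The sorted counts plus the cumulative percentage are what let a reader see that the "vital few" categories account for most defects.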

Test Tools Used to Enhance Test Reporting
Benchmarking is a 10-Step Process, with 4 Phases (351)
Planning Phase
1. Identify benchmarking subject and teams
2. Identify and select benchmarking partners
3. Collect data
Analysis Phase
4. Determine current competitive gap
5. Project future performance levels
Integration Phase
6. Communicate findings and gain acceptance
7. Establish functional improvement goals
Action Phase
8. Develop an action plan
9. Implement plans and monitor progress
10. Recalibrate and reset benchmark performance levels
See 353, Fig 65
Test Tools Used to Enhance Test Reporting
Quality Function Deployment (QFD)
• Aimed specifically at satisfying the end user
• Concentrates on maximizing end-user satisfaction, measured by metrics such as end-user compliments
• Dr. Yoji Akao was the principal developer of QFD
• Includes tools and techniques for improving the software product
• See 355, Fig 66
• See also "Quality Function Deployment (QFD) for Software" by Richard E. Zultner, Feb 1992, pp. 1-12
Reporting Test Results
• Reporting test results is a continuous process.
• Significant problems are reported to decision-makers who can determine appropriate action.
• These questions should be answered by test reports: (357)
• What information do stakeholders need?
• How can testers present that information in an easy-to-understand format?
• How can information be presented so it is believable?
• What can testers tell stakeholders to help determine the action to take?

Reporting Test Results
• Two required types of test reporting:
• Current Status Test Reports (357)
• Interim reports occur in any phase of the life cycle, at pre-defined checkpoints, or when important information needs to be reported
• Final Test Reports (374)
• Prepared at the conclusion of each level of testing.
• Reports at the end of unit and integration testing are normally informal, with the purpose of indicating there are no remaining defects at that level.
• Reports at the end of system and acceptance testing are for the customer or user to decide whether to release the software. If known defects exist, the user can develop strategies to address potential weaknesses.
• Details on following slides…

Reporting Test Results
Current Status Test Reports (357)
• Function Test Matrix (358, Table 20)
• Defect Status Report (359, Fig 67)
• Function Testing Status Report (360, Fig 68)
• Expected versus Actual Defects Uncovered Timeline (362, Fig 70)
• Defects Uncovered versus Corrected Gap Timeline (363, Fig 71)
• Average Age of Uncorrected Defects by Type (364, Fig 72); see the sketch after this list
• Defect Distribution Report (365, Fig 73)
• Relative Defect Distribution Report (366, Fig 74)
• Testing Action Report (367, Fig 75)
• Individual Project Component Test Results (369, Fig 76)
• Summary Project Status Report (370, Fig 77)
• Individual Project Status Report (371, Fig 78)
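A minimal sketch of the computation behind the average-age-of-uncorrected-defects report (mine, not from the CBOK); the field names and records below are hypothetical.

from datetime import date
from collections import defaultdict

today = date(2024, 1, 31)  # hypothetical report date
open_defects = [
    {"type": "critical", "opened": date(2024, 1, 2)},
    {"type": "major",    "opened": date(2024, 1, 10)},
    {"type": "critical", "opened": date(2024, 1, 20)},
    {"type": "minor",    "opened": date(2024, 1, 25)},
]

# Group the age in days of each still-open defect by defect type.
ages_by_type = defaultdict(list)
for defect in open_defects:
    ages_by_type[defect["type"]].append((today - defect["opened"]).days)

for defect_type, ages in ages_by_type.items():
    print(f"{defect_type}: average age {sum(ages) / len(ages):.1f} days")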
Reporting Test Results
Final Test Reports (374)
Designed to accomplish three objectives: (375)
• Define the scope of testing
• Present results of testing
• Draw conclusions and make recommendations
Examples of Final Test Reports (375-378)
• Unit Test Report
• Integration Test Report
• System Test Report
• Acceptance Test Report
See Guidelines for Report Writing, 378-379

7 – User Acceptance Testing
• Acceptance Testing Concepts (381)
• Roles and Responsibilities (385)
• Acceptance Test Planning (386)
• Acceptance Test Execution (391)

Acceptance Testing Concepts

• Acceptance testing (381)
• Software acceptance (382)
• Acceptance decisions (382)
• Final acceptance decision (382)
• Final acceptance testing (382)

Acceptance Testing Concepts
• As a life cycle process, software acceptance enables: (382-383)
• early detection of software problems
• preparation of appropriate test facilities
• early consideration of the user's needs during software development
• user involvement in developing system requirements
• … (more on 382-383)

Acceptance Testing Concepts
• Acceptance testing is designed to determine whether software is fit for use.
• Four components of fit: (383)
• Data
• People
• Structure
• Rules
• See Fig 82

Acceptance Testing Concepts
Differences between Acceptance Test and System Test (384)

System Testing:
• Performed by developers and/or software testers
• Performed before acceptance test
• Focus on input processing

Acceptance Testing:
• Performed by user personnel; may include assistance from software testers
• Performed after system test
• Focus on s/w specs
Acceptance Testing Concepts
Roles and Responsibilities (385)
• User – six user roles are defined (385-386)
• Begins with determining if acceptance testing will occur.
• Primary responsibility for planning and conducting acceptance testing (if competency exists)
• If competency does not exist, the user will need to acquire it or outsource acceptance testing.
• Software Tester has one of three roles:
• No involvement at all. The user accepts full responsibility for acceptance testing.
• Advisor. The user will develop and execute the test plan, but testers will provide quality control.
• Active participant. Can include any or all acceptance testing activities.
• Software testers do not define acceptance criteria or make the decision as to whether the software can be released.
Acceptance Test Planning
• Acceptance Criteria (387-389); an illustrative example follows this list
• Acceptance Test Plan (389-390)
• Use Case Test Data (390-391)
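An illustrative acceptance criterion (mine, not from the deck): "All severity-1 and severity-2 defects found during acceptance testing are closed, and at least 98% of planned acceptance test cases pass."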

Acceptance Test Execution
• Execute the Acceptance Test Plan (391)
• determine if acceptance criteria have been met
• reviews at various points
• testing the executable software system
• Which technique to use (or both) is based on criticality, size, resources, and time
• Acceptance Decision (392)
• usually means the developer has no further responsibility other than maintenance
• See the list of typical decisions on p. 392

8 – Testing Software Developed by Contractors (395-417)
• Challenges in Testing Acquired Software (395)
• COTS Software Test Process (399)
• Contracted Software Test Process (405)

Challenges in Testing Acquired Software (395)
• COTS – commercial off-the-shelf software
• Outsourcing Organizations – contractors who are not part of the organization
• Offshore Software Developers – contractors working in another country
• Contractors – refers collectively to contractors, outsourcers, offshore software developers, and developers of COTS.
Differences between software developed in-house and software developed by contractors (396)
• Relinquishment of control
• developers are not employees of the organization, making it difficult to oversee the development process
• No control over day-to-day decisions
• Loss of control over reallocation of resources
• Contractor cannot take workers off one project and assign them to another if work needs to be done to correct problems or speed up development.
• … more differences (396-398)

COTS Software Test Process
• Seven-step process designed to test high-risk COTS (399)
• Assure completeness of needs specs (399)
• Define critical success factor (400)
• Determine compatibility with computer environment (400)
• Assure software can be integrated into business system work flow (402)
• Demonstrate software in operation (403)
• Evaluate people fit (405)
• Acceptance test the COTS (405)

Contracted Software Test Process
• Nine tester responsibilities (405)
• Assure the contracting process is adequate (406)
• Assure requirements and contract criteria are testable (411 [no header!])
• Review adequacy of the contractor's test plan (412)
• Perform acceptance testing on the software (412)
• Issue a report on the adequacy of the software to meet organization needs (413)
• Ensure knowledge transfer occurs and intellectual property rights are protected (413)
• Incorporate copyrighted material into the contracting organization's manuals (414)
• Assure ongoing operation and maintenance of software (414)
• Assure effectiveness of contractual relations (416)
9 – Testing Internal Control (419-470)
• Principles and Concepts (419)
• Internal Control Models (434)
• Testing Internal Controls (440)
• Testing Security Controls (444)

Principles and Concepts
• COSO (Committee of Sponsoring Organizations) developed its internal control framework in the 1990s (419)
• COSO definition of internal control (420)
• …A process … designed to provide reasonable assurance regarding achievement of objectives in these categories:
• Effectiveness and efficiency of operations
• Reliability of financial reporting
• Compliance with applicable laws and regulations.

Principles and Concepts
• Four key terms related to internal control and security: (420)
• Risk – The probability that an undesirable event will occur.
• Exposure – The amount of loss that might occur if an undesirable event occurs.
• Threat – A specific event that might cause an undesirable event to occur.
• Control – Anything that will reduce the impact of risk.
Risk versus Control
• Risk = frequency of occurrence x probable loss per occurrence (422); a worked example follows this list
• To calculate the loss due to risk, one must first determine: (422-423)
• The frequency with which an unfavorable event will occur, and
• The probable loss associated with that unfavorable occurrence.
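A worked example with hypothetical figures: if an unfavorable event is expected to occur 4 times per year and the probable loss per occurrence is $10,000, the expected annual loss due to that risk is 4 x $10,000 = $40,000.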

Environmental vs. Transaction Processing Controls
• Environmental Controls (423)
• the means by which the organization is managed
• organizational policies
• organization structure
• methods of hiring, training, supervising, and evaluating
• Transaction Processing Controls (424)
• Every business application has two systems
• one that processes business transactions
• one that controls the processing of business transactions

Preventive, Detective, and Corrective Controls
• Preventive Controls (426-430); see the code sketch after this list
• Act as a guide to help things happen as they should.
• Most desirable because they stop problems from occurring.
• Detective Controls (431-433)
• Raise awareness of a problem.
• Will not prevent problems, but point out that a problem has occurred.
• Corrective Controls (433-434)
• Assist investigation and correction of the causes of detected exposures. Collect evidence.
• Difficult and time-consuming, but important.
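A minimal code sketch contrasting preventive and detective controls (mine, not from the CBOK); the payment scenario, limit, and function are hypothetical.

import logging

logging.basicConfig(level=logging.WARNING)
MAX_AMOUNT = 10_000  # hypothetical business limit

def process_payment(amount: float) -> bool:
    # Preventive control: reject invalid input before any processing occurs.
    if amount <= 0 or amount > MAX_AMOUNT:
        raise ValueError(f"payment amount {amount} outside allowed range")
    # ... normal transaction processing would happen here ...
    # Detective control: does not stop the payment, but flags unusually
    # large amounts so a reviewer can spot a problem after the fact.
    if amount > 0.9 * MAX_AMOUNT:
        logging.warning("large payment flagged for review: %s", amount)
    return True

process_payment(9_500)  # accepted, but flagged by the detective control

A corrective control would then act on what such a log reveals, investigating and fixing the cause of any flagged exposure.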

Internal Control Models
• COSO Enterprise Risk Management (ERM) Model (434-436)
• COSO Internal Control Framework Model (436-439)

• CobiT Model (439-440)

Testing Internal Controls
• Difference between auditors and testers: (440)
• Auditors assess adequacy of internal controls.
• Testers assure control requirements are testable, then test to see if controls were implemented as specified.
• Perform Risk Assessment (441)
• Test Transaction Processing Controls (442)

Testing Security Controls
• Understand where security is vulnerable to penetration (444)
• Build a penetration point matrix (447)
• Assess security awareness training (458)
• Understand attributes of effective security control (465)
• Select techniques to test security (466)

10 – Testing New Technologies
• Risks associated with new technology (471)
• Newer IT technologies that impact software testing (473)
• Testing the effectiveness of integrating new technology (482)

Risks associated with new technology (471-473)
• Unproven technology
• Technology is defective
• Insufficient technology
• new technology makes existing implemented technologies obsolete
• Variance between documentation and technology execution
• Staff not competent to use new technology
• Lack of understanding how to optimize the new technology
• Technology not incorporated into the organization's work process
• Obsolete testing tools
• Inadequate vendor support.

Newer IT Technologies that Impact Software Testing (473-482)
• Web-based applications
• Distributed application architecture
• Wireless technologies
• New application business models
• e-commerce
• e-business
• New communication methods
• voice and messaging
• hand-held and internet-enabled devices
• data networking
• Wireless Local Area Networks (WLAN)
• Broadband wireless
• Bluetooth
• New testing tools
Testing the Effectiveness of Integrating New Technology
Could be assigned to software testers, software developers, or process engineering and quality assurance groups.
Involves two tasks:
• Determine the Process Maturity Level of the Technology (482-484)
• Test the Controls over Implementing the New Technology (484-488)

Determine the Process Maturity Level of the Technology (482-484)

• Level 1 – People-dependent technology
• Level 2 – Use description-dependent technology processes
• Level 3 – Use of technology
• Level 4 – Quantitatively measure technology
• Level 5 – Optimized use of technology

Test the Controls over Implementing the New Technology (484-488)
• Test actual performance versus stated performance (484)
• Test adequacy of current processes to control the technology (485)
• Test adequacy of staff skills to use the technology (486)

CSTE Review

Skill Categories
6-10

This is the end!


