
What is the best approach to software test estimation?

Nga Huynh

Agenda
Introduction
Why good estimates are important and reasons for poor estimates
Estimation vs Test Estimation (of any other activities)
Confidence rating

Estimation Methods: ways to estimate test effort
Considerations
Questions?

Introduction
Successful test estimation is a challenge
It is difficult to estimate software development effort accurately, and even harder to estimate the testing effort
There is often a lack of detailed information, for example:
Detailed requirements
The organisation's experience with similar projects in the past
A shared understanding of what should be included in the testing effort

Why good estimates are important and reasons for poor estimates?
Why good estimates are important?
Testing is often blamed for late delivery
Testing time is frequently squeezed
Good estimation promotes early risk assessment

Reasons for poor estimates?


Incomplete/ambiguous requirements
New technology
Skills (testers & developers)
Project delays
Poor environments

Estimation vs Test Estimation


Any estimation involves the following:
Identify/estimate/assign tasks
Start and finish dates for each task
Resources and skills
Dependencies (task precedences)
Number of iterations/cycles
Availability of the resources (test environment, people, etc.)
Quality (software, test environment, test cases, etc.)
Delivery of the software to be tested
Reviews of test artefacts

Test Estimation

Confidence rating

System under test   Test Effort   Confidence Rating   Notes
System A            400 hrs       20%                 Start of the project
System A            350 hrs       80%                 More detailed information available

Estimation Methods: ways to estimate test effort


What is the best approach to software test estimation?
Ways to estimate test effort:
Guessing
Formula Based
Parkinson's Law versus Pricing to Win
Parkinson's Law: work expands to fill the time allocated for it
Pricing to Win: the minimum amount we can get away with
Consensus of Experts
Past Project Knowledge
Work Breakdown Structure
MITs Model (MITs = Most Important Tests)

What is the best approach to software test estimation?


Highly dependent on:
The organisation
The project
Experience of the resources involved
Criticality of the system (e.g. life-critical equipment software vs a low-cost computer game)
Complexity and size of the system

Estimation and re-estimation are overhead: allocate roughly 5% - 7.5% of the testing budget to test estimation (Ross Collard)
Use more than one method so the estimates check each other
Involve the team

Estimation Methods
Guessing - Finger In Air (FIA)
Pure guess/gut feel, based on experience:
Past project experiences
Expertise of the estimator

A bad method to base the final estimate on: easily questioned and challenged, so try to delay it until better information is available

Historical estimation
Estimation based on previous test efforts
e.g. a previous effort of 80 tests took 40 hours (0.5 hrs/test), so 100 tests are estimated at 50 hours
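A minimal sketch of this rate-based historical estimation, assuming the only input is a past effort's hours-per-test rate; the function name is illustrative.

```python
# Rate-based historical estimation: derive an hours-per-test rate from a
# past effort and scale it to the new test count.

def historical_estimate(past_hours, past_tests, new_tests):
    """Estimate hours for new_tests using the hours-per-test rate of a past effort."""
    hours_per_test = past_hours / past_tests
    return new_tests * hours_per_test

# Example from the slide: 80 tests took 40 hours (0.5 hrs/test),
# so 100 tests are estimated at 50 hours.
print(historical_estimate(past_hours=40, past_tests=80, new_tests=100))  # 50.0
```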

Past Project Knowledge

The best predictor, but only if the historical data is available, applicable and accurate

Estimation Methods
Formula Based
40% of Development Effort
A quick method, but dependent on accurate development effort estimates
Typically covers system and acceptance testing
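A minimal sketch of this rule-of-thumb formula, assuming test effort is taken as a fixed percentage (40% here) of the development effort estimate; the function name and example figure are illustrative.

```python
# Formula-based estimation: test effort as a fixed fraction of the
# development effort estimate (the 40% rule of thumb).

def formula_based_estimate(dev_effort_hours, test_fraction=0.40):
    """Return test effort as a fixed fraction of the development effort estimate."""
    return dev_effort_hours * test_fraction

# Example: a 1,000-hour development estimate gives a 400-hour test estimate.
print(formula_based_estimate(1000))  # 400.0
```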

Function Point Analysis (FPA)


A formal technique to estimate the size of the system under test
Based on five elements (inputs, outputs, queries, files, interfaces) plus adjustment factors
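A minimal sketch of an unadjusted function point count, using commonly quoted average weights for the five element types; the counts are illustrative, and full FPA also classifies each element by complexity and applies the adjustment factors mentioned above.

```python
# Unadjusted function point count: weight each of the five element types and sum.
# Average-style weights are used here; full FPA rates each element as
# simple/average/complex and then applies adjustment factors.

average_weights = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_queries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

counts = {  # illustrative counts for a small system
    "external_inputs": 12,
    "external_outputs": 8,
    "external_queries": 6,
    "internal_files": 4,
    "external_interfaces": 2,
}

unadjusted_fp = sum(counts[k] * average_weights[k] for k in counts)
print(unadjusted_fp)  # 12*4 + 8*5 + 6*4 + 4*10 + 2*7 = 166
```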

Test Point Analysis (TPA)


A test estimation technique based on FPA
Described in TMap
TPA takes risk into account

Consensus of Experts
Useful when unsure of the system, e.g. new technology or a new application
Ask 3-4 knowledgeable people for independent judgements
If the estimates are similar, take the average
If the estimates are different, a consensus must be produced through discussion
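A minimal sketch of the consensus rule above, assuming "similar" means every estimate lies within a chosen tolerance of the mean; the 15% tolerance and function name are illustrative.

```python
# Consensus of experts: average independent estimates when they agree,
# otherwise signal that a consensus discussion is needed.
from statistics import mean

def consensus_estimate(estimates, tolerance=0.15):
    """Return the average if every estimate lies within +/- tolerance of the mean,
    otherwise None to signal that the experts must reconcile their numbers."""
    avg = mean(estimates)
    if all(abs(e - avg) <= tolerance * avg for e in estimates):
        return avg
    return None  # estimates diverge: produce a consensus through discussion

print(consensus_estimate([90, 100, 110]))  # 100 (similar: take the average)
print(consensus_estimate([60, 100, 180]))  # None (different: discuss further)
```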

Work Breakdown Structure (micro-estimating)


Identify tasks & activities
Identify dependencies
Which tasks must start/finish before others?
Estimate effort & resources, start & end times
Who should do what, and when?
Build the schedule
What is the critical path?
Involve the team

Work Breakdown Structure (micro-estimating)


Phase   Activity                                               Historical Value   % of Project   Preliminary Estimate   Adjusted Estimate
1       Project Startup                                        140                2.6            179                    179
2       Early Project Support (requirements analysis, etc.)    120                2.2            152                    152
3       Decision to Automate Testing                           90                 1.7            117                    -
4       Test Tool Selection and Evaluation                     160                3.0            207                    -
5       Test Tool Introduction                                 260                5.0            345                    345
6       Test Planning                                          530                10             690                    690
7       Test Design                                            540                10             690                    690
8       Test Development                                       1,980              37             2,553                  2,553
9       Test Execution                                         870                17             1,173                  1,173
10      Test Management and Support                            470                9              621                    621
11      Test Process Improvement                               140                2.5            173                    -
        PROJECT TOTAL                                          5,300              100%           6,900                  6,403
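A minimal sketch of how the Preliminary Estimate column can be derived: each phase's historical percentage of the project is applied to the new project's total estimate (6,900 hours here). The phase list is abbreviated and illustrative.

```python
# WBS-based estimation: apply each phase's historical share of total effort
# to the new project's total estimate (abbreviated phase list).

historical_share = {  # % of project, taken from past project data
    "Test Planning": 10,
    "Test Design": 10,
    "Test Development": 37,
    "Test Execution": 17,
}

new_project_total = 6900  # hours, overall estimate for the new project

for phase, pct in historical_share.items():
    preliminary = new_project_total * pct / 100
    print(f"{phase}: {preliminary:.0f} hrs")
# Test Planning: 690 hrs, Test Design: 690 hrs,
# Test Development: 2553 hrs, Test Execution: 1173 hrs
```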

The Most Important Tests (MITs) Method


Sizing the test effort based on the risk of failure in the system
Factors: Complexity, Criticality, Objective Weighting, Planned Test Effort

The Most Important Tests (MITs) Method


Function     Complexity   Criticality   Objective Weighting   Planned Test Effort
Function A   50           1.75          40%                   35
Function B   150          2.25          5%                    17
Function C   60           2.00          20%                   24
Total Units                                                   76

Complexity is determined by counting, e.g.:
Business decisions
Number of screens involved
Number of data values entered
etc.

The Most Important Tests (MITs) Method


For each function in the table above, Criticality considers the following, e.g.:
Customer Affected
User Affected
Business Affected
Frequency

Determining Criticality
             Customer Affected   User Affected       Business Affected       Frequency
High (3)     Directly            Cannot work         Loss of business        Everyday
Medium (2)   Could be            Work around         Loss of money           Occasional
Low (1)      Not at all          User doesn't know   No effect on business   Once only

Example: Total: 7, Average: 7/4 = 1.75 (the Criticality used for Function A)
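A minimal sketch of the criticality calculation above: each of the four criteria is rated Low=1, Medium=2 or High=3, and the ratings are averaged. The specific ratings below are illustrative, chosen so the total matches the slide's example of 7.

```python
# Average criticality: rate each criterion Low=1, Medium=2, High=3 and average.
# These ratings are illustrative, chosen to reproduce the slide's total of 7.

ratings = {
    "Customer Affected": 2,  # Medium: could be affected
    "User Affected": 2,      # Medium: a work around exists
    "Business Affected": 2,  # Medium: loss of money
    "Frequency": 1,          # Low: once only
}

total = sum(ratings.values())       # 7
criticality = total / len(ratings)  # 7 / 4 = 1.75 (used for Function A)
print(total, criticality)           # 7 1.75
```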


Objective Weighting
Purpose: allows testers to specify an appropriate amount of test effort.

Example of Objective Weighting:
5% - 25%      minimal testing
30% - 50%     minimal regression testing
55% - 75%     thorough regression testing
80% - 100%    thorough testing
> 100%        rigorous testing

Planned Test Effort


Planned Test Effort (units) = Complexity x Criticality x Objective Weighting
The Planned Test Effort figures give the proportion of test effort that will be spent in each function/area
Specify what a unit means, e.g.:
An hour
A test case
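A minimal sketch applying the formula above to the functions from the earlier MITs table; the final conversion of units to hours is an illustrative assumption (one unit defined as one hour).

```python
# Planned Test Effort = Complexity x Criticality x Objective Weighting,
# applied to the functions from the MITs table.

functions = [
    # (name, complexity, criticality, objective weighting)
    ("Function A", 50, 1.75, 0.40),
    ("Function B", 150, 2.25, 0.05),
    ("Function C", 60, 2.00, 0.20),
]

total_units = 0
for name, complexity, criticality, weighting in functions:
    units = complexity * criticality * weighting
    total_units += units
    print(f"{name}: {units:.0f} units")  # A: 35, B: 17, C: 24

print(f"Total: {total_units:.0f} units")  # 76
# If a unit is defined as one hour (an assumption), this plan is roughly 76 hours.
```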

Determining Criticality: Other Criteria


Visibility
Number of people who will see a failure

Reliability
Number of faults that have occurred in the past

Tolerability
A rating of how forgiving end users are of failures

Considerations... 1(2)
The testing target
Level of testing
Type of test
The quality of the system under test
Test environment setups (test tools)
Test maturity of the organisation
Scope of the test requirements
Test engineer skill level

Considerations... 2(2)
Domain knowledge
Involve the team
Test iterations/cycles (typically 3-4 iterations)
Defects found in previous testing / expected to be found
Time to report defects: the more time spent on reporting, the less testing you will actually do!

Finally...
Follow up on your estimates
Use a combination of methods to get a more comprehensive and accurate estimate

Questions?

Thank You!
