Nga Huynh
Agenda
Introduction
Why good estimates are important, and reasons for poor estimates
Test estimation vs estimation of any other activity
Confidence rating
Introduction
Successful test estimation is a challenge
It is difficult to estimate software project development accurately, and even harder to estimate the testing effort
Detailed information is often lacking, for example:
Detailed requirements
The organisation's experience with similar projects in the past
An understanding of what should be included in the testing effort
Why good estimates are important and reasons for poor estimates?
Why are good estimates important?
Testing is often blamed for late delivery
Testing time is squeezed when earlier phases overrun
Good estimation promotes early risk assessment
Test Estimation
Confidence rating
Example:
Test Effort: 400 hrs
Confidence Rating: 20% vs 80%
Consensus of Experts
Past Project Knowledge
Work Breakdown Structure
MITs Model (MITs = Most Important Tests)
Allocate time for estimation and re-estimation: roughly 5% - 7.5% of the testing budget should go to test estimation (Ross Collard)
Treat estimation as overhead
Use more than one method, so the methods check each other
Involve the team
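The 5% - 7.5% guideline above can be sketched as a small calculation; the helper name below is hypothetical:

```python
# Sketch of the 5% - 7.5% guideline (hypothetical helper name).
def estimation_time(testing_budget_hrs, low=0.05, high=0.075):
    """Hours to reserve for estimation and re-estimation."""
    return (testing_budget_hrs * low, testing_budget_hrs * high)

# For a 400 hr test effort:
print(estimation_time(400))  # -> (20.0, 30.0)
```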
Estimation Methods
Guessing - Finger In Air (FIA)
Pure guess / gut feel
Based on experience:
past project experiences
the expertise of the estimator
A bad method to base the final estimate on
Easily questioned and challenged, so try to delay committing to such an estimate
Historical Estimation
Estimation based on previous test efforts
e.g. a past effort of 40 hours for 80 tests (0.5 hrs/test) predicts 50 hours for 100 tests
The best predictor, but only if the historical data is available, applicable and accurate
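The worked figures above can be sketched as follows; the helper name is an assumption, not from the slides:

```python
# Historical estimation sketch (hypothetical helper): derive an
# hours-per-test rate from a comparable past project and apply it
# to the new project's test count.
def historical_estimate(past_hours, past_tests, new_tests):
    rate = past_hours / past_tests   # e.g. 40 hrs / 80 tests = 0.5 hrs/test
    return rate * new_tests

print(historical_estimate(40, 80, 100))  # -> 50.0 hours
```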
Estimation Methods
Formula Based
40% of Development Effort
Quick method
Dependent on accurate development effort estimates
Typically covers system and acceptance testing
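As a minimal sketch of the formula-based method, using the 40% figure from the slide (the ratio is a tunable parameter, and the helper name is assumed):

```python
# Formula-based sketch: test effort as a fixed share of development effort.
def formula_estimate(dev_effort_hrs, ratio=0.40):
    return dev_effort_hrs * ratio

print(formula_estimate(1000))  # 1000 hrs of development -> 400.0 hrs of testing
```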
Consensus of Experts
Useful when unsure of the system, e.g. new technology or a new application
Ask 3-4 knowledgeable people for independent judgements
If the estimates are similar, take the average
If the estimates differ widely, a consensus must be negotiated
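A sketch of the average-or-negotiate rule above. The slides do not define "similar", so the 20% maximum spread used here is an assumption:

```python
from statistics import mean

# Consensus-of-experts sketch; max_spread threshold is an assumption.
def consensus(estimates, max_spread=0.20):
    avg = mean(estimates)
    spread = (max(estimates) - min(estimates)) / avg
    if spread <= max_spread:
        return avg        # estimates are similar: take the average
    return None           # too far apart: the experts must negotiate

print(consensus([100, 110, 105]))  # similar -> 105
print(consensus([60, 100, 160]))   # divergent -> None
```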
Schedule
What is the critical path?
[Schedule diagram: bars for Function A, Function B and Function C]
Size can be measured by counts such as the number of screens involved, the number of data values entered, the number of business decisions made, etc.

Function      Size   Complexity   Weighting   Units
Function A      50     1.75          40%        35
Function B     150     2.25           5%        17
Function C      60     2.00          20%        24
                              Total Units:      76

Criticality - consider the following factors:
Customer Affected
User Affected
Business Affected
Frequency
Determining Criticality

Factor              High (3)           Medium (2)   Low (1)
Customer Affected   Directly           Could be     Not at all
User Affected       Cannot work
Business Affected   Loss of business
Frequency           Everyday                        Once only

Total: 7
Function      Size   Complexity   Objective Weighting   Units
Function A      50     1.75             40%               35
Function B     150     2.25              5%               17
Function C      60     2.00             20%               24
                                   Total Units:           76
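The slides do not state the unit formula explicitly, but the table's numbers fit units = size x complexity x objective weighting, rounded. A sketch under that assumption:

```python
# Assumed unit calculation: units = size * complexity * weighting (rounded).
functions = [
    ("Function A", 50, 1.75, 0.40),
    ("Function B", 150, 2.25, 0.05),
    ("Function C", 60, 2.00, 0.20),
]

total_units = 0
for name, size, complexity, weighting in functions:
    units = round(size * complexity * weighting)
    total_units += units
    print(name, units)              # 35, 17, 24

print("Total Units:", total_units)  # -> 76
```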
Objective Weighting
Purpose: allows testers to specify an appropriate amount of test effort.

Example of Objective Weighting:
5% - 25%     minimal regression testing
30% - 50%    thorough regression testing
55% - 75%    minimal testing
> 100%       rigorous testing
Reliability
Number of faults that have occurred in the past
Tolerability
A rating of how forgiving end users are of failures
Considerations... (1 of 2)
The testing target
Level of testing
Type of test
The quality of the system under test
Test environment setup (test tools)
Test maturity of the organisation
Scope of the test requirement
Test engineer skill level
Considerations... (2 of 2)
Domain knowledge
Involve the team
Test iterations/cycles (typically 3-4 iterations)
Defects found in previous testing / expected to be found
Time to report defects: the more time spent on reporting, the less testing you will actually do!
Finally: follow up on your estimates, and use a combination of methods to get a more comprehensive and accurate estimate.
Questions?
Thank You!