In the 4th quarter of 2002 and the 1st quarter of 2003, the authors conducted their fifth annual Software
Metrics Best Practices study. The study, sponsored in part by Rational Software, covered the topics
summarized in the figures below.
Figure 1 below shows the importance respondents place on the use of metrics in software development
activities. 52% of the respondents rated metrics as either very important or extremely important; these
respondents represent the “Best Practices” organizations in the study. This is a 7% increase from the
results of last year’s study.
[Figure 1: How important is the use of metrics in software development activities? Responses on a scale
from 1 (“Not important”) to 6 (“Extremely important”); reported values: 0%, 7%, 15%, 20%, 26%, 32%.]
Demographics
630 individuals worldwide participated. The most common types of organizations that responded were
software manufacturers (30%), consulting organizations (23%), and MIS/IT groups (15%). Over half of
the respondents were a Manager or Project Leader, with almost 25% of the respondents being at the
senior or executive level. In total, over 80% of the respondents had some level of managerial
responsibility. Over 40% of the responding organizations had more than 100 software developers. The
majority of organizations had not had their CMM level formally assessed. Almost 90% of the
respondents’ primary development sites were located in North America, Europe, or Asia/Pacific Rim.
The charts in Figure 2 summarize the results.
Survey respondents were self-selecting through various interactive media. The sample included
previously registered website visitors, as well as unregistered visitors during the study. Some of the
participating organizations included BearingPoint, Boeing, EDS, IBM, Lucent, Motorola, NASA, and
Siemens.
[Figure 2: Demographics.
Position: Manager 33%, Project lead 27%, Software developer or tester 16%, Senior manager 13%,
Executive 11%.
Software developers and testers in organization: 0–5 7%, 6–20 17%, 21–50 16%, 51–100 16%,
101–200 13%, 201–500 18%, 501–1000 5%, 1001–10000 7%, more than 10000 1%.
CMM level achieved through formal assessment: not formally assessed 57%, 1–Initial 10%,
2–Repeatable 11%, 3–Defined 10%, 4–Managed 7%, 5–Optimizing 5%.
Primary development site: North America 43%, Europe 23%, Asia/Pacific Rim 23%, South America 4%,
Australia 4%, other 3%.]
The characteristics of “Best Practices” organizations compared with those of “All Other” organizations
as follows:
• “Best Practices” organizations relied more heavily on metrics: 63% considered them important to
extremely important when making project-related decisions, compared to 23% of “All Other”
organizations.
• 50% of “Best Practices” organizations reported having more than 100 software developers, versus
39% of “All Other” organizations. “Best Practices” organizations were also more likely to have over
20 software developers on one team (31% versus 24%).
• The types of projects teams were most commonly responsible for did not differ significantly between
the two groups. The most common project types for both were software development projects (56%),
real-time embedded systems projects (11%), and system integration projects (10%).
• “Best Practices” organizations were more likely to have had their SEI-CMM Level formally assessed:
49% versus 35% of “All Other” organizations.
Figure 4 below shows the most commonly used quality measurements/metrics by survey respondents.
The measurements with the highest total usage rates are schedule metrics (64%),
requirements metrics (56%), and trends of defects (52%). “Best Practices” organizations have a
significantly higher usage rate for these measurements compared to “All Other” organizations.
[Figure 4: Most commonly used quality measurements/metrics. Categories shown include schedule
metrics, requirements metrics, trends of defects, defect close rates, defect density, fault density, test
coverage, adequacy of tests, resource distribution, and source lines of code.]
Figure 5 shows the usage differences between “Best Practices” organizations and “All Other”
organizations. Measurements/metrics used by 40% or more of respondents are compared below. “Best
Practices” respondents use a much higher number of measurements than “All Other” survey respondents.
The most common tool used by organizations to capture and analyze software metrics is Microsoft Excel.
There is not a great disparity in the usage of each specific tool between “Best Practices” organizations and
“All Others.” Figure 6 indicates the following:
• On average, “Best Practices” organizations use 1.7 tools while “All Other” organizations use 1.4
tools.
• Respondents who answered “other” most commonly wrote in the following tools: Microsoft Project,
Microsoft Access, PSM Insight, PQM Plus, Mercury Test Director, MiniTab, SPSS, custom-developed
tools, homegrown tools, and manual effort.
What tools does your organization use regularly to capture and analyze software metrics?
[Figure 6: Tool usage. Options included MS-Excel (the most used, at 35%), Rational ClearQuest,
Rational ProjectConsole, RationalSuite, MetricCenter, QSM-SLIM, McCabe toolset, Function Point
Workbench, Public Domain tool, and Other.]
Figure 7 shows overall satisfaction with metrics tools among survey respondents (on a scale of 1 to 5,
where 1 is “extremely dissatisfied,” 5 is “extremely satisfied,” and 3 is neutral). The list does not include
every available tool, only those included in the survey and commonly used by respondents. The tools with
the highest satisfaction as reported by participants were
MetricCenter, QSM-SLIM, Public Domain Tool, and Rational ProjectConsole. These tools were rated
favorably compared to the overall average rating of 2.7.
Figure 8 shows the benefits of metrics usage with software development projects. Each respondent was
limited to choosing only three benefits. The most significant benefits (25% or more of total respondents)
chosen include the following:
° More accurate estimates of project size and/or cost
° Better understanding of project status
° Improved quality of delivered software
° More predictable project schedules
Figure 8 indicates that “Best Practices” organizations experience more benefits from the use of software
measurements/metrics. Although each respondent could choose up to three benefits, the average number
chosen by “All Other” organizations was 2.7, indicating that some “All Other” respondents chose fewer
than three benefits, while all “Best Practices” respondents designated three.
What do you feel are the top three (3) benefits your organization has seen from software measurement /
metrics?
[Figure 8: Benefits of metrics usage, “All Others” vs. “Best Practices.” Options included more accurate
estimates of project size and/or cost, improved quality of delivered software, better understanding of
project status, more predictable project schedules, quicker and better-informed management decisions,
improved communication with management, better teamwork, improved morale, a clear indication of
when a project is “done,” reduced costs, and other.]
Figure 9 shows the differences in the amount spent on software measurements/metrics. “Best Practices”
organizations spent an average of 3.2% of their R&D/IT Budget on software measurements/ metrics
compared to “All Other” organizations, which spent an average of 2.4%.
Figure 10 below shows the extent to which metrics are aligned with business or strategic objectives. 46% of
“Best Practices” organizations had close or very tight alignment between metrics and business objectives
compared to only 7% of “All Other” organizations.
To what extent are the metrics you use aligned with your organization’s business or strategic objectives?
[Figure 10: Alignment of metrics with business objectives, “All Others” vs. “Best Practices,” on a scale
from 1 (“No deliberate alignment”) to 5 (“Very tight alignment”), plus “Not applicable / no consideration
given.”]
27% of “Best Practices” organizations responded that reliance on metrics and measurements when
making project-related decisions was very or extremely important. This compares to just 2% of “All
Other” organizations that responded this way. The results are summarized in Figure 11 below.
To what extent do executives, departments and program managers rely on software metrics and
measurements when making project-related decisions?
[Figure 11: Reliance on metrics in project-related decisions, “All Others” vs. “Best Practices,” on a scale
from 1 (“Not important”) to 5 (“Extremely important”), plus “Don’t know.”]
The project management processes and methodologies followed by “Best Practices” organizations did not
differ greatly from those of “All Other” organizations; the results are summarized in Figure 12 below. The
most common processes across all respondents were a homegrown process (42%), RUP (17%), and
RAD (14%).
What project management process and methodology do you follow?
[Figure 12: Project management processes/methodologies, “All Others” vs. “Best Practices”: homegrown
process, RUP, RAD, SDLC, PMBOK, XP, MSF, and other.]
Conclusions
Study of the “Best Practices” organizations in this study suggests the following:
• With greater investment in metrics and better alignment of metrics with business objectives come
greater benefits:
  • More accurate estimates of project size and/or cost (60%)
  • Improved quality of delivered software (42%)
  • Better understanding of project status (38%)
  • More mature software processes (29%)
• Metrics are relied on when making project-related decisions: 27% of “Best Practices” organizations
responded that reliance on metrics and measurements in project-related decisions was very or
extremely important.
• The metrics kept by more than half of “Best Practices” organizations include:
  • Schedule metrics – useful in tracking project status and providing “early warning” of problems
  • Requirements metrics – can be used to control scope creep and its effects on schedule and cost
  • Trends of defects – help predict readiness for release to customers
  • Defect density – a common measure of quality, often used when setting deployment criteria
  • Source lines of code – invaluable for post-mortem analysis and estimation of future projects
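As a rough illustration of how two of the metrics named above might be computed, the sketch below shows
defect density (a common convention is defects per thousand source lines of code) and a simple trend of
open defects. The study does not prescribe these formulas, and all function names and numbers here are
invented for illustration:

```python
# Hypothetical sketch of two metrics from the list above; the exact
# formulas vary by organization, and these are common conventions only.

def defect_density(defect_count, sloc):
    """Defects per thousand source lines of code (KSLOC) -- a common
    quality measure, often used when setting deployment criteria."""
    return defect_count / (sloc / 1000)

def open_defect_trend(weekly_found, weekly_closed):
    """Running count of open defects per week; a falling trend is one
    indicator of growing readiness for release to customers."""
    open_defects, trend = 0, []
    for found, closed in zip(weekly_found, weekly_closed):
        open_defects += found - closed
        trend.append(open_defects)
    return trend

# Example: 180 defects against 45,000 SLOC -> 4.0 defects/KSLOC.
print(defect_density(180, 45_000))
# Four weeks of found/closed counts -> open-defect trend per week.
print(open_defect_trend([12, 15, 9, 4], [0, 7, 11, 11]))
```

Tracked week over week, the second function gives the “trend of defects” signal the conclusions describe:
a curve that peaks and then declines suggests the release is stabilizing.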
Organizations that want to improve their metrics practices can use the information in this study to align
their measurement program with business objectives, set a budget for measurement, identify the metrics to
keep and the tools to assist them, and make project decisions based on real data.