
Application of the Goal/Question/Metrics to the Requirements Management Key Process Area

Annabella Loconsole
Department of Computing Science
Umeå University, Umeå, Sweden
Email: bella@cs.umu.se
URL: http://www.cs.umu.se/~bella

Abstract
The purpose of this paper is to provide software metrics for
implementation of the goals of one of the Key Process Areas (KPA) within
the Capability Maturity Model (CMM), namely the "Requirements
Management" KPA. The CMM developed by the Software Engineering
Institute (SEI) is not well supported by measurement. An application of
the Goal/Question/Metrics (GQM) paradigm to the "Requirements
Management" KPA is therefore presented. The metrics obtained will help
companies whose maturity level is the lowest, to satisfy the goals of the
Requirements Management KPA.

1 Introduction
Software permeates our world, making our lives more comfortable and effective. In recent years, the quality of life has come to depend on software. By examining several examples of software failure we notice how much effort is needed to improve software development. Customers are often unhappy with the results of software products. Software engineers have a difficult time producing quality software, and poor quality is costly.
Today, many software development organisations are planning or implementing either some kind of improvement activities or a measurement programme. However, measurement is inherent to the concept of improvement. Software Process Improvement (SPI) should always include measurement, because it is necessary to compare the state of the software process before and after action is taken to improve it (Orci, 1999).
Software measurement allows success and failure, and/or the degree of success or failure, of a product, a process, or a person to be defined quantitatively. It facilitates the identification and quantification of improvement, lack of improvement, or degradation in our products, processes, and people. It helps to make meaningful and useful managerial and technical decisions, to identify trends, and to make quantified and meaningful estimates. Even when a project runs without problems, measurement is necessary, because it allows us to quantify the health of the project (Fenton and Pfleeger, 1996).
The Capability Maturity Model (CMM) developed by the Software Engineering
Institute (SEI) is intended to help software organisations to improve the maturity of their
software processes (Paulk et al., 1993a), but it is not well supported by a measurement
programme. The purpose of this paper is to provide software metrics for implementation
of the goals of one of the Key Process Areas (KPAs) within the CMM, namely the
"Requirements Management" KPA. This KPA has been chosen, because requirements are
the foundation on which the entire software system is built. The success of a project is
directly affected by the quality of the requirements. Poor understanding, documentation,
and management of requirements can lead to many problems. For instance, the cost of
correcting an error after the system has been delivered to the customer is estimated to be
several times the cost of correcting a similar error detected in the requirements analysis
phase (Pfleeger, 1998). The approach taken here is to apply the Goal/Question/Metric (GQM) paradigm to the Requirements Management KPA. The result of this process, described in section 4, is a set of metrics which can help immature companies satisfy the goals of the Requirements Management KPA.
The remainder of this paper is organised as follows: section 1.1 describes related research in software measurement, sections 2 and 3 present an overview of the CMM and the GQM, the application of the GQM to the CMM is described in section 4, and concluding remarks and future directions are presented in section 5.

1.1 Related Work
Several studies on measurements have been presented prior to this work. The relationship
between measurement and Software Process Improvement (SPI) is stated, for example,
by Terttu Orci (1999). She writes that measurement programmes and SPI are strongly
intertwined and every organisation initiating a SPI programme should increase the
metrics maturity. One of the most notable contributions to this paper is the report by Baumert and McWhinney (1992), which presents a GQM analysis of the Capability Maturity Model (CMM) and discusses the metrics implied by the CMM. The metrics are presented from the perspective of the quality attribute they address; they are not grouped by Key Process Area. Fenton and Pfleeger (1996) and Joseph Raynus (1999) confirm the connection between measurement and the CMM. In his book, Joseph Raynus reviews the CMM, demonstrating that measurement can be used to improve the behaviour of a software development organisation. His book represents a quantitative approach to software management and SPI. The GQM approach has been applied to each Key Process Area of CMM level 2 by several students at the University of Calgary in a graduate course on software engineering (Goodbrand and Wang, 1997; Li, 1997; Jones, 1998). They define some questions, which, however, are not grouped by goals. In this paper, a more comprehensive collection of questions and metrics will be presented.
The importance of good requirements is stated by Hammer et al. (1998). They
describe metrics that can be applied to requirements and present an example of a
requirements specification analysis.
Basili is the original creator of the Goal/Question/Metric (GQM) paradigm; Basili and Rombach (1988) provide a comprehensive description of it. An example of the GQM methodology is given by Rosenberg and Hyatt (1996), whose paper can be used as a guideline for applying the GQM approach. A suggestion of how to create a measurement programme is made by Dave Zubrow (1998), who shows, in a procedural way, the steps necessary to construct an action plan.

 7KH&DSDELOLW\0DWXULW\0RGHO
The Capability Maturity Model (CMM) for software, as defined by the Software
Engineering Institute, is a process model, which provides guidance for companies. It is a
navigation tool for their Software Process Improvement journey.
The CMM is structured in stages, evolving from one well-defined maturity level to the next. This means that companies start from a chaotic software development process (level 1) and go through levels of improved visibility and control of the process until they reach a mature process that is under statistical control. The result is increased process
capability in the organisation. As shown in Figure 1, the CMM is composed of 5 distinct
levels (Initial, Repeatable, Defined, Managed, Optimising) (Paulk et al., 1993a).
Figure 1: The Key Process Areas by Maturity Level (Paulk et al., 1993a).
Each CMM level, except the initial level, has several Key Process Areas (KPA),
which indicate the areas of the software process to be improved. In essence, they define
what skills, abilities, policies, and practices are needed for a company to be considered to
be at that particular level. For level two, known as the repeatable level, there are six
defined KPAs. These address activities related to planning, managing, and tracking
several aspects of the software project. A level 1 organisation must establish basic project
management controls and discipline to achieve the repeatable level (Paulk et al., 1993a).
One of the level 2 KPAs is Requirements Management. "The purpose of Requirements Management is to establish a common understanding between the customer and the software project of the customer's requirements that will be addressed by the software project", as defined in Paulk et al. (1993b). This means that the requirements of a software project should be complete, documented, unambiguous, controlled, etc., in order to design a software product which satisfies the customer's needs. Requirements often change throughout the software development life cycle, but the control of these changes is frequently poor. The Requirements Management activity is focused on controlling the gathering of requirements, establishing an agreement on the requirements between the customer and the software team, and checking, reviewing, and managing changes to the requirements. This activity is the process of ensuring that a software product produced from a set of requirements will meet those requirements.

3 The Goal/Question/Metric Paradigm
The Goal/Question/Metric (GQM) Paradigm is a method for helping an organisation to focus its measurement programme on its goals. It was created by Victor Basili at the University of Maryland in 1984. The paradigm states that an organisation should have specific goals in mind before data are collected. It prescribes no particular goals, but rather a structure for defining goals and refining them into a set of quantifiable questions, which in turn imply a specific set of metrics and data to be collected in order to achieve these goals. The GQM paradigm consists of three steps:
• Specify a set of goals based on the needs of the organisation and its projects. Determine what the organisation wants to improve or learn. The process of goal definition is supported by templates like the ones defined in Basili and Rombach (1988). By using these templates it is possible to define the goals in terms of purpose, perspective, and environment. The identification of subgoals, entities, and attributes related to the subgoals is made in this step.
• Generate a set of quantifiable questions. Business goals are translated into operational statements with a measurement focus. Basili and Rombach (1988) provide different sets of guidelines to classify questions as product-related or process-related. The same questions can be defined to support data interpretation of multiple goals.
• Define sets of metrics that provide the quantitative information needed to answer the quantifiable questions. In this step, the metrics suitable to give information to answer the questions are identified and related to each question. Generally, each metric can supply information to answer several questions, and sometimes a combination of metrics is needed to make up the answer to a question.
Once these steps have been carried out, data are collected and interpreted to produce answers to the quantifiable questions defined, in order to fulfil the goals of the organisation (Rosenberg and Hyatt, 1996; Basili and Rombach, 1988; Zubrow, 1998).
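To make the three steps concrete, the goal/question/metric hierarchy can be written down as a simple data structure. The following Python sketch is illustrative only: the class and field names (Goal, Question, Metric, purpose, perspective, environment) are shorthand for the template elements mentioned above, not an API defined by the GQM literature. It instantiates a fragment of the first Requirements Management goal discussed in section 4.

# Minimal sketch of the goal/question/metric hierarchy as a data structure.
# Class and field names are illustrative, not defined by the GQM literature.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str    # e.g. "Number of changes per requirement"
    scale: str   # e.g. "numerical" or "nominal"

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str       # what the organisation wants to improve or learn
    perspective: str   # the viewpoint, e.g. that of the project manager
    environment: str   # the organisational context
    questions: List[Question] = field(default_factory=list)

# A fragment of the first Requirements Management goal (see section 4).
goal1 = Goal(
    purpose="Control the requirements allocated to software to establish a baseline",
    perspective="Software project manager",
    environment="Level 1 software organisation",
    questions=[
        Question("What is the level of requirements stability?",
                 [Metric("Number of initial requirements", "numerical"),
                  Metric("Number of changes per requirement", "numerical")]),
    ],
)
print(goal1.questions[0].metrics[0].name)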
4 Application of the Goal/Question/Metrics
The first step in the GQM paradigm is to identify the measurable goals for the
Requirements Management Key Process Area (KPA). The CMM defines two distinct
goals. The first goal of the Requirements Management KPA states the following:
System requirements allocated to software are controlled to establish a baseline for software engineering and management use (Paulk et al., 1993b).
It focuses on the control of requirements to set up a baseline. If the requirements are
not controlled, there will be no clear picture of the final product, because the final
product is based on the requirements. The second goal of the Requirements Management
KPA states the following:
Software plans, products, and activities are kept consistent with the system requirements allocated to software (Paulk et al., 1993b).
The main focus of this goal is the consistency between the requirements and any
software product created from those requirements. This consistency will result in the
design of the product required by the customer.
The second step in the GQM paradigm is to generate a set of quantifiable questions.
The following methods and sources have been used to produce the questions presented in the following subsections. Some questions are generated by analysing the goals word by word, and some are defined by analysing the Key Practices (Paulk et al., 1993a) of the Requirements Management KPA. Other questions are formulated by Li (1997), Jones (1998), Goodbrand and Wang (1997), and Baumert and McWhinney (1992). For some questions, a rationale is given to better explain their meaning and/or utility.

4.1 Questions for the First Goal of the Requirements Management KPA
By analysing the first goal, two distinct questions have arisen: how can the requirements
be controlled and how can the baseline be established? To increase the control of the
requirements, their status as well as their stability could be investigated.
1. What is the current status of each requirement?
Possible statuses of a requirement include: new, analysed, approved, rejected, documented, incorporated (into the baseline), designed, implemented, tested, etc.
2. What is the level of requirements stability?
Requirements stability is concerned with the changes made in requirements. A set of
questions about requirements changes can be defined as follows:
3. Why are the requirements changed?
4. What is the cost of changing the requirements?
5. Is the number of changes to requirements manageable?
6. Is the number of changes to requirements decreasing with time?
7. How are affected groups and individuals informed about the changes?
8. How many other requirements are affected by a requirement change?
9. In what way are other requirements affected by a requirement change?
10. Is the size of the requirements manageable?
The level of requirements stability can also be measured by locating problematic requirements:
11. How many requirements have potential problems?
This question can be divided into the following subquestions:
12. How many incomplete, inconsistent, and missing allocated requirements are
identified?
13. Is the number of “To Be Done” (TBD) items decreasing with time, i.e., are the TBDs being resolved in a timely manner?
To establish a baseline it might be useful to document the requirements.
14. How are the requirements defined and documented?
15. Are the requirements scheduled for implementation in a particular release actually addressed as planned?¹
The metrics proposed to answer all the questions listed above are shown in Table 1.
Please observe that some of these questions can also be used for the second goal of the
Requirements Management KPA, for instance, questions 14 and 15.

¹ Questions 2 and 3 are taken from Jones (1998), and questions 1 and 4 are taken from Goodbrand and Wang (1997). Questions 5, 6, 13, and 15 are extracted from Baumert and McWhinney (1992), and questions 7 and 14 are taken from Li (1997).
Goal 1: System requirements allocated to software are controlled to establish a baseline for software engineering and management use.

1. What is the current status of each requirement?
   Metrics: Status of each requirement.
2. What is the level of requirements stability?
   Metrics: Number of initial requirements; number of final requirements; number of changes per requirement.
3. Why are the requirements changed?
   Metrics: Number of initial requirements; number of final requirements; number of changes per requirement; number of function points per requirement; number of tests per requirement; type of change to requirements; reason for change to requirements; major source of request for a requirements change; phase in which the change was requested.
4. What is the cost of changing the requirements?
   Metrics: Cost of changing.
5. Is the number of changes to requirements manageable?
   Metrics: Total Number of Requirements (TNR); number of requirement changes proposed/TNR; number of requirement changes open/TNR; number of requirement changes approved/TNR; number of requirement changes incorporated into the baseline/TNR; the computer software configuration item(s) (CSCI) affected by a requirement change; major source of request for a requirement change; requirement type for each requirement change; number of requirements affected by a change; number of requirements rejected.
6. Is the number of changes to requirements decreasing with time?
   Metrics: Number of requirement changes per unit of time.
7. How are affected groups and individuals informed about the changes?
   Metrics: Notification of Changes (NOC), which shall be documented and distributed as a key communication document; number of affected groups informed about the NOC/total number of affected groups.
8. How many other requirements are affected by a requirement change?
   Metrics: Number of requirements affected by a change.
9. In what way are the other requirements affected by a requirement change?
   Metrics: Type of change to requirement; reason for change to requirement; phase in which the change was requested.
10. Is the size of the requirements manageable?
   Metrics: Total Number of Requirements; function points per requirement.
12. How many incomplete, inconsistent, and missing allocated requirements are identified?
   Metrics: Number of incomplete requirements; number of inconsistent requirements; number of missing requirements.
13. Is the number of “To Be Done” (TBD) items decreasing with time?
   Metrics: Number of TBDs in requirements specifications; number of TBDs per unit of time.
14. How are the requirements defined and documented?
   Metrics: Type of documentation.
15. Are the requirements scheduled for implementation in a particular release actually addressed as planned?
   Metrics: Number of requirements scheduled for each software build or release.

Table 1: Questions and metrics for the first goal of the Requirements Management KPA.
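Several of the metrics in Table 1 can be derived from a simple change log kept per requirement. The Python sketch below is a minimal illustration, assuming a hypothetical in-memory record layout (the field names are not prescribed by the CMM or the GQM); it computes the number of changes per requirement (question 2), the ratio of requirement changes to the total number of requirements (an approximation of the question 5 metrics), the number of changes per unit of time (question 6), and the number of open TBDs (question 13).

# Minimal sketch: deriving a few Table 1 metrics from a per-requirement
# change log. The record layout below is hypothetical, for illustration only.
from collections import Counter
from datetime import date

# Each entry: (requirement id, date of change, type of change, reason).
change_log = [
    ("REQ-1", date(2000, 1, 10), "modified", "customer request"),
    ("REQ-1", date(2000, 2, 3),  "modified", "inconsistency found"),
    ("REQ-2", date(2000, 2, 15), "deleted",  "out of scope"),
]
requirements = {
    "REQ-1": {"status": "approved",   "tbd_count": 0},
    "REQ-2": {"status": "rejected",   "tbd_count": 0},
    "REQ-3": {"status": "documented", "tbd_count": 2},
}

# Question 2: number of changes per requirement.
changes_per_requirement = Counter(req_id for req_id, *_ in change_log)

# Question 5: requirement changes relative to the Total Number of Requirements (TNR).
total_requirements = len(requirements)
change_ratio = len(change_log) / total_requirements

# Question 6: number of requirement changes per unit of time (here, per month).
changes_per_month = Counter((d.year, d.month) for _, d, *_ in change_log)

# Question 13: number of TBDs still open in the requirements specification.
open_tbds = sum(r["tbd_count"] for r in requirements.values())

print(changes_per_requirement, change_ratio, changes_per_month, open_tbds)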
4.2 Questions for the Second Goal of the Requirements Management KPA
One immediate question that arises is “is there traceability between requirements and the
software project?” This question is important, because traceability between requirements
and the software project facilitates the analysis of the effects of a software change and
reduces the effort to locate the causes of a product failure. The question above can be
answered by tracking the requirements and changes made to the requirements.
1. Does the software product satisfy the requirements?
2. What is the impact of requirements changes on the software project?
3. What is the status of the changes to software plans, work products, and activities?
The status of changes to software plans, work products, and activities can be identified, evaluated, assessed, documented, planned, communicated, and tracked.
4. Are the requirements scheduled for implementation in a particular release actually addressed as planned?
5. How are the requirements defined and documented?
6. Does the number of TBDs prevent satisfactory completion of the product?
7. Are all development work products consistent with the requirements?²

² Question 1 is taken from Goodbrand and Wang (1997), question 2 from Jones (1998), questions 4 and 6 from Baumert and McWhinney (1992), and question 5 from Li (1997). Among the questions proposed by these authors, three questions (Li, 1997) have been discarded because they are somewhat complex and not fully related to the Requirements Management activity.

Goal 2: Software plans, products, and activities are kept consistent with the system requirements allocated to software.

1. Does the software product satisfy the requirements?
   Metrics: Functionality of the software; number of initial requirements; number of final requirements; number of tests per requirement; type of change to requirements.
2. What is the impact of requirements changes on the software project?
   Metrics: Kind of function point per requirement; effort expended on the Requirements Management activity; time spent on upgrading; number of documents affected by a change.
3. What is the status of the changes to software plans, work products, and activities?
   Metrics: Status of software plans, work products, and activities.
4. Are the requirements scheduled for implementation in a particular release actually addressed as planned?
   Metrics: Number of requirements scheduled for each software build or release.
5. How are the requirements defined and documented?
   Metrics: Type of documentation.
6. Does the number of TBDs prevent satisfactory completion of the product?
   Metrics: Number of TBDs in requirements specifications.
7. Are all development work products consistent with the requirements?
   Metrics: Number of inconsistencies.

Table 2: Questions and metrics for the second goal of the Requirements Management KPA.
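Several questions for the second goal, in particular question 7, presuppose some traceability from each requirement to the work products that implement it. The sketch below assumes a hypothetical traceability table maintained by the project (its structure is illustrative, not mandated by the CMM); a requirement without a trace to a design element or a test case is counted as an inconsistency, giving a crude value for the "number of inconsistencies" metric.

# Minimal sketch of a traceability check between requirements and work
# products. The trace table is a hypothetical project artefact.
trace = {
    # requirement id -> work products that cover it
    "REQ-1": {"design": ["SDD-3.1"], "tests": ["TC-12", "TC-13"]},
    "REQ-2": {"design": ["SDD-3.2"], "tests": []},
    "REQ-3": {"design": [],          "tests": []},
}

# A requirement with no trace to a design element or to a test case is
# counted here as an inconsistency between work products and requirements.
untraced = [req for req, links in trace.items()
            if not links["design"] or not links["tests"]]
number_of_inconsistencies = len(untraced)
print(untraced, number_of_inconsistencies)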

4.3 Metrics for the Goals of the Requirements Management KPA
The third step is to define sets of metrics that provide the quantitative information necessary to answer the questions. The metrics are shown in Tables 1 and 2. There are overlaps between the questions for the two goals and between the metrics; the same metric can serve to answer different questions. Some metrics are numerical, others are nominal. The list of metrics provided above is not a complete list of metrics for the Requirements Management KPA; more questions could be asked and more metrics could be produced.
These metrics provide the organisation with improved visibility of the level at which the Requirements Management process is currently operating. A level 1 organisation most probably has poorly defined requirements. Therefore, it is suggested that such an organisation count the number of requirements and the changes to those requirements in order to establish a baseline. If the process is repeatable, more information on requirements can and should be collected, such as the type of each requirement (database requirement, interface requirement, performance requirement, etc.) and the changes to each type. In general, the metrics collection will vary with the maturity of the process (Fenton and Pfleeger, 1996).
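As a small illustration of the richer data collection suggested for a repeatable process, the sketch below groups requirement changes by requirement type; the identifiers and type labels are hypothetical.

# Minimal sketch: grouping requirement changes by requirement type, as
# suggested above for a repeatable process. All data are hypothetical.
from collections import Counter

requirement_types = {"REQ-1": "interface", "REQ-2": "performance", "REQ-3": "database"}
changed_requirements = ["REQ-1", "REQ-1", "REQ-3"]  # ids taken from the change log

changes_by_type = Counter(requirement_types[r] for r in changed_requirements)
print(changes_by_type)  # Counter({'interface': 2, 'database': 1})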

 &RQFOXVLRQVDQG)XWXUH:RUN
An application of the GQM to the Requirements Management KPA has been reported.
The set of questions and metrics presented should be tailored to the particular
organisation. All level 1 companies that want to improve the Requirements Management
activity could find a subset of these metrics especially useful to start with. Repeating the same approach in another environment may lead to different measurement needs, and the study therefore cannot be regarded as objective. Furthermore, there are no rules to be followed to terminate
the GQM process. The GQM approach only provides guidelines for finding metrics.
The metrics produced provide better insight into the Requirements Management activity, advancing a small step towards the goal of a repeatable process. The results can be placed in major areas of study such as Requirements Management, measurement, Software Process Improvement, and software quality.
The author of this paper will propose the metrics, obtained as a result of the
described approach, to software development companies, and use them for the elicitation
of requirements information. Based on the data collected, the author will give suggestions
for improvement of the Requirements Management activity. A future aim is also to analyse the goals in more detail and to implement a metrics plan. Finally, the author will also apply the GQM paradigm to all the KPAs of all the CMM levels.

6 Acknowledgements
Special thanks to my supervisor Jürgen Börstler for his suggestions and contribution to
this paper, to Lena Palmquist, Stefano Salmaso, and Johan Fransson for their guidance at
the peer meetings, to Åsa Sundh for improving the English of this paper, and to all the
people who have supported me and contributed to my work.

References
(Basili and Rombach, 1988) Basili, V.R. and Rombach, H.D.: The TAME project: Towards improvement-oriented software environments, IEEE Transactions on Software Engineering, 14(6), pp. 758-773, 1988.

(Baumert and McWhinney, 1992) Baumert, J.H. and McWhinney, M.S.: Software Measures and the Capability Maturity Model, Software Engineering Institute Technical Report, CMU/SEI-92-TR-25, ESC-TR-92-0, 1992.

(Fenton and Pfleeger, 1996) Fenton, N.E. and Pfleeger, S.L.: Software Metrics – A Rigorous & Practical Approach, 2nd Edition, International Thomson Publishing, Boston, MA, 1996.

(Goodbrand and Wang, 1997) Goodbrand, A.D. and Wang, Q.: Software Measurement Plan for the Requirements Management Key Process Area of the Capability Maturity Model for SENG623 Inc., March 1997, SE-623-01, http://www.cpsc.ucalgary.ca/~alang/SENG623/, (04 April 2000).

(Hammer et al., 1998) Hammer, T.F., Huffman, L.L. and Rosenberg, L.H.: Doing Requirements Right the First Time, CrossTalk, pp. 20-25, Dec 1998.

(Jones, 1998) Jones, B.C.: Requirements Management Measurement Plan Prepared For A.D. Ho & Company, March 1998, http://www.cpsc.ucalgary.ca/~jonesb/seng/623/requirementsPlan.html, (04 April 2000).

(Li, 1997) Li, B.: Metrics For CMM Level 2, http://www.enel.ucalgary.ca/~lib/sum623.html, (04 April 2000).

(Orci, 1999) Orci, T.: Software Process Improvement or Measurement Programme – Which One Comes First?, Proceedings FESMA, The 2nd European Software Measurement Conference, Amsterdam, the Netherlands, pp. 127-140, Oct 1999.

(Paulk et al., 1993a) Paulk, M.C., Curtis, B., Chrissis, M.B. and Weber, C.V.: Capability Maturity Model for Software, Version 1.1, Software Engineering Institute Technical Report, CMU/SEI-93-TR-24, ESC-TR-93-177, Pittsburgh, PA, 15213-3890, USA, 1993.

(Paulk et al., 1993b) Paulk, M.C., Weber, C.V., Garcia, S., Chrissis, M.B. and Bush, M.: Key Practices of the Capability Maturity Model, Version 1.1, Software Engineering Institute Technical Report, CMU/SEI-93-TR-25, ESC-TR-93-178, Pittsburgh, PA, 15213-3890, USA, Feb 1993.

(Pfleeger, 1998) Pfleeger, S.L.: Software Engineering: Theory and Practice, Prentice Hall, Upper Saddle River, New Jersey, 1998.

(Raynus, 1999) Raynus, J.: Software Process Improvement with CMM, Artech House Publishers, Boston, 1999.

(Rosenberg and Hyatt, 1996) Rosenberg, L.H. and Hyatt, L.: Developing an Effective Metrics Program, European Space Agency Software Assurance Symposium, the Netherlands, March 1996.

(Zubrow, 1998) Zubrow, D.: Measurement with a Focus: Goal-Driven Software Measurement, CrossTalk, pp. 24-26, Sep 1998.
