Khaled El Emam
Nazim H. Madhavji
0-8186-7017-7/95 $04.00 © 1995 IEEE
concept and of indicators for its assessment. Section 4 constitutes a detailed account of the results, and section 5 concludes the paper.

2 Theoretical Foundations

Previous software engineering works that have identified indicators for the assessment of RE success were reviewed and summarized in our base report [12]. The purpose of the review was to provide a sound theoretical foundation for our instrument development efforts.

It was evident from this review that the predominant conceptualization of RE success in the software engineering literature has been the quality of RE products. Thus, a vast majority of the proposed/used indicators are assessing some aspect of RE product quality. Example indicators are: the extent to which every requirement stated in the requirements specification has only one interpretation, the extent to which everything the software is supposed to do is included in the specification, and the number of detected errors in a requirements document that are categorized as being inclusions of implementation facts.

Although there is a predominant focus on the quality of RE products as the primary dimension of RE success, there are other dimensions: one concerned with user/customer satisfaction, and one concerned with productivity/cost-effectiveness of the RE process.

3 Research Method

In this research, the RE process studied was the requirements engineering phase of a software system development method. The method (henceforth referred to as method X) has been developed and is marketed by company Y. Company Y is an information systems consultancy firm based in Canada with clients worldwide. The ultimate objective of the RE phase of method X is to determine the cost-effectiveness of the information system to be developed and make a go/no-go decision based on it.

The research method followed in the study presented here draws from both the normative and the descriptive literature. The normative literature prescribes the procedures to be used in instrument development, for example [16][18][21]. The descriptive literature specifies the procedures used by particular authors for developing instruments, for example [2][9][15]. This research method consisted of two steps: (a) define the RE success content domain, and (b) instrument development and pretest.

(Footnote: the total does not add up to 100%. For example, some senior analysts take on the role of a project manager.)

3.1 Define the RE Success Content Domain
Figure 1 shows the informant characteristics for some of the other steps of the research method; these will be discussed below. The outcome of the first step was a set of 34 criteria for the assessment of RE success. These criteria served as inputs to the following steps.

Subsequently, we checked the completeness of the criteria, identified the dimensions of RE success, and prioritized the criteria tapping each dimension. For this, ten experts were interviewed. The characteristics of these ten experts are summarized in the column labeled "Cat. & Prio." of Figure 1.

The interviewees were initially requested to put the 34 RE success criteria into two or more categories such that the criteria in each category were most similar to each other and most dissimilar from the other categories. The criteria were randomly ordered before each interview. The interviewees were subsequently requested to provide an interpretation of each category, and to comment on its completeness and the existence of any overlapping criteria (i.e., criteria that were perceived to be exactly the same).

For the prioritization task, the interviewees were requested to rank order the criteria in each category in terms of how well they assess their interpretation of the category. For example, if the category was interpreted as "user satisfaction", then the interviewee was requested to rank order the criteria in terms of how well they assess "user satisfaction".

The results from all the ten interviews were initially cluster analyzed, and subsequently prioritized. For the cluster analysis, a distance matrix was constructed. The distance matrix was derived from an incidence matrix as follows: d_ij = 1 - s_ij.

For each criterion, the total proportion of interviewees who have ranked it higher than all the other criteria in the same cluster was used as a preference measure. This measure is a direct estimate of an ordinal scale, and hence can be used for the purposes of ranking.

A survey was then conducted to prioritize the dimensions of RE success. The dimensions were the outcome of the categorization performed during the step described above. For the survey, a questionnaire was given to 25 senior company Y consultants worldwide. A total of 18 responses were received, giving a response rate of 72%. A summary of respondent characteristics is presented in Figure 1 under the heading "Survey". The questionnaire, among other things, requested that the respondents rank order the dimensions of RE success in terms of how important each is perceived to constitute an indicator of overall RE success.

The rank orderings from the survey were transformed into a preference measure using the total proportion of respondents who have ranked a dimension higher than the other dimensions. The data analysis from this survey indicated which dimension of RE success is considered the most important, and which the least important.

3.2 Instrument Development and Pretest

The starting point for instrument development was the 34 criteria that were already formulated. For this instrument, a semantic differential scale was utilized [19]. This scale consists of a concept and adjective pairs at the extremes of a 7-point scale. Each adjective pair will be referred to as an item.

Each criterion was converted to one or more concepts. Two items were developed for each concept.

[Figure 1: Summary of informant characteristics (background & location) by research study activity.]
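The incidence-to-distance transformation just described can be sketched in a few lines. This is a hypothetical illustration: the criteria names and partitions are invented, and s_ij is assumed here (its definition is not legible in this copy) to be the proportion of interviewees who placed criteria i and j in the same category.

```python
from itertools import combinations

# Invented data: each interviewee partitions four criteria (A-D) into
# categories of perceived similarity.
partitions = [
    [["A", "B"], ["C", "D"]],
    [["A", "B", "C"], ["D"]],
    [["A"], ["B", "C", "D"]],
]
criteria = ["A", "B", "C", "D"]
n = len(criteria)

# Incidence matrix: s[i][j] is assumed to be the proportion of
# interviewees who placed criteria i and j in the same category.
s = [[0.0] * n for _ in range(n)]
for p in partitions:
    for cat in p:
        for a, b in combinations(cat, 2):
            i, j = criteria.index(a), criteria.index(b)
            s[i][j] += 1 / len(partitions)
            s[j][i] += 1 / len(partitions)

# Distance matrix, as in the text: d_ij = 1 - s_ij.
d = [[0.0 if i == j else 1.0 - s[i][j] for j in range(n)]
     for i in range(n)]
```

A standard hierarchical clustering routine [1] can then be run on d: criteria that interviewees frequently grouped together (A and B above) end up close, while criteria never grouped together sit at the maximum distance of 1.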
covering a particular concept. Standardized instructions were also included with each instrument.

The initial instrument was administered to collect data on the success of RE processes. A total of 32 data points were collected, each data point representing a particular RE process. The characteristics of the respondents and the RE processes are summarized in Figure 2.

As part of instrument pretest, the reliability, construct validity, and effectiveness of the instrument were evaluated. Each of these is defined below.

Reliability is defined as the extent to which an experiment, test, or any measuring procedure yields the same results on repeated trials, and is concerned with the problem of random measurement error [18]. The reliability of the instrument was evaluated using the Cronbach alpha coefficient [8].

Construct validity is an operational concept that asks whether the scales chosen are describing the true construct(s) [18]. In this study, the construct is RE success. Construct validity includes two other concepts: convergent validity and discriminant validity. Convergent validity determines whether the scales chosen are measuring one underlying construct. Discriminant validity determines whether a scale differentiates between constructs. Construct validity of the instrument was evaluated using principal components analysis and item-total correlations [18]. For item-total correlations, each scale score was subtracted from the total to avoid a spurious part-whole correlation, and the correlation of each scale with the new score was computed.

Effectiveness refers to the extent to which a scale is measuring a construct relative to the other scales that are measuring the same construct. Attaining a reasonable level of effectiveness is important so as not to have a lengthy instrument. Multiple criteria were utilized to determine effectiveness. The purpose was to eliminate concepts from the instrument without negatively affecting its reliability and validity. The following criteria were utilized:

1. Using the priorities assigned in an earlier step: if a concept was tapping a dimension that was considered to be of low priority, then it was a strong candidate for elimination. If a concept had a low priority in tapping a particular dimension, then it was a strong candidate for elimination.

2. Using the results of reliability analysis: if removal of a concept from the instrument resulted in a large increase in the value of Cronbach alpha, then it was a strong candidate for elimination.

3. Using the results of principal components analysis: if a concept loaded relatively low or did not load on its associated factor, then it was a strong candidate for elimination.

dimensions of RE success that have been derived, (b) the priority of each criterion and dimension within the RE success content domain, and (c) the instrument and its characteristics based on the pretest study.

[Figure 2 (excerpt): Functional Area of Information System — Finance 22%, Purchasing 15.6%, Sales/Marketing 12.5%, Inventory Control & Planning 12.5%, Transportation/Logistics 12.5%, Other 25%.]
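The reliability and item-total computations described above can be sketched as follows. The scores are invented (five hypothetical respondents scoring four scales of one construct on the 7-point semantic differential); the alpha formula is the standard one from [8], and the item-total correction is the subtract-the-scale procedure described in the text.

```python
# Invented 7-point semantic-differential scores:
# rows = respondents, columns = scales tapping one construct.
scores = [
    [6, 5, 6, 5],
    [4, 4, 5, 4],
    [7, 6, 6, 7],
    [3, 4, 3, 3],
    [5, 5, 6, 5],
]
k = len(scores[0])

def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs) ** 0.5
                  * sum((y - my) ** 2 for y in ys) ** 0.5)

# Cronbach's alpha: k/(k-1) * (1 - sum of scale variances / total variance).
scale_vars = [variance([row[j] for row in scores]) for j in range(k)]
alpha = (k / (k - 1)) * (1 - sum(scale_vars)
                         / variance([sum(row) for row in scores]))

# Corrected item-total correlations: each scale is correlated with the
# total minus that scale, avoiding a spurious part-whole correlation.
corrected = [pearson([row[j] for row in scores],
                     [sum(row) - row[j] for row in scores])
             for j in range(k)]
```

With these invented scores alpha comes out above 0.9; by convention, values above roughly 0.7-0.8 are usually taken as acceptable for an instrument of this kind.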
M1 The awareness of users of the business changes required in order to implement the recommended solution.
M2 The clarity of the links between the (process^a and data) models and the system objectives^b.
M3 The clarity of the business process in the architecture.
M4 The thoroughness with which solutions alternative to the recommended solution were explored.
M5 The cost and effort compared to other similar requirements engineering phases in the same organization or similar organizations.
M6 The amount of changes to the RE documentation.
M7 The users' reaction to the cost estimate.
M8 The willingness of the users to defend the recommended solution in front of executive management.
M9 The completeness of coverage of the cost/benefits analysis.
M10 The fraction of the cost of the requirements engineering phase compared to the (estimated) total system development cost.
M11 The amount of deliverables that were not used in formulating the recommended solution and the cost/benefits analysis.
M12 The amount of benefits that are expected to be brought to the organization by implementing the recommended solution compared to alternative solutions.
M13 The clarity of the links between the (process^a and data) models and the key issues^c.
M14 The adequacy of the diagnosis of the existing system.
M15 The soundness of the approach(es) taken to quantify the intangible benefits.
M16 The clarity of the links between the weaknesses and strengths of the existing system and the weaknesses and the strengths of the recommended solution.
M17 The extent to which the (process^a and data) models conform to the rules of modeling.
M18 The extent to which key issues^c have been resolved.
M19 The extent to which the users have understood what the new system will do and will not do.
M20 The extent of user consensus on the recommended solution.
M21 The extent to which top management is convinced that the expected benefits are likely to materialize.
M22 The relationship between the users and the requirements engineering staff.
M23 Whether the users have approved all the documentation.
M24 The fit between the available funding profile and the necessary funding profile to implement the recommended solution.
M25 The fit between the architecture and the way the users work.
M26 The ability of the organization to make the necessary changes to implement the recommended solution.
M27 The fit between the recommended solution and the strategic orientation of the organization.
M28 The fit between the recommended solution and the technical orientation of the organization.
M29 The willingness of the organization to make the necessary changes to implement the recommended solution.
M30 The accuracy of the cost estimates compared to the accuracy required by the organization.
M31 The degree of top management support for changes necessary to implement the recommended solution.
M32 The fit between the system architecture and the corporate architecture or information plan.
M33 The degree of match between the functionality of the 1st release of the software system and user expectations.
M34 The extent to which the presentation of the cost/benefits analysis follows the accounting procedures of the organization.

^a A process model includes processes of the software system (functions) as well as manual work activities of the business process.
^b These are objectives of the system, not objectives of the software development project, and they are supposed to be specific and measurable. Example objectives are "to improve the accuracy of the planning process by 10%", "to provide the ability to find a volume in the automated catalogue in less than 5 seconds, 95% of the time", and "to satisfy 95% of the requests for bibliographic information within 5 minutes".
^c Key issues are issues that concern the system, not the project. They are critical aspects of the system that require examination; without their resolution the system cannot be completely defined and developed. When an issue can be resolved without requiring a formal examination of alternatives by the decision makers, then it is not a key issue. Example key issues are "the system must be simple and inviting to use and must enable members to find answers to their queries easily. This is to ensure member acceptance of the system. Staff assistance should be required only in exceptional cases.", and "services such as consultation, location of titles, and processing of loans and reservations must continue in the event of malfunctions or failure of the automated system. This is to ensure continuity of service".
[Figure 4 (diagram). Recoverable labels and preference values: RE Success — Quality of RE Service (1.438): M5 (1.599), M10 (0.900), M6 (0.801), M11 (0.300); Quality of RE Products, with sub-dimensions: Quality of Architecture: M2 (3.402), M3 (2.702), M14 (2.597), M13 (2.499), M16 (2.198), M4 (1.799), M18 (1.498), M15 (0.702), M17 (0.399); Quality of Cost/Benefits Analysis: M9 (2.700), M21 : M24 (1.902), M12 (1.698), M30 (1.398), M34 (0.789); User Satisfaction and Commitment: M19 (3.904), M8 (3.800), M20 (3.304), M1 (2.600), M22 (1.800), M7 (1.600), M25 : M23 (1.400), M33 (1.104); Fit of Recommended Solution with the Organization: M26 (1.600), M27 (1.500), M31 (1.200), M29 (0.900), M32 : M28 (0.600).]
Figure 4: The dimensions of RE success, the priorities of the criteria tapping each dimension and the priorities of each dimension in tapping RE success.
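Two computations sit behind the numbers in Figure 4 and the test reported below: the preference measure and Page's L statistic [20]. The sketch below uses invented rank orderings and placeholder dimension names; the preference measure is read here as the pairwise proportions summed over the other items, which is consistent with values in Figure 4 exceeding 1.

```python
# Invented rank orderings of the three RE success dimensions
# (position 0 = ranked most important); names are placeholders.
orderings = [
    ["service", "products", "cost"],
    ["service", "cost", "products"],
    ["products", "service", "cost"],
    ["service", "products", "cost"],
]
dims = ["service", "products", "cost"]
n = len(orderings)

# Preference measure: for each dimension, sum over the other dimensions
# of the proportion of respondents ranking it higher.
pref = {
    a: sum(sum(o.index(a) < o.index(b) for o in orderings) / n
           for b in dims if b != a)
    for a in dims
}

# Page's L for the ordered hypothesis service > products > cost:
# L = sum of c_j * R_j, where c_j is the hypothesized rank and R_j the
# rank sum across respondents; large L supports the hypothesized ordering.
ranks = [[o.index(d) + 1 for d in dims] for o in orderings]  # 1 = highest
R = [sum(row[j] for row in ranks) for j in range(len(dims))]
L = sum(c * r for c, r in zip([1, 2, 3], R))
```

By the rearrangement inequality, L is maximized when the observed rank sums fall in the hypothesized order, which is why a large L supports the ordered alternative.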
The criteria in Figure 4 are listed in priority order, with the highest priority criteria at the top. A colon in Figure 4 indicates that the two criteria are at the same priority. This prioritization indicates how well, compared to other criteria, a particular criterion is a measure of the cluster's interpretation. Next to each criterion is the preference measure value that was used for prioritization. It may be recalled that the total proportion of respondents who have ranked a criterion higher than other criteria in the same cluster was used as a preference measure.

The data from the survey, which asked respondents to rank order the three dimensions of RE success, were used to determine whether there was any difference among the perceived importance of the dimensions of RE success. Figure 4 shows the preference measures for each of these dimensions. It is evident that quality of RE service is perceived to be the most important dimension of RE success (with a preference measure of 1.438), and the cost-effectiveness of the RE process is perceived to be the least important dimension of RE success (with a preference measure of 0.438).

To test the null hypothesis that the three dimensions of RE success are of equal importance versus the alternative hypothesis that the three dimensions of RE success are ordered in the specific sequence described above, a nonparametric statistic is used. The particular statistic is L [20]. The test was conducted at α = 0.01, and resulted in rejecting the null hypothesis, hence further supporting the ordering presented above.

[Summary statistics — Quality of RE Service: mean 77.6333, std. dev. 26.6102; Quality of RE Products: mean 79.6250, std. dev. 16.7898.]

4.3 RE Success Instrument

In particular, it was identified that concepts covering the sub-dimensions "fit of the recommended solution with the organization" (M27, M26, M29, M31) and "user satisfaction and commitment" (M1, M8, M20, M22) load highly on the first factor and have low loadings on the second factor. Furthermore, concepts covering the "quality of architecture" (M16, M14, M13, M2) and "quality of cost/benefits analysis" (M9a, M9b, M12, M30) load highly on the second factor and have low loadings on the first factor. These results demonstrate good convergent and discriminant validities.

The factor loadings shown in Figure 5 are in general very high, lending reasonably strong support to the instrument's construct validity. It should be noted, however, that construct validity cannot be claimed until these same results are replicated in subsequent studies. The results presented here provide some initial evidence supporting construct validity, and hence encourage further studies.

Further evidence of construct validity was obtained from the results of item-total correlations. Overall, 15 out of 16 correlations are above 0.4 and significant at an alpha level of 0.05.

5 Conclusions

The research study described in this paper was based on the premise that it is critical to understand
the RE process in order to improve it. A central contributor to this understanding is the development of an instrument for measuring RE success. An extensive research investigation described in this paper has resulted in such an instrument. This is, as far as we know, the first comprehensive research effort that has resulted in an RE success instrument.

The potential applications of these results are discussed from the research and practice perspectives. From the research perspective, the concern is primarily with advancing the state of knowledge about RE success. The primary significance of this work for the research community is therefore the existence of a general and standardized instrument for measuring RE success. It is now an easier task for researchers to conduct studies (for example, using survey or experimental empirical research methods) whose purpose is to test hypotheses (such as those reported in [13]) about the effect of existing and emerging practices and tools on RE success.

From the practice perspective, the concern is primarily with improving RE practices in order to attain greater RE success. The instrument for measuring RE success may be applied by practitioners, for example, in the evaluation of RE phase pilot projects. Thus, if an organization is adopting a new requirements engineering method, then the outcome of a pilot project can be evaluated and compared to the baseline value of RE success that is more common within the organization. This would allow management to gauge the benefits of the new method.

While the above list of the applications of this research may not be comprehensive, it is contended that they address important contemporary issues of concern to both researchers and practitioners.

As for future research, greater confidence in this instrument would be established if the evidence supporting its reliability and validity can be replicated in studies conducted by other researchers. It is expected that the instrument can be improved by further testing, for example, by determining the test-retest reliability of the instrument, and determining its reliability and validity with different samples.

References

[1] M. Aldenderfer and R. Blashfield. "Cluster analysis". Sage Publications, 1984.
[2] J. Bailey and S. Pearson. "Development of a tool for measuring and analyzing computer user satisfaction". Management Science, 29(5):530-545, May 1983.
[3] V. Basili and B. Perricone. "Software errors and complexity: An empirical investigation". Communications of the ACM, 27(1):42-52, January 1984.
[4] V. Basili and D. Weiss. "Evaluation of a software requirements document by analysis of change data". In Proceedings of the Fifth International Conference on Software Engineering, pages 314-323, 1981.
[5] B. Boehm. "Software engineering economics". Prentice Hall, 1981.
[6] F. Brooks. "No silver bullet: Essence and accidents of software engineering". Computer, 20(4):10-19, April 1987.
[7] D. Cordes and D. Carver. "Evaluation method for user requirements documents". Information and Software Technology, 31(4):181-188, May 1989.
[8] L. Cronbach. "Coefficient alpha and the internal structure of tests". Psychometrika, 16(3):297-334, September 1951.
[9] F. Davis. "Perceived usefulness, perceived ease of use, and user acceptance of information technology". MIS Quarterly, 13(3):319-340, September 1989.
[10] A. Davis. "Software requirements: Objects, functions, and states". Prentice Hall, 1993.
[11] J. Dawson. "Toronto laboratory requirements process reference guide". Technical report (unpublished), IBM Canada Ltd. Laboratory, 1991.
[12] K. El Emam and N. H. Madhavji. "An instrument for measuring the success of the requirements engineering process in information systems development". Technical Report SE-94.12, School of Computer Science, McGill University, 1994.
[13] K. El Emam and N. H. Madhavji. "A field study of requirements engineering practices in information systems development". In Proceedings of the Second IEEE International Symposium on Requirements Engineering, 1995.
[14] B. Farbey. "Software quality metrics: Considerations about requirements and requirements specifications". Information and Software Technology, 32(1):60-64, January/February 1990.
[15] B. Ives, M. Olson, and J. Baroudi. "The measurement of user information satisfaction". Communications of the ACM, 26(10):785-793, October 1983.
[16] F. Kerlinger. "Foundations of behavioral research". Holt, Rinehart, and Winston, 1986.
[17] G. Miller. "A psychological method to investigate verbal concepts". Journal of Mathematical Psychology, 6:169-191, 1969.
[18] J. Nunnally. "Psychometric theory". McGraw Hill, 1967.
[19] C. Osgood, G. Suci, and P. Tannenbaum. "The measurement of meaning". University of Illinois Press, 1967.
[20] E. Page. "Ordered hypotheses for multiple treatments: A significance test for linear ranks". Journal of the American Statistical Association, pages 216-230, March 1963.
[21] D. Straub. "Validating instruments in MIS research". MIS Quarterly, pages 147-169, June 1989.
[22] M. Wu. "Selecting the right software application package". Journal of Systems Management, pages 28-32, September 1990.
[23] C. Zagorsky. "Case study: Managing the change to CASE". Journal of Information Systems Management, pages 24-32, Summer 1990.