
Why Are CASE Tools Not Used?

Juhani Iivari

The author examines attitudes toward CASE usage in a wide variety of organizations.
CASE (Computer-Aided Software/Systems Engineering) tools are claimed to increase information systems and software development effectiveness in terms of the productivity of systems development and the quality of the developed systems. However, the results of prior research on CASE adoption [1, 6, 14, 17] are to some extent paradoxical. While most research reports positive rather than negative impact on the quality of developed systems, and to a lesser extent on the productivity of the development process, the actual use of CASE technology has been much less than one would expect. Kemerer [13], for example, reports that one year after introduction, 70% of CASE tools are never used, 25% are used by only one group, and 5% are widely used, but not to capacity. Aaen et al. [1], based on a survey of 102 CASE user organizations in Denmark and Finland, report that less than 20% of the organizations were close to routine users of CASE tools, even though 24% had used them for more than three years. About half the respondents had used the tools for two projects or less, and in the majority of the organizations, less than one-fourth of the analysts used CASE tools. Kusters and Wijers [14] found in their analysis of 262 Dutch organizations that in 20% of the organizations more than 75% of analysts used CASE tools on a regular basis, and in 18% the rate was between 51% and 75%. Even so, in 37% of the organizations, 25% or less of the analysts used CASE tools on a regular basis.

Besides the waste of the initial investment in CASE tools, the low rate is alarming because most empirical studies indicate positive rather than negative impact of CASE tools on systems development effectiveness. Norman and Nunamaker [17] reported that software engineers generally perceived their productivity as improved by the use of CASE technology. Banker and Kauffman [2] found an order-of-magnitude increase in software development productivity in their analysis of 20 projects that applied a CASE tool emphasizing reusability. Aaen et al. [1] found that about 72% of their sample perceived that the objectives of improved quality of systems, improved systems development procedures, and increased standardization were met to a significant degree, but about 60% of respondents assessed the objective of improved productivity as met only to a minor degree; 47% of the respondents indicated good profitability of CASE investments, while 31% assessed it as poor. Kusters and Wijers [14] reported that CASE tools realized to a reasonable degree objectives such as quality of documentation, consistency of developed systems products, standardization, adaptability, and system quality, the remaining objectives being realized only slightly. Finlay and Mitchell [6] found an 85% increase in productivity and a 70% reduction in delivery time when the Information Engineering methodology was introduced with an associated CASE tool in one British firm.

Assuming CASE adoption decisions are based on rational arguments concerning the impact on systems quality and systems development productivity, one could expect organizational CASE use to be higher. One can imagine a host of explanations for this discrepancy. One explanation is that CASE tools are relatively expensive, and associated training costs may exceed the original price of the tool [11]. Another explanation might be that the benefits of CASE do not accrue uniformly to all stakeholder groups affected, leading to resistance by some groups [19]. The paradox may also be explained by the theoretical and methodological weaknesses of earlier CASE research. To our knowledge, there is no piece of research that takes CASE usage into account when studying CASE impacts. It is obvious a CASE tool cannot make an impact if it is not used. The research reviewed indicates CASE use varies considerably from organization to organization. This variation in CASE usage may also explain some of the variation in the results concerning CASE effectiveness. One should also note that, with a few exceptions (e.g., [21, 22]), CASE research has paid scant attention to the reliability and validity of the measures used, also decreasing the reliability of the findings. Most of the empirical studies on CASE impacts are based on subjective, perceptual data. Moore and Benbasat [16] contend that subjective perceptions provide a sounder basis for theory development than more objective data.

Based on retrospective, perceptual data obtained from 105 respondents, the study reported in this article seeks to shed light on factors affecting CASE usage. Compared with earlier CASE research, it pays special attention to the reliability and validity of the measures used. The factors include participation in CASE selection and implementation planning, management support, training, realism of expectations, perceived complexity, compatibility and relative advantage of the CASE tool, and voluntariness of CASE use. This article also analyzes the relationship between CASE usage and CASE impacts. The selection of factors is partly influenced by the fact that they are reasonably controllable during the CASE adoption process.¹

¹Observe that behavioral variables such as participation and usage were not measured in terms of the personal behavior of the respondent but in more organizational-level terms. This implies that one cannot simply generalize the finding that perceptual measures of behavior (e.g., self-reported use) and more objective measures of behavior (e.g., system-monitored use) differ significantly from each other [24] to the present case.

[Figure 1. The conceptual model and hypothesized associations in this article. Participation (H1), management support (H2), training (H3), expectation realism (H4), complexity (H5), compatibility (H6), relative advantage (H7), and voluntariness (H8) are hypothesized to affect CASE usage, which in turn is hypothesized to affect CASE effectiveness (H9).]
Theoretical Background and Research Hypotheses

With a few exceptions ([19, 21, 25]), earlier research on CASE adoption has mainly been descriptive, without offering theoretical orientation or attempting to explain factors affecting the adoption. This article draws on research on information systems (IS) implementation (see [15] for a review)² and on innovation diffusion/adoption theory (see [23] for a classic treatise). CASE tools are viewed as information systems that mainly support IS and software professionals in their work. Accordingly, the results of IS implementation research are applied to CASE adoption as a first approximation.³ When an innovation is interpreted as an idea, practice, or object that is perceived as new by an individual or other unit of adoption ([23], p. 11), CASE tools can be considered a class of innovations. Consequently, innovation diffusion/adoption theory is adopted as a reference theory that applies to IS implementation and CASE adoption.

²Implementation in this context is used in the sense of organizational implementation (adoption) rather than technical implementation.

³It is recognized, however, that CASE tools are exceptional as information systems. IS and software professionals as users can be expected to differ from ordinary users because they are experts in information technology. CASE tools are mostly acquired as commercial software packages, while prior implementation research has mainly focused on the implementation of in-house developed systems. Because of these differences, CASE tools can be considered a special instance that can be used to test the limits of prior IS implementation research. Therefore, in addition to increasing understanding of the CASE adoption process, the present study tests to what extent the results of prior IS implementation research are directly applicable to CASE.

The innovation diffusion/adoption theory has recently aroused considerable interest in the IS community (see [20] for a comprehensive review). Fichman [5] notes that the classical innovation diffusion theory [23] was developed mainly by looking at the adoption of relatively simple innovations by individuals making autonomous choices that do not require extensive specialized knowledge about the innovation before adoption. He recognizes four trends in recent innovation theory: organizational adoption, adopter interdependencies, managerial influence on adoption, and knowledge barriers to adoption. CASE technology seems to capture aspects of all of these trends. A CASE tool is a contingent innovation [23] in which the organization is the primary adopting unit. Only after the decision to adopt at the organizational level do individuals get access to CASE in their work. CASE tools include user interdependencies in two respects. First, since a considerable part of IS and software development takes place as projects, coordinated use of CASE tools in projects seems vital for their effective utilization. One could say projects are the secondary adopting units, and IS analysts and software engineers are only tertiary adopters. Second, CASE tools intimately intertwine with the organizational routines of IS and software development. One could expect managerial influence to be critical in CASE adoption because there are multiple levels involved in the adoption process. Finally, it is obvious that CASE tools form relatively complex technology, imposing considerable knowledge barriers to adoption, even though IS and software professionals as their users can be expected to affect the knowledge barrier.

Fichman [5] also proposes a framework for information technology (IT) diffusion research that distinguishes both the locus of adoption (individual vs. organization) and the class of technology (type 1 technology with low knowledge burden and low user interdependence vs. type 2 technology with high knowledge burden or high user interdependence). His message is that classical innovation diffusion theory has given the most conclusive results in the case of individual adoption of type 1 innovations, while the most valuable applications of diffusion theory fall in the category of organizational adoption of type 2 innovations.

Within this framework, this article addresses an interesting class of innovations: CASE technology as an example of organizational adoption of type 2 innovations. It focuses on CASE usage (i.e., the use, routinization, and infusion of CASE technology once the CASE tool has been selected and acquired by an organization). The focus lies in factors that may explain CASE usage within these organizations. The conceptual model is depicted in Figure 1, which identifies three groups of factors potentially affecting CASE usage: (1) development process factors based on IS implementation research (participation, management support, training, and expectation realism); (2) CASE innovation characteristics (complexity, compatibility, and relative advantage) adopted from the innovation diffusion literature [23]; and (3) voluntariness, a factor close to the managerial influence emphasized in [5].

Figure 1 also suggests a number of hypotheses about the factors affecting CASE usage and CASE effectiveness (a + sign denoting a hypothesized positive association and a - sign a negative association). The relationship between user participation and IS success is one of the most investigated topics in IS research. Participation in Figure 1 refers to the participation of the IS and software professionals in the selection of the CASE tool and in the planning of the organizational implementation (adoption) of a CASE tool. Although earlier results have been quite contradictory, recent research has found a positive association between user participation and IS success. Besides user participation, management support is the factor most consistently reported to facilitate IS implementation. Lucas [15] suggests two explanations for its significance: first, management controls the resources needed for systems development and use, and second, management support gives clues about what behaviors management is trying to encourage. In the case of training, it seems almost trivial to assume that the quality and quantity of training are associated with CASE usage. Expectation realism has also been found to be relevant both in IS implementation and in CASE adoption, unrealistically high expectations easily leading to dissatisfaction with the system.

Figure 1 also includes three CASE innovation characteristics (complexity, compatibility, and relative advantage) adopted from the innovation diffusion literature. Rogers defines complexity as the degree to which an innovation is perceived as difficult to understand and use; compatibility as the degree to which an innovation is perceived as being consistent with existing values, past experiences, and needs of potential adopters; and relative advantage as the degree to which an innovation is perceived as better than its precursor. Voluntariness describes the degree to which the use of an innovation is perceived as being voluntary, or of free will [16]. It is frequently considered in IS implementation research as a dichotomous variable stating whether system use is voluntary or mandatory, the argument being that use is not an appropriate measure of implementation success in the case of mandatory systems [15]. Otherwise there is a paucity of research on the influence of voluntariness on IS usage. As discussed previously, Fichman [5] emphasizes managerial influence, which is close to voluntariness, as one trend of recent research on innovation adoption.

The relationship between IS usage and IS effectiveness, when effectiveness is measured as the impact of an information system on organizational effectiveness, is a largely neglected area in IS research. As pointed out previously, CASE research has omitted CASE usage as a factor affecting CASE effectiveness; it seems obvious, however, that an unused CASE tool cannot have much impact on the productivity and quality of IS and software development. In view of the small amount of earlier research on CASE adoption, the hypothesized associations are tentative. One can also conclude that there is not yet any strong theory of CASE adoption when CASE technology is viewed as an organizational innovation. The hypotheses are justified in more detail in [12].
Research Design

The survey was carried out during the spring of 1993. Based on customer information from CASE tool vendors operating in Finland and on previous Finnish studies of CASE adoption, a list of organizations was identified. After telephone contacts, 52 cooperative organizations were found. Questionnaires were mailed to the contact persons (primarily IS managers), who distributed them within the organization. Of the 322 questionnaires mailed, 109 were returned from 35 organizations. Four of the returned questionnaires were rejected. As a result, the final response rate was 32.6% at the level of questionnaires and 67.3% at the level of organizations. The profiles of the 35 organizations and 105 respondents are depicted in Tables 1 and 2.

Table 1 shows that 43% of the organizations in the sample were software houses. The organizations were relatively large by Finnish standards, but the majority of the average IS and software development projects were relatively small, comprising only 3-5 persons. Not surprisingly, the CASE experience of the organizations had increased compared with the earlier study in Finland [1], but the adoption rate was still found to be low: in 57% of the organizations, less than 25% of the developers used CASE tools. CASE use was highest in the analysis and design phases, with about 90% of organizations reporting CASE support in those phases. Among the different CASE tools, ADW, Oracle, and IEW were clear market leaders in the present sample. As Table 1 indicates, the number of CASE tools used in the 35 organizations was 60, indicating that on average each organization had 1.7 CASE tools. This article therefore focuses on perceptions concerning CASE technology rather than on individual CASE tools. An organization may utilize several CASE tools, which together provide certain functionality (measured as functional compatibility), interfaces (measured as interface compatibility), and methodology support (measured as method compatibility).

Table 1. Profile of the participant organizations (n = 35)

  Industry                                    Number   Percent
    Public administration                          7     20%
    Manufacturing                                  5     14%
    Software house                                15     43%
    Insurance                                      4     11%
    Finance                                        3      9%
    Other                                          1      3%
  Number of employees in the organization
    1-10                                           2      6%
    26-50                                          2      6%
    51-100                                         3      9%
    101-500                                       11     31%
    > 500                                         17     49%
  Number of IS employees in the organization
    1-10                                           6     17%
    11-25                                          5     14%
    26-50                                          6     17%
    51-100                                         4     11%
    101-500                                       10     29%
    > 500                                          4     11%
  Average size of SW projects
    1-2 persons                                    5     14%
    3-5 persons                                   20     57%
    6-10 persons                                   5     14%
    11-20 persons                                  1      3%
    > 20 persons                                   2      6%
    No answer                                      2      6%
  CASE experience
    < 1 year                                       1      3%
    1-2 years                                     10     29%
    3-5 years                                     21     60%
    > 5 years                                      2      6%
    No answer                                      1      3%
  CASE adoption rate (percentage of developers using CASE)
    0%                                             3      9%
    1-25%                                         20     57%
    26-50%                                         5     14%
    51-75%                                         4     11%
    > 75%                                          3      9%
  Phases supported by CASE
    Business planning                             13     37%
    Architecture design                           18     51%
    Project planning                              10     29%
    Analysis                                      32     91%
    Design                                        31     89%
    Implementation                                20     57%
    Maintenance                                   16     46%
  CASE tools used
    ADW                                           13     37%
    Oracle                                        13     37%
    IEW                                           11     31%
    Prosa                                          5     14%
    Excelerator                                    4     11%
    Teamwork                                       3      9%
    Deft                                           2      6%
    POSE                                           2      6%
    Design Aid                                     2      6%
    Line II                                        2      6%
    IEF                                            2      6%
    Microtool                                      1      3%
    CASE 2000                                      1      3%

Table 2. Profile of the respondents (n = 105)

  Occupation                                  Number   Percent
    General manager                                9      9%
    Project leader                                38     36%
    Systems analyst                               51     49%
    Programmer                                     1      1%
    Other                                          5      5%
    No answer                                      1      1%
  Education
    High school                                    3      3%
    Vocational school                              3      3%
    College                                       32     30%
    University degree                             65     62%
    Other                                          2      2%
  CASE experience in years
    < 1 year                                      13     12%
    1-2 years                                     40     38%
    3-5 years                                     48     46%
    > 5 years                                      3      3%
    No answer                                      1      1%
  CASE experience as the number of projects
    0                                             20     19%
    1-2                                           53     50%
    3-5                                           26     25%
    > 5                                            6      6%
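As an aside, the sample arithmetic above can be checked mechanically. The following minimal Python sketch (not part of the original study; the variable names are ours) re-derives the response rates and the tools-per-organization figure from the counts reported in this section.

```python
# Re-derive the sample figures quoted in the Research Design section.
mailed, returned, rejected = 322, 109, 4
contacted_orgs, responding_orgs = 52, 35
case_tools_in_use = 60  # total count from Table 1

usable = returned - rejected  # 105 usable questionnaires
print(f"questionnaire-level response: {usable / mailed:.1%}")                    # 32.6%
print(f"organization-level response:  {responding_orgs / contacted_orgs:.1%}")  # 67.3%
print(f"CASE tools per organization:  {case_tools_in_use / responding_orgs:.1f}")  # 1.7
```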
Measurement

Moore and Benbasat [16] claim that innovation characteristics should be measured as perceptual attributes rather than as more objective primary attributes in order to establish a basis for a general theory. Accordingly, this article applies perceptual measures. The reliability of the measures is assessed using Cronbach's alpha values. There is no exact threshold for the required reliability, but [18] proposes a threshold of 0.5-0.6 for exploratory research. In view of the minimal earlier research on CASE adoption, this article applies the upper value, 0.6.

Participation. As mentioned previously, participation in this study refers to the participation of the CASE users (i.e., the IS and software professionals) in the selection of the CASE tools and in the planning of their organizational implementation (adoption). There are a number of instruments for measuring user participation, most of which (e.g., [8]) measure the personal participation of the respondents. If a survey covers only a few people from an organization, one cannot reasonably estimate participation at the organizational level based on these personal responses. Because our main interest lies at the organizational level, these previous instruments were deemed inappropriate. Participation in this study was measured using two items: the number of people participating in the selection of the CASE tool and the number of people participating in the planning of the implementation (installation) of the tool in the adopting organization.⁴ The reliability of these two items was 0.76.

⁴The selection process corresponds to initiation and adoption in the stage model of Cooper and Zmud [3], and the implementation process corresponds to adaptation.

Management support. Management support was measured using two 5-point items: the first asking for the support of top management and the second asking for the support of IS management in the selection and implementation of CASE tools. The reliability of the two items was 0.66.

Training. Training was interpreted to cover internal (in-house) training, vendor courses, and self-study. It was measured using two items, the first questioning the adequacy of training and the second the quality of training. The reliability of the two 5-point items was 0.84.

Expectation realism. Expectation realism was measured as the difference between perceived CASE impact and expectations concerning the CASE impact.⁵ Both perceived impact and impact expectations were measured using six 5-point items adopted from previous CASE research: impact on the speed of development, functional quality of new applications, productivity of development, cost of development, quality of developed applications, and cost of maintenance [1]. The reliability of the scale was 0.76.

⁵Expectation realism was slightly positive in the case of seven respondents and two organizations when an organization's expectation realism was calculated as the average of the corresponding estimates of the individuals representing that organization. Because the organization's expectation realism was calculated in this way, the absolute value of the difference between perceived CASE impact and expectations was not used.

Perceived complexity. Perceived complexity was modified from the short form of the corresponding measure proposed by Moore and Benbasat [16]. They interpret perceived complexity as perceived ease of use in the sense of the Technology Acceptance Model [4], and use phrases like "interaction with," "easy to use," and "learning to operate." These phrases, which may lead the respondent to think narrowly of the quality of the user interface only, were modified to the form "to apply/to use." The idea in using the phrase "to apply" was to capture whether the underlying ideas, assumptions, and limitations of the CASE tool are easy to understand and apply. The reliability of the 4-item measure for perceived complexity was 0.82.

Perceived compatibility. There are several alternatives for measuring perceived compatibility. Moore and Benbasat [16] propose a measure that interprets compatibility in very individualistic terms [22]. Because CASE tools are organizational innovations [5], the measure for compatibility proposed in [16] was not deemed appropriate in the present context.⁶ To support the practical relevance of the concept of compatibility, the measure applied in this article is adapted from prior CASE research [1]. The respondents were asked to rate the compatibility of the tools with the needs of the adopting organization using 23 items. The measurement of each item ranged from very poor to very good on a 5-point scale. The 23 items were subjected to factor analysis using principal component analysis with varimax rotation. The factor analysis reported in [12] suggested seven factors that can be labeled use compatibility (6 items), vendor compatibility (4 items), functional compatibility (4 items), interface compatibility (2 items), project support compatibility (2 items), method compatibility (2 items), and hardware compatibility (2 items). Five of the seven factors had acceptable reliabilities (≥ 0.60). In view of the small sample size (35 organizations), five factors were considered excessive. They (calculated as averages of their items) were subjected to a second-order factor analysis that gave as its result only one factor. The reliability of the aggregate compatibility was 0.61.

⁶Recently, [22] proposed a measure for perceived compatibility that consists of five factors: the efficacy of the technology, knowledge and control, experience of work and sense of professionalism, management impact and response, and traditional support. This measure is very complex, consists of 32 items, and was not available for our survey.

Perceived relative advantage. Perceived relative advantage was measured using the short-form instrument proposed in [16]. Its reliability was 0.84.

Voluntariness. Voluntariness was measured using the two items of the short instrument suggested in [16]. The reliability of the two items was 0.88.

Usage. Usage was measured using eight items. The first four items were adopted from [1]: (1) the proportion of projects using CASE, the values ranging from 1 (none), 2 (1-25%), 3 (26-50%), and 4 (51-75%) to 5 (over 75%); (2) the proportion of analysts using CASE, the values ranging as previously stated; (3) routinization of CASE use, the values ranging from 1 (in the beginning) to 5 (completely routine); and (4) the number of phases (business planning, architecture specification, project planning, analysis, design, implementation, and maintenance) supported by CASE (scaled to vary from 0 to 5). The next four items, measured on a scale from 1 to 5, correspond to items of routinization and infusion in [3]: (5) the use of CASE is encouraged as a normal activity; (6) the use of CASE is a natural activity; (7) increased organizational effectiveness is obtained using the CASE technology; and (8) CASE is used to its fullest potential.⁷ Factor analysis using principal component analysis and varimax rotation gave only one factor. The reliability of the eight items was 0.88.

⁷This operationalization of usage does not measure whether CASE usage was appropriate. Only the last item might be interpreted to include aspects of appropriateness. It indicated that 26.7% of respondents (n = 102) totally disagreed with the claim that CASE tools are used to their fullest potential, 29.4% slightly disagreed, 21.9% were uncertain, 18.1% slightly agreed, and only 1.9% totally agreed with the claim. This distribution is consistent with the other items of usage, however, as indicated by the high reliability score.
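Because Cronbach's alpha is used throughout this section, a compact illustration may help. The sketch below implements the standard alpha formula with NumPy; the simulated eight-item usage matrix is a placeholder, not the study data, so the printed value will not match the reported 0.88 exactly.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Placeholder data: eight correlated 5-point usage items for 105 respondents.
rng = np.random.default_rng(0)
common = rng.integers(1, 6, size=(105, 1))   # shared "true" usage level
noise = rng.integers(-1, 2, size=(105, 8))   # item-specific noise
usage_items = np.clip(common + noise, 1, 5)
print(f"alpha = {cronbach_alpha(usage_items):.2f}")
```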

Perceived productivity and quality effects. Productivity and quality effects were measured using seven items adapted from [1]. They were factor analyzed using principal component analysis and varimax rotation. The analysis gave two factors, the first of which can be labeled productivity effects and the second quality effects (Table 3). The factor structure is clear except in the case of item 2. Reliability analysis indicated that inclusion of item 2 in factor 1 would decrease the reliability of factor 1, which without item 2 was 0.83. Inclusion of item 2 in factor 2, on the contrary, increased the reliability of factor 2 to 0.80. Based on this analysis, item 2 was included in factor 2.

Table 3. Factors of perceived quality and productivity effects

                                                                 F1      F2
  1. The CASE tool has greatly speeded up development
     of new applications                                        0.89
  2. The CASE tool has enhanced the functionality of
     the new applications                                       0.51    0.53
  3. The CASE tool has definitely made developers
     more productive                                            0.85
  4. The CASE tool has significantly reduced the costs
     of software development                                    0.71    0.35
  5. The CASE tool has improved the quality of
     software products                                          0.35    0.80
  6. The CASE tool has significantly reduced the costs
     of software maintenance                                    0.40    0.71
  7. The CASE tool has significantly improved the
     documentation of software products                                 0.82
  Eigenvalue                                                    3.81    1.05
  Percent of variance                                           54.5    15.0
  Cumulative percentage                                         54.5    69.5
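The article's factor analyses (principal components with varimax rotation) can be reproduced in outline as follows. This is a hedged sketch using NumPy only: the varimax routine is the standard SVD-based iteration, and the 105 x 7 data matrix is simulated, so the loadings will not reproduce Table 3.

```python
import numpy as np

def pca_loadings(X: np.ndarray, n_factors: int) -> np.ndarray:
    """Principal-component loadings of the correlation matrix of X."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigh returns ascending order
    top = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, top] * np.sqrt(eigvals[top])

def varimax(loadings: np.ndarray, n_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation (standard SVD-based algorithm) of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion
        gradient = loadings.T @ (
            rotated**3 - rotated @ np.diag((rotated**2).sum(axis=0)) / p
        )
        u, s, vt = np.linalg.svd(gradient)
        rotation = u @ vt
        if s.sum() - criterion < tol:
            break
        criterion = s.sum()
    return loadings @ rotation

# Placeholder data: 105 respondents rating seven 5-point effect items.
rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(105, 7)).astype(float)
print(np.round(varimax(pca_loadings(X, n_factors=2)), 2))
```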
Data Analysis

All the variables introduced were measured at the individual level (n = 105). The measures at the organizational level (n = 35) were aggregated as averages of the individual respondents of each organization. There were a number of missing values, especially in the case of participation. To increase statistical power, organizational averages were used for the missing values in the regression analyses described here [9].

The analysis of the model of Figure 1 is based on multiple regression analysis in two phases. In the first phase, usage is the dependent variable and the eight variables on the left in Figure 1 are the independent variables. In the second phase, the two factors of CASE effectiveness (i.e., productivity effects and quality effects) are the dependent variables. Several rules of thumb have been proposed for the sample size in multiple regression, ranging from 10 to 15 observations per independent variable to an absolute minimum of 4 observations per independent variable [7]. The sample size (n = 35) of this study just exceeds the absolute minimum. Because there is a risk of overfitting the data when the sample size approaches the minimum [7], the sample of 105 individual respondents is used to cross-check the consistency of the results. If the results at the individual level are consistent with the results at the organizational level, it suggests that the organizational-level findings are not just spurious because of the small sample size.
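A minimal sketch of the aggregation and missing-value steps just described, assuming a pandas DataFrame of individual responses (the column names and values are illustrative, not the study's):

```python
import pandas as pd

# Hypothetical respondent-level data; org_id identifies the organization.
df = pd.DataFrame({
    "org_id":        [1, 1, 2, 2, 3],
    "participation": [3.0, None, 4.0, 4.5, 2.0],
    "usage":         [2.5, 3.0, 4.0, None, 1.5],
})

# Replace a missing value by the average of the respondent's own
# organization, mirroring the mean-substitution step cited above [9].
filled = df.groupby("org_id").transform(lambda col: col.fillna(col.mean()))
filled["org_id"] = df["org_id"]

# Organizational-level measures (n = 35 in the study): averages of the
# individual respondents of each organization.
org_level = filled.groupby("org_id").mean()
print(org_level)
```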
Results

The intercorrelations among the study variables are reported in [12]. They indicated relatively high correlations between CASE usage and the other study variables at both the organizational and individual levels. Table 4 shows the results of the regression analysis in which use is the dependent variable. The first column depicts the results of the analysis of the individual data in which all the independent variables are entered into the equation, the second column shows the stepwise regression analysis of the individual data, and the third column reports the stepwise regression analysis of the organizational data. The results show very consistently the pivotal role of voluntariness, the standardized regression coefficient (β) at the organizational level being -0.69 (p ≤ 0.001). In addition to voluntariness, the organizational-level analysis identifies management support (β = 0.27) and relative advantage (β = 0.27) as significant (p ≤ 0.01 in both cases). The equation explained 84% of the variance of CASE usage (p ≤ 0.001). Even though stronger, these organizational-level findings, except in the case of relative advantage, are consistent with the results at the individual level.

Table 4. Results of regression analysis predicting CASE usage
(entries are standardized regression coefficients, β)

                            Individual respondents        Companies
                            All entered    Stepwise       Stepwise
  Participation                 0.07
  Management support            0.20 **       0.21 **        0.27 **
  Training                      0.06
  Expectation realism           0.10
  Complexity                    0.05
  Compatibility                 0.13          0.19 **
  Relative advantage            0.07                         0.27 **
  Voluntariness                -0.61 ***     -0.58 ***      -0.69 ***
  R²                            0.74 ***      0.73 ***       0.84 ***
  Adjusted R²                   0.71          0.71           0.82
  * p ≤ 0.05   ** p ≤ 0.01   *** p ≤ 0.001

Table 5(a) shows the results of hierarchical regression analyses with CASE effectiveness (i.e., productivity and quality effects) as the dependent variables. The purpose is to test the effect of CASE usage on CASE effectiveness, keeping the other independent variables as control variables (block 1) and entering usage after them (block 2). The control variables in Table 5(a) are the same as the independent variables predicting CASE usage, except expectation realism, which is excluded because, as the difference between perceived impact and impact expectations, it is by definition related to CASE effectiveness. Entering CASE usage after the control variables did not add significantly to the variance of perceived productivity effects explained at the organizational level, but it did in the case of perceived quality effects. In the latter case, CASE usage had a significant coefficient (0.63, p ≤ 0.001), increasing the variance of perceived quality effects explained by 9% (p ≤ 0.05).

[Table 5. Results of hierarchical regression analyses predicting CASE effectiveness (productivity and quality effects) for individual respondents and companies: block 1 enters the control variables and block 2 enters CASE usage; panel (a) includes perceived relative advantage among the controls, panel (b) drops it. The key coefficients, R², and ΔR² values are quoted in the text. Significance levels: ' p ≤ 0.10, * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001.]

Among the control variables, perceived relative advantage turned out to be a significant predictor of both perceived productivity and quality effects. This result is not surprising, because the items of perceived relative advantage measure relative advantage largely in terms of the increased speed, quality, ease, and effectiveness of respondents' tasks [16]. In view of the conceptual overlap between perceived relative advantage on the one hand and perceived productivity and quality effects on the other, it is instructive to view the determinants of perceived productivity and quality effects when perceived relative advantage is dropped from the control variables. Table 5(b) shows that CASE usage then emerges as a significant predictor of both perceived productivity effects (β = 0.30, p ≤ 0.05) and quality effects (β = 0.73, p ≤ 0.001). Among the control variables, perceived complexity turns out to be a significant predictor of both CASE productivity effects (β = -0.41, p ≤ 0.01) and CASE quality effects (β = -0.32, p ≤ 0.05).
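The two-block hierarchical regressions behind Table 5 can be sketched as follows. This is an illustrative reconstruction with simulated organization-level data (n = 35); the variable names mirror Figure 1, but the coefficients printed will not match the study's, and statsmodels is our choice of tool, not the author's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 35
controls = ["participation", "mgmt_support", "training",
            "complexity", "compatibility", "rel_advantage", "voluntariness"]
data = pd.DataFrame(rng.normal(size=(n, len(controls))), columns=controls)
data["usage"] = rng.normal(size=n)
data["quality_effects"] = (0.5 * data["usage"]
                           + 0.3 * data["rel_advantage"]
                           + rng.normal(scale=0.8, size=n))

# Block 1: control variables only.
block1 = sm.OLS(data["quality_effects"],
                sm.add_constant(data[controls])).fit()
# Block 2: CASE usage entered after the controls.
block2 = sm.OLS(data["quality_effects"],
                sm.add_constant(data[controls + ["usage"]])).fit()

# The increment in R2 and its F test correspond to the delta-R2
# significance marks reported in the article's Table 5.
delta_r2 = block2.rsquared - block1.rsquared
f_stat, p_value, df_diff = block2.compare_f_test(block1)
print(f"R2: {block1.rsquared:.2f} -> {block2.rsquared:.2f} "
      f"(delta R2 = {delta_r2:.2f}, F = {f_stat:.2f}, p = {p_value:.3f})")
```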
Discussion

The preceding results give clear support to the tentative hypotheses H2, H7, H8, and H9 (see Figure 1). Management support (H2), relative advantage (H7), and voluntariness (H8) were found to be significant predictors of CASE usage. The negative association of voluntariness was extremely dominant. In accordance with hypothesis H9, CASE usage was found to be associated with perceptions of CASE effectiveness. It was a strong predictor of perceived quality effects, and a significant predictor of perceived productivity effects when perceived relative advantage was dropped from the regression equation.

The results clearly support Fichman's [5] conjecture about the significance of management influence in the adoption of complex innovations such as CASE. Both management support and voluntariness were found to be significant predictors of CASE usage. At the same time, the results challenge earlier IS implementation research. The significance of voluntariness is noteworthy, since it is largely omitted in prior implementation research. Possibly because of this dominance, only management support among the four traditional implementation factors (participation, management support, training, and expectation realism) turned out to be a significant predictor of CASE usage.⁸ In view of the pivotal role of voluntariness, there is a need for additional research on these social determinants of IT usage.

⁸When voluntariness was dropped from the regression equation predicting CASE usage at the organizational level, expectation realism turned out to be significant (β = 0.43, p ≤ 0.01) in addition to management support (β = 0.39, p ≤ 0.05).

Among the innovation characteristics, perceived relative advantage was found to be a significant predictor of CASE effectiveness. This may be explained by the conceptual overlap between perceived relative advantage and CASE effectiveness. When perceived relative advantage was dropped, perceived complexity was found to be a significant, negatively related predictor of CASE effectiveness: increased perceived complexity of CASE tools decreases both the perceived productivity and quality effects of CASE. This may be explained by the significant negative correlation between perceived relative advantage and perceived complexity (r = -0.54, p ≤ 0.001). Quite interestingly, perceived complexity was not found to be a significant predictor of CASE usage. This is surprising, because the operationalization of perceived complexity was indirectly based on the perceived ease of use of the Technology Acceptance Model [4], in which perceived ease of use has quite consistently been found to affect use directly and especially indirectly through perceived usefulness (relative advantage).⁹

⁹In order to further analyze the role of perceived complexity, perceived relative advantage was dropped from the regression equation predicting CASE usage at the organizational level. This elimination did not make perceived complexity a significant predictor of CASE usage.

The study is a retrospective cross-sectional analysis. As such, it does not allow conclusions about causation. For example, the strong negative association between voluntariness and CASE use could be explained as the positive effect of management's encouragement and mandating on CASE usage. A second explanation might be that the respondents reporting high CASE usage attributed it to management's pressure, while those reporting low CASE usage attributed it to the voluntariness of CASE use. Because CASE usage is likely viewed as socially acceptable and even desirable, the latter explanation would seem unlikely if the measures had concerned individual usage by the respondents. In our case, where usage concerned organizational usage of CASE technology, the latter explanation may be relevant.
Even though this study does not allow definite conclusions on causality, one notable interpretation of the findings is that high CASE usage tends to increase the productivity and quality of IS and software development, and that high voluntariness of CASE use tends to decrease CASE usage. Assuming this is the case, there are clear practical implications. It would indicate that the relatively low CASE usage discovered in several studies (e.g., [1, 14]) and in this study is a real loss in terms of the productivity and quality of IS and software development. At the same time, the results would suggest that management's stronger support, encouragement, and mandating of CASE usage would effectively increase CASE usage. If this interpretation is accepted, it is essential to involve management in the CASE adoption process and to get management to express its preference for and commitment to CASE. This requires that management understand the importance of CASE for the productivity and quality of IS and software development. The finding that CASE usage tends to increase the productivity and quality of IS and software development obviously makes it easier to communicate this importance to management and to motivate it to intervene in the CASE adoption process.

If the preceding interpretation is accepted, the conceptual overlap between perceived relative advantage as a predictor of CASE usage and perceived CASE effectiveness as the consequence of CASE usage emphasizes the self-reinforcing cycle of CASE usage. Positive perceptions of CASE effectiveness increase perceptions of relative advantage. Positive perceptions of relative advantage tend to intensify CASE usage. If the positive perceptions of CASE effectiveness are reinforced by further CASE usage, this adds to the perception of relative advantage, and so the cycle continues.

A second interesting finding was that perceived complexity was a significant determinant of CASE effectiveness when perceived relative advantage was dropped from the regression equation predicting CASE effectiveness (because of the conceptual overlap between perceived relative advantage and CASE effectiveness). This finding has clear practical implications. CASE tools are often complex, and when their complexity is perceived to be high it is difficult to appreciate their advantages. Therefore, any means to affect these perceptions can be expected to be significant in the management of CASE adoption. The perceived quality and quantity of training was found to have a moderately high negative correlation with perceived complexity (r = -0.29, p ≤ 0.01). One explanation for this correlation may be the mixed effect of training on perceived complexity. On the one hand, training may increase perceived complexity when introducing different aspects of the CASE tools. On the other hand, when helping the prospective user to apply and use the tool, training attempts to decrease perceived complexity. It is obvious that future CASE training should pay more attention to the significance of perceived complexity in CASE adoption, devising strategies in which perceived complexity is under scrutiny all the time.

The significance of perceived complexity also has implications for CASE vendors. On the one hand, it implies a concern about the increasing complexity of CASE tools. Even though there are good reasons for more integrated CASE environments, including more and more versatile support for IS development, the increased complexity of CASE environments may hinder their adoption in practice. On the other hand, the significance of perceived complexity suggests that any intelligent technical means, such as tutoring, hypertext, and expert system features, that increase the understandability of a CASE tool without impairing its functionality might be a very profitable investment in CASE tool development.

This study has limitations. It was a retrospective cross-sectional analysis based on perceptual data. Even though [16] strongly argues that subjective perceptions provide a sounder basis for theory development than more objective data, one must admit that the reliability of perceptual retrospective data may be questionable in the case of CASE impacts, for example. Perceptual data may also allow the interpretation that, because of perceived management encouragement and urging, the respondents wished to report higher CASE usage in order to satisfy management's implicit expectations. Because the respondents mailed the questionnaires directly and anonymously, it is difficult to see any specific reason for this kind of bias. However, there is a need for additional research including more objective and more longitudinal data.

The sample size of 35 organizations was small. The individual data (n = 105) suggest that the results are not solely an outcome of overfitting the data [7]. The sampling procedure may also be biased. The CASE user organizations were partly contacted based on vendor data, and the contact persons in each organization distributed the questionnaires. The respondents returned the questionnaires directly, however. Because of the small number of organizations and respondents, it was not practical to estimate the possible non-response bias. One could suspect, for example, that the sampling procedure biased the sample to include more respondents with favorable attitudes toward CASE than more random sampling would have done. People with positive attitudes may exaggerate the positive productivity and quality effects of CASE tools, but it is not as obvious that they would overestimate CASE usage. On the contrary, one could imagine that they would be critical, underestimating CASE usage rather than overestimating it. When combined, these biases would decrease the strength of the association between CASE usage and CASE effects. Especially if CASE usage was low, respondents might attribute it to the lack of management's encouragement and mandating of usage. This tendency might increase the association between voluntariness and CASE usage. Because of these limitations, there is a need for additional research based on a larger and more random sample.

The results are based on CASE users in one country, Finland, and it is an open question to what extent they can be generalized to other countries. The pivotal role of voluntariness found in this study largely contradicts the results of prior research, which has found management urging to have a relatively weak relationship with system use. These earlier findings are in all likelihood based on North American data. Hofstede [10] proposes that national cultures can be classified along five dimensions, which are largely independent of each other: power distance, individualism, masculinity, uncertainty avoidance, and long-term orientation. Voluntariness is obviously related to power distance, that is, the extent to which the less powerful members of institutions and organizations within a country expect and accept that power is distributed unequally [10], and to individualism, that is, the degree to which the interests of the individual prevail over the interests of the group [10]. One can conjecture that in countries of high power distance, a management mandate is more easily accepted than in countries of low power distance. One can also presume that a management mandate or encouragement reflects the interest of the whole organization or of an individual project, rather than of an individual IS professional. Therefore, lower individualism can be expected to make it easier to get the wider interests accepted. Hofstede [10], in his comparison of 50 countries and 3 regions, found that both the U.S. and Finland have small power distances (scores 40 and 33, ranks 38 and 46), but that they differ in individualism (scores 91 and 63, ranks 1 and 17). If this reasoning is valid, the slightly lower power distance in Finland should reduce the significance of voluntariness in Finland compared with the U.S., and the lower individualism should increase it. In view of these cultural differences, there is a need for cross-cultural studies of predictors of CASE usage in countries representing different national cultures.

Acknowledgments
The author wishes to express his gratitude to Jarmo Huttunen and Pasi Töppinen for the data collection, to Marko Muotka for some of the statistical analysis, and to the reviewers for their constructive comments.
References
1. Aaen, I., Siltanen, A., Sørensen, C., and Tahvanainen, V.-P. A tale of two countries: CASE experiences and expectations. In Kendall, K.E., Lyytinen, K., and DeGross, J., Eds., The Impact of Computer Supported Technologies on Information Systems Development, IFIP Transactions A-8, North-Holland, Amsterdam, 1992, 61-91.
2. Banker, R.D. and Kauffman, R.J. Reuse and productivity in integrated computer-aided software engineering: An empirical study. MIS Q. 15, 3 (1991), 375-401.
3. Cooper, R.B. and Zmud, R.W. Information technology implementation research: A technology diffusion approach. Manage. Sci. 36, 2 (1990), 123-139.
4. Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manage. Sci. 35, 8 (1989), 982-1003.
5. Fichman, R.G. Information technology diffusion: A review of empirical research. In DeGross, J.I., Becker, J.D., and Elam, J.J., Eds., Proceedings of the Thirteenth International Conference on Information Systems (Dallas, TX, 1992), pp. 195-206.
6. Finlay, P.N. and Mitchell, A.C. Perceptions of the benefits from introduction of CASE: An empirical study. MIS Q. 18, 4 (1994), 353-370.
7. Hair, J.F., Jr., Anderson, R.E., Tatham, R.L., and Black, W.C. Multivariate Data Analysis with Readings. Macmillan, New York, 1992.
8. Hartwick, J. and Barki, H. Explaining the role of user participation in information system use. Manage. Sci. 40, 4 (1994), 440-465.
9. Hertel, B.R. Minimizing error variance introduced by missing data routines in survey analysis. Sociological Methods and Research 4 (1976), 459-474.
10. Hofstede, G. Cultures and Organizations: Software of the Mind. McGraw-Hill, London, 1991.
11. Huff, C.C. Elements of a realistic CASE tool adoption budget. Commun. ACM 35, 4 (1992), 45-54.
12. Iivari, J. Factors affecting CASE usage. Working paper, Department of Information Processing Science, University of Oulu, 1996 (available from the author).
13. Kemerer, C.F. How the learning curve affects CASE tool adoption. IEEE Software 9, 3 (1992), 23-28.
14. Kusters, R.J. and Wijers, G.M. On the practical use of CASE tools: Results of a survey. In Proceedings of the 6th International Workshop on CASE (CASE '93) (Singapore, 1993), IEEE Computer Society Press, pp. 2-10.
15. Lucas, H.C., Jr. Implementation, the Key to Successful Information Systems. Columbia University Press, New York, 1981.
16. Moore, G.C. and Benbasat, I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research 2, 3 (1991), 192-222.
17. Norman, R.J. and Nunamaker, J.F., Jr. CASE productivity perceptions of software engineering professionals. Commun. ACM 32, 9 (Sept. 1989), 1102-1108.
18. Nunnally, J.C. Psychometric Theory. McGraw-Hill, New York, 1978.
19. Orlikowski, W.J. CASE tools as organizational change: Investigating incremental and radical changes in systems development. MIS Q. 17, 3 (1993), 309-340.
20. Prescott, M.B. and Conger, S.A. Information technology innovations: A classification by IT locus of impact and research approach. The DATABASE for Advances in Information Systems 26, 2-3 (1995), 20-41.
21. Rai, A. and Howard, G.S. Propagating CASE usage for software development: An empirical investigation of key organizational correlates. Omega 22, 2 (1994), 133-147.
22. Ramiller, N. Perceived compatibility of information technology innovations among secondary adopters: Towards reassessment. J. Engineering and Technology Management 11, 1 (1994), 1-23.
23. Rogers, E.M. Diffusion of Innovations, Fourth Edition. The Free Press, New York, 1995.
24. Straub, D., Limayem, M., and Karahanna-Evaristo, E. Measuring systems usage: Implications for IS theory testing. Manage. Sci. 41, 8 (1995), 1328-1342.
25. Wynekoop, J.L., Senn, J.A., and Conger, S.A. The implementation of CASE tools: An innovation diffusion approach. In Kendall, K.E., Lyytinen, K., and DeGross, J., Eds., The Impact of Computer Supported Technologies on Information Systems Development, IFIP Transactions A-8, North-Holland, Amsterdam, 1992, 25-41.

JUHANI IIVARI is a professor of information systems in the Department of Information Processing Science at the University of Oulu, Finland; email: iivari@rieska.oulu.fi
Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

ACM 0002-0782/96/1000 $3.50

