Abstract

Importance-Performance Analysis (IPA) is a simple and useful technique for identifying those attributes of a product or service that are most in need of improvement, or that are candidates for possible cost savings without significant detriment to overall quality. To this end, a two-dimensional IPA grid displays the results of the evaluation of the importance and performance of each relevant attribute. This paper shows that ordinal preferences are better than metric measures of the importance dimension and proposes a formula to transform the ordinal measure into a new metric scale adapted to the IPA grid. This formula makes allowance for the total number of features considered, the number of rankings requested, and the reported orders of preference.
© 2006 Elsevier Inc. All rights reserved.
3. Importance measures

targets for improvement action. Other factors that several studies pointed to as contributory causes of the crowding phenomenon include respondents' lack of involvement (Bacon, 2003) and their possible lack of expertise regarding the product or service assessed (Sambonmatsu et al., 2003). However, the main cause is the use of measures of absolute rather than relative (competitive) importance. Moreover, the various measures proposed generally disagree with each other (Alpert, 1971; Bacon, 2003; Crompton and Duray, 1985; Heeler et al., 1979; Jaccard et al., 1986; Matzler et al., 2003; Oh, 2001), so the interpretation of IPA differs according to the chosen importance measure. Thus, the decision-maker has little way of knowing a priori which measure, if any, may be the most reliable for incorporating IPA as a means of designing appropriate marketing strategies.

The development of indirect measures of importance helps to circumvent the above problems. Among these indirect measures are the standardized regression coefficients obtained by multivariate regression of the attribute performance ratings over the overall performance rating given to the product or service. This is intrinsically a measure of relative importance, in that each regression coefficient depends on data for all the attributes. It is, in fact, a more realistic conception of human reasoning and the psychology of choice (Edwards, 1954; Tversky and Kahneman, 1981). This method has the further advantage of reducing the demands on the rater's attention (since only the performance scale is administered, not importance), which may favor increased rater involvement. However, this approach also has at least one major weakness: the possibility of collinearity (Danaher, 1997; Bacon, 2003). Collinearity among the attribute performances when used as predictors of overall performance can lead to the precision of the regression coefficients being so poor that they fail to discriminate reliably among the attributes (and can even take on negative values).

A second criticism of linear regression coefficients as importance measures is that the relationship between the overall performance of a product and its performance with respect to its individual attributes may well be nonlinear (Danaher, 1997). Specifically, some research reports that the negative effect of below-average attribute performance on the overall product rating is greater than the positive effect of above-average attribute performance (Mittal et al., 1998; Sethna, 1982). Generalizing the regression coefficient approach to this situation would mean fitting a response surface to the performance data and using partial derivatives as local measures of importance. Although in some situations this procedure might provide the decision-maker with useful information, it seems unlikely to be generally applicable.

Another major group of indirect importance measures comprises those derived by conjoint analysis. This approach has the advantage over multiple regression of using orthogonal designs, which exclude the possibility of collinearity between attribute partial utilities. Furthermore, the use of several levels for each attribute lessens the risk of difficulties due to nonlinear dependence of overall performance on attribute performances. However, conjoint analysis requires quite an onerous data collection process and becomes unfeasible when involving more than a very few attributes, both of which are especially serious drawbacks for the application of IPA to services.

Partial correlation and logistic regression methods have difficulties similar to those of the above approaches to importance measurement, or equally negative consequences (Crompton and Duray, 1985; Danaher and Mattsson, 1994; Heeler et al., 1979; Jaccard et al., 1986).

4. An alternative assessment of importance

The starting point for this paper is the idea that, for the purposes of IPA, asking raters to rank attributes in order of importance (with no ties allowed), rather than assigning them absolute importance values, offers the possibility of better measurement of importance. Clearly, this procedure avoids the problems that beset the indirect measures described above and, for each rater, avoids the problem of crowding, since ranking k attributes automatically spreads their rank scores evenly between 1 and k. Furthermore, as the evaluation task forces the rater to consider attributes jointly rather than one by one, this procedure has two great advantages: 1) it obtains a relative or competitive measure of importance for each attribute; and 2) it favors rater commitment and involvement in the evaluation task, especially when raters are asked only to rank their top k preferences among the s attributes under consideration, which reduces the risk of fatigue. However, before applying this method, the researcher must settle the question of how the ranking reports of multiple raters should be aggregated in the IPA grid.

If every rater ranks all s attributes under consideration, with rank 1 being low and rank s high, the mean rank assigned to an attribute may be a reasonable measure of its importance. However, this first procedure has three major drawbacks: 1) asking for only the top k preferences can leave many attributes with low aggregate rank scores, resulting in crowding at the bottom of the IPA grid, an effect that increases with greater numbers of attributes; 2) it omits the reported orders of preference listed by each rater; 3) comparison with performance mean values would be difficult in both the quadrant and the diagonal approaches to IPA.

Previous authors have adopted a ranking approach in their IPA research (Matzler et al., 2003; Mersha and Adlakha, 1992; Sampson and Showalter, 1999), but they ignored the total number of attributes that compose the evaluation task and/or the rankings given by raters, using as their measure of importance the absolute frequency with which each attribute was reported by all raters, or the mean rank of the ordered k attributes.

The measure proposed in this paper gets round these problems by subjecting appropriately scaled mean ranks to a transformation that depends on the proportion of attributes that the rater must rank. The remainder of the paper describes the scaling and transformation operations and the use of the new measure for importance-performance analysis of a primary health care service, and then compares the results of this analysis with those of an earlier study based on a direct measure of importance.
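The collinearity weakness of the regression approach described above is easy to demonstrate numerically. The following sketch is not from the paper: the data are synthetic and the three attribute names are hypothetical. Two attribute ratings are made nearly collinear on purpose; their shared coefficient mass then splits unstably between them (one coefficient can even turn negative), while the coefficient of the independent attribute remains interpretable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # hypothetical number of raters

# Synthetic performance ratings for three hypothetical attributes;
# attribute B is almost a copy of A, so A and B are strongly collinear.
a = rng.normal(7, 1, n)
b = a + rng.normal(0, 0.1, n)
c = rng.normal(6, 1, n)

# Overall rating driven by A and C only (the "true" importances).
overall = 0.6 * a + 0.0 * b + 0.4 * c + rng.normal(0, 0.5, n)

def std_betas(X, y):
    """Standardized regression coefficients, the indirect importance measure."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

betas = std_betas(np.column_stack([a, b, c]), overall)
# The individual A and B coefficients are poorly determined, but their sum
# (the well-identified direction) and the C coefficient are stable.
print(betas)
```

Rerunning with different seeds shows the A and B coefficients swinging widely while their sum barely moves, which is exactly the failure to "discriminate reliably among the attributes" noted in the text.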
118 J. Abalo et al. / Journal of Business Research 60 (2007) 115-121
5. Method

5.1. Importance measurement

Suppose n raters presented with s attributes rank their top k preferences using the natural numbers from 1 (most preferred) to k (least preferred), with no ties allowed. The problem is to use these rankings to assign each attribute i an importance value p_i lying in some specified interval that, for convenience, is here [0,1]; the value of p_i should increase with the importance of attribute i.

Denoting by g_ij the rank assigned to the i-th attribute by the j-th rater, the first thing to do is to recode the g_ij as ranking scores h_ij that lie in the desired interval, increase (rather than decrease) with degree of preference, and assign the value 0 to all attributes not mentioned by rater j:

$$h_{ij} = \begin{cases} \dfrac{k - g_{ij} + 1}{k} & \text{if } g_{ij} \text{ is not void} \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

Table 1 lists h_ij for values of k and g_ij up to 9.

Table 1
Ranking scores h_ij for raw rankings g_ij of the rater's most preferred k attributes

g_ij   k=2     k=3     k=4     k=5     k=6     k=7     k=8     k=9
1      1       1       1       1       1       1       1       1
2      0.500   0.667   0.750   0.800   0.833   0.857   0.875   0.889
3      -       0.333   0.500   0.600   0.667   0.714   0.750   0.778
4      -       -       0.250   0.400   0.500   0.571   0.625   0.667
5      -       -       -       0.200   0.333   0.429   0.500   0.556
6      -       -       -       -       0.167   0.286   0.375   0.444
7      -       -       -       -       -       0.143   0.250   0.333
8      -       -       -       -       -       -       0.125   0.222
9      -       -       -       -       -       -       -       0.111

For the reasons noted at the end of the Introduction, the mean of the h_ij over raters, m_i, tends to crowd attributes together at the bottom of the importance scale. This tendency is the more pronounced, the smaller the proportion k/s of attributes ranked by each rater. Thus, this value is not attractive as a measure of importance. However, this difficulty can be overcome by subjecting the m_i to a monotonic transformation that increases small m_i while leaving large m_i relatively unaltered. In view of the relationship between the likelihood of crowding and k/s, a particularly suitable transformation of this kind is m_i ↦ (m_i)^{k/s}. This procedure increases discrimination between attributes and augments the diagnostic and strategic properties of IPA. The transformed importance measure proposed in this paper is therefore

$$p_i = \left( \frac{1}{n} \sum_{j} h_{ij} \right)^{k/s} \qquad (2)$$

6. Application: IPA of a primary health care service

6.1. Attributes

In a previous study of the public primary health care service in Galicia, Spain, Abalo et al. (2006) identified eight essential components of the service on the basis of the existing literature and focus groups with health care staff and patients. The present study includes a further factor excluded in the earlier study, health center opening hours, making nine in all (Table 2).

Table 2
The nine attributes for assessing the patient-perceived quality of primary health care in Galicia

1. Medical staff
2. Nursing staff
3. Auxiliary staff attending the public
4. Ease of making an appointment
5. Waiting time in the health centre before entering the consulting room
6. Opening hours
7. State of the building housing the centre
8. Signage inside the centre
9. Equipment

6.2. Data collection procedure

The data for this research came from 1767 patients recruited at 75 primary health care centers from among all patients aged 18 years or older, respecting proportionality as regards age groups and sex. Eight trained interviewers collected the data over one month. For the evaluation of service performance, every rater scored all nine attributes from 0 (very bad) to 10 (very good), and the means over raters gave the aggregate data. The aggregate importance values of the nine attributes, on the other hand, came from the new method proposed in this paper. To this end, raters ranked what they considered to be the three most important attributes in order of importance, from 1 (most important) to 3 (least important). To minimize the likelihood of bias due to the order of presentation (Ares, 2003; Tversky and Kahneman, 1982, 1983), each interviewer presented the nine attributes in a different order. Both the performance evaluation and the importance ranking questionnaires formed part of a larger questionnaire on the perceived quality of primary health care in Galicia.

7. Results and discussion

Table 3 lists the number of ranks 1, 2 and 3 assigned by the raters to each attribute.

Table 3
Number of assignations of each attribute to each rank in this study

Attribute              1st    2nd    3rd
Medical staff          1044   320    157
Nursing staff          38     367    199
Auxiliary staff        17     73     162
Ease of appointment    139    203    269
Time in waiting room   124    186    209
Opening hours          66     130    187
State of building      44     99     165
Signage                6      29     48
Equipment              289    360    371
Total                  1767   1767   1767
Table 4
Aggregate performance and importance scores of each attribute in this study

Attribute              Performance   Importance (a)   Discrepancy (b)
Medical staff          8.58          9.05             -0.47
Nursing staff          8.40          5.82             2.58
Auxiliary staff        7.56          4.07             3.49
Ease of appointment    6.80          5.91             0.89
Time in waiting room   5.74          5.64             0.10
Opening hours          7.17          4.95             2.22
State of building      7.17          4.54             2.63
Signage                7.41          2.86             4.55
Equipment              5.66          7.18             -1.52
Mean                   7.16          5.56

(a) Scaled up to occupy the same range as the performance values.
(b) Performance value minus importance value.
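To make the computation concrete, the following sketch (a reconstruction, not the authors' code) implements Eqs. (1) and (2) and applies them to the rank counts of Table 3, with n = 1767 raters, k = 3 requested ranks and s = 9 attributes. After multiplying by 10, as described for Table 4, the results agree with the importance column of Table 4 to within about ±0.01 of rounding.

```python
def h(g, k):
    """Eq. (1): ranking score for a raw rank g among the rater's top k
    preferences; g is None when the rater did not mention the attribute."""
    return 0.0 if g is None else (k - g + 1) / k

def importance(counts, n, k, s):
    """Eq. (2): p_i = (mean of h_ij over all n raters) ** (k / s), where
    counts[r] is the number of raters who gave the attribute rank r + 1
    (raters who did not mention the attribute contribute h = 0)."""
    m_i = sum(c * h(r + 1, k) for r, c in enumerate(counts)) / n
    return m_i ** (k / s)

n, k, s = 1767, 3, 9           # raters; ranks requested; attributes
table3 = {                     # rank-1/2/3 counts from Table 3
    "Medical staff":        (1044, 320, 157),
    "Nursing staff":        (38, 367, 199),
    "Auxiliary staff":      (17, 73, 162),
    "Ease of appointment":  (139, 203, 269),
    "Time in waiting room": (124, 186, 209),
    "Opening hours":        (66, 130, 187),
    "State of building":    (44, 99, 165),
    "Signage":              (6, 29, 48),
    "Equipment":            (289, 360, 371),
}
scaled = {a: 10 * importance(c, n, k, s) for a, c in table3.items()}
for a, v in scaled.items():
    print(f"{a:22s} {v:.2f}")  # compare with the importance column of Table 4
```

Note that h(1, 3) = 1, h(2, 3) ≈ 0.667 and h(3, 3) ≈ 0.333, matching the k = 3 column of Table 1.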
Table 4 lists the aggregate importance and performance values of each attribute, together with the difference between the two; here the aggregate importance is Eq. (2) multiplied by 10, so as to occupy the same range as the performance values. For comparison, Table 5 lists the performance and importance values obtained in the earlier study (Abalo et al., 2006), in which the importance measures of the eight attributes considered are the averages of ratings on a scale of 0-10 from 1601 subjects, obtained in the same way as the performance ratings.

In the earlier study, the aggregate importance values squeezed into a range of only 1.66 units (from 7.34 to 9.00). Although some groups of attributes differ from the others (owing to the large sample and the consequently small standard errors; Table 5 and Fig. 3), the uncertainty in the values is too large to allow effective discrimination among the attributes within these groups. By contrast, the importance measure used in the present study successfully spread the aggregate importance scores over a range of more than 6 units, from 2.86 to 9.05.

Fig. 3. Aggregate importance values of health service attributes according to Abalo et al. (2006). Bars show 95% confidence intervals, and frames surround values that do not differ significantly at the α = .05 level.

Fig. 4 presents the aggregate importance values obtained in the present study plotted against the corresponding aggregate performance values on an IPA grid of the type described in the Introduction. Fig. 5 shows the corresponding diagram for the earlier study (Abalo et al., 2006), in which aggregate importance and performance both clustered around high values, crowding the attributes into the upper right corner of the IPA grid. In fact, since in all cases importance exceeds performance and all the attributes lie above the diagonal, a strict interpretation of the diagram would suggest that all the attributes are good candidates for improvement measures. This is somewhat counterintuitive, given that the average performance value is 7.23.

The IPA grid obtained in the present study (Fig. 4) clearly discriminates better among attributes than that of the earlier study (Fig. 5). However, although its importance values show a good spread, its performance values, like those of the earlier study, crowd into a much narrower range (less than 3 units, from 5.66 to 8.58, in the present study). This is in keeping with the usual tendency observed in studies of user satisfaction with health services that have employed questionnaires soliciting quantitative ratings (Fitzpatrick, 1991; Hawes and Rao, 1985). In other words, direct performance ratings in the health service
Table 5
Aggregate performance and importance scores of each attribute obtained by Abalo et al. (2006) by direct rating

Attribute              Performance   Importance   Standard error of importance
Medical staff          8.46          8.91         .034
Nursing staff          7.97          8.16         .044
Auxiliary staff        7.14          7.34         .050
Ease of appointment    6.87          8.88         .039
Time in waiting room   5.85          8.32         .046
State of building      7.27          8.78         .035
Signage                7.34          8.23         .043
Equipment              6.96          9.00         .036
Mean                   7.23          8.45

Fig. 4. IPA diagram of a primary health care service constructed using the performance and importance measures and data of this study.
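The diagonal reading of the two grids discussed above can be checked directly from the tabulated values. The sketch below is not the authors' code: the (performance, importance) pairs are transcribed from Tables 5 and 4, and the classification rule is the diagonal approach mentioned in the text, which flags attributes whose importance exceeds their performance as candidates for improvement.

```python
# (performance, importance) pairs transcribed from Table 5 (earlier study,
# direct importance ratings) and Table 4 (present study, ranking-based).
earlier = {
    "Medical staff": (8.46, 8.91), "Nursing staff": (7.97, 8.16),
    "Auxiliary staff": (7.14, 7.34), "Ease of appointment": (6.87, 8.88),
    "Time in waiting room": (5.85, 8.32), "State of building": (7.27, 8.78),
    "Signage": (7.34, 8.23), "Equipment": (6.96, 9.00),
}
present = {
    "Medical staff": (8.58, 9.05), "Nursing staff": (8.40, 5.82),
    "Auxiliary staff": (7.56, 4.07), "Ease of appointment": (6.80, 5.91),
    "Time in waiting room": (5.74, 5.64), "Opening hours": (7.17, 4.95),
    "State of building": (7.17, 4.54), "Signage": (7.41, 2.86),
    "Equipment": (5.66, 7.18),
}

def above_diagonal(grid):
    """Diagonal approach to IPA: attributes with importance > performance
    lie above the diagonal and are candidates for improvement."""
    return sorted(a for a, (perf, imp) in grid.items() if imp > perf)

def importance_range(grid):
    """Spread of the aggregate importance scores across attributes."""
    imps = [imp for _, imp in grid.values()]
    return max(imps) - min(imps)

print(above_diagonal(earlier))  # every attribute: an uninformative diagnosis
print(above_diagonal(present))  # only two attributes remain above the diagonal
print(round(importance_range(earlier), 2),  # the 1.66-unit squeeze
      round(importance_range(present), 2))  # versus a spread above 6 units
```

Running this reproduces the text's observations: in the earlier study every attribute lies above the diagonal, whereas the ranking-based measure singles out Medical staff and Equipment, and the importance ranges are 1.66 and 6.19 units respectively.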
References

Dolinsky AL, Caputo RK. Adding a competitive dimension to importance-performance analysis: an application to traditional health care systems. Health Care Mark Quart 1991;8(3):61-79.
Duke CR, Persia MA. Performance-importance analysis of escorted tour evaluations. J Travel Tour Mark 1996;5(3):207-23.
Edwards W. The theory of decision making. Psychol Bull 1954;51:380-417.
Ennew CT, Reed GV, Binks MR. Importance-Performance Analysis and the measurement of service quality. Eur J Mark 1993;27(2):59-70.
Evans MR, Chon K. Formulating and evaluating tourism policy using importance-performance analysis. Hospitality Educ Res J 1989;13:203-13.
Fishbein M. An investigation of the relationship between beliefs about an object and attitude toward that object. Hum Relat 1963;16(3):233-9.
Fishbein M, Ajzen I. Belief, attitude, intention and behavior: an introduction to theory and research. Reading (MA): Addison-Wesley; 1975.
Fitzpatrick R. Surveys of patient satisfaction: important general considerations. Br Med J 1991;302:887-9.
Ford JB, Joseph M, Joseph B. Importance-performance analysis as a strategic tool for service marketers: the case of service quality perceptions of business students in New Zealand and the USA. J Serv Mark 1999;13(2):171-86.
Green PE, Rao VR. Conjoint measurement for quantifying judgmental data. J Mark Res 1971;8:355-63.
Hansen E, Bush RJ. Understanding customer quality requirements: model and application. Ind Mark Manag 1999;2:119-30.
Hawes JM, Rao CP. Using Importance-Performance Analysis to develop health care marketing strategies. J Health Care Mark 1985;5(4):19-25.
Heeler RM, Okechuku C, Reid S. Attribute importance: contrasting measurements. J Mark Res 1979;16:60-3.
Hollenhorst S, Olson D, Fortney R. Use of importance-performance analysis to evaluate state park cabins: the case of the West Virginia state park system. J Park Recreat Admi 1992;10(1):1-11.
Hudson S, Hudson P, Miller GA. The measurement of service quality in the tour operating sector: a methodological comparison. J Travel Res 2004;42(3):305-13.
Jaccard J, Brinberg D, Ackerman LJ. Assessing attribute importance: a comparison of six methods. J Consum Res 1986;12:463-8.
LaTour S, Peat N. Conceptual and methodological issues in consumer satisfaction research. In: Perrault Jr WD, editor. Advances in consumer research, vol. 6. Ann Arbor (MI): Association for Consumer Research; 1979. p. 431-7.
Luce RD, Tukey JW. Simultaneous conjoint measurement: a new type of fundamental measurement. J Math Psychol 1964;1:1-27.
Matzler K, Sauerwein E, Heischmidt KA. Importance-Performance Analysis revisited: the role of the factor structure of customer satisfaction. Serv Ind J 2003;23(2):112-30.
Matzler K, Bailom F, Hinterhuber HH, Renzl B, Pichler J. The asymmetric relationship between attribute-level performance and overall customer satisfaction: a reconsideration of the importance-performance analysis. Ind Mark Manag 2004;33(4):271-8.
Martilla JA, James JC. Importance-Performance Analysis. J Mark 1977;41:77-9.
Mersha T, Adlakha V. Attributes of service quality: the consumers' perspective. Int J Serv Ind Manag 1992;3(3):34-45.
Mittal V, Ross WT, Baldasare PM. The asymmetric impact of negative and positive attribute-level performance on overall satisfaction and repurchase intentions. J Mark 1998;62:33-47.
Nale RD, Rauch DA, Wathen SA, Barr PB. An exploratory look at the use of importance-performance analysis as a curricular assessment tool in a school of business. J Workplace Learn 2000;12(4):139-45.
Neslin SA. Linking product features to perceptions: self-stated versus statistically revealed importance weights. J Mark Res 1981;18:80-6.
Novatorov EV. An importance-performance approach to evaluating internal marketing in a recreation centre. Manag Leis 1997;2:1-16.
Oh H. Revisiting importance-performance analysis. Tour Manag 2001;22:617-27.
Oliver RL. Cognitive, affective, and attribute bases of the satisfaction response. J Consum Res 1993;20:418-30.
Ortinau DJ, Bush A, Bush RP, Twible JL. The use of importance-performance analysis for improving the quality of marketing education: interpreting faculty-course evaluations. J Mark Ed 1989;11(2):78-86.
Ostrom A, Iacobucci D. Consumer trade-offs and the evaluation of services. J Mark 1995;59:17-28.
Parasuraman A, Zeithaml V, Berry L. A conceptual model of service quality and its implications for future research. J Mark 1985;49:41-50.
Parasuraman A, Zeithaml V, Berry L. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. J Retail 1988;64:12-40.
Picón E, Varela J, Rial A, García A. Evaluación de la satisfacción del consumidor mediante Análisis de la Importancia-Valoración: una aplicación a la evaluación de destinos turísticos. Santiago de Compostela: I Congreso Galego de Calidade; 2001.
Rosenberg MJ. Cognitive structures and attitudinal affect. J Abnorm Soc Psychol 1956;53(2):367-72.
Sambonmatsu DN, Kardes FR, Houghton DC, Ho EA, Posavac SS. Overestimating the importance of the given estimation in multiattribute consumer judgment. J Consum Psychol 2003;13(3):289-300.
Sampson SE, Showalter MJ. The performance-importance response function: observations and implications. Serv Ind J 1999;19(3):1-26.
Sethna BN. Extensions and testing of Importance-Performance Analysis. Bus Econ 1982;20:28-31.
Slack N. The importance-performance matrix as a determinant of improvement priority. Int J Oper Prod Manag 1994;14(5):59-75.
Spreng RA, Mackenzie SB, Olshavsky RW. A reexamination of the determinants of consumer satisfaction. J Mark 1996;60:15-32.
Taylor SA. Assessing regression-based importance weights for quality perceptions and satisfaction judgments in the presence of higher order and/or interaction effects. J Retail 1997;73(1):135-59.
Tse DK, Wilton PC. Models of consumer satisfaction formation: an extension. J Mark Res 1988;25:204-12.
Tversky A. Intransitivity of preferences. Psychol Rev 1969;76:31-48.
Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science 1981;211:453-8.
Tversky A, Kahneman D. Judgements of and by representativeness. In: Kahneman D, Slovic P, Tversky A, editors. Judgement under uncertainty: heuristics and biases. Cambridge: Cambridge University Press; 1982.
Tversky A, Kahneman D. Extensional versus intuitive reasoning: the conjunction fallacy in probability judgement. Psychol Rev 1983;90(4):293-315.
Uysal M, Howard G, Jamrozy U. An application of importance-performance analysis to a ski resort: a case study in North Carolina. Vis Leis Bus 1991;10:16-25.
Varela J, Braña T. Análisis conjunto aplicado a la investigación comercial. Madrid: Eudema; 1996.
Wilkie WL, Pessemier EA. Issues in marketing's use of multi-attribute attitude models. J Mark Res 1973;10:428-41.
Wittink DR, Bayer LR. The measurement imperative. Mark Res 1994;6(4):14-22.
Yavas U, Shemwell DJ. Modified importance-performance analysis: an application to hospitals. Int J Health Care Qual Assur 2001;14(3):104-10.
Zhang HQ, Chow I. Application of importance-performance model in tour guides' performance: evidence from mainland Chinese outbound visitors in Hong Kong. Tour Manag 2004;25:81-91.