
Journal of Business Research 60 (2007) 115–121

Importance values for Importance–Performance Analysis: A formula for spreading out values derived from preference rankings
Javier Abalo a,*, Jesús Varela a, Vicente Manzano b
a University of Santiago de Compostela, Spain
b University of Sevilla, Spain

Abstract

Importance–Performance Analysis (IPA) is a simple and useful technique for identifying those attributes of a product or service that are most in need of improvement, or that are candidates for possible cost saving without significant detriment to overall quality. To this end, a two-dimensional IPA grid displays the results of the evaluation of the importance and performance of each relevant attribute. This paper shows that ordinal preferences are better than metric measures for the importance dimension and proposes a formula to transform the ordinal measure into a new metric scale adapted to the IPA grid. This formula makes allowance for the total number of features considered, the number of rankings, and the reported orders of preference.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Importance–Performance Analysis; Importance measurement; Ordinal data; Multiattribute evaluation

1. Introduction

Since the 1950s, the majority of both decision-making (Churchman and Ackoff, 1954; Edwards, 1954; Tversky, 1969) and attitude models (Fishbein, 1963; Fishbein and Ajzen, 1975; Rosenberg, 1956) have included stimuli among the key attributes for study in order to obtain an overall evaluation of these topics. This approach is implicit in conjoint measurement techniques (Green and Rao, 1971; Luce and Tukey, 1964; Varela and Braña, 1996) and is explicit in complex models of consumer satisfaction (LaTour and Peat, 1979; Oliver, 1993; Spreng et al., 1996; Tse and Wilton, 1988; Wilkie and Pessemier, 1973) and perceived service quality (Cronin and Taylor, 1992, 1994; Parasuraman et al., 1985, 1988).

Developing this idea, Martilla and James (1977) devised Importance–Performance Analysis (IPA) as a simple graphical tool to further the development of effective marketing strategies based on judgments of the importance and performance of each attribute. The key objective of IPA is diagnostic in nature: this technique aims to facilitate identification of attributes for which, given their importance, the product or service underperforms or overperforms. To this end, the importance measure represents the vertical axis, and the performance measure constitutes the horizontal axis, of a two-dimensional graph. These two axes divide the IPA grid into four quadrants, in which every attribute appears according to its mean rating on the importance and performance scales. In the original version of IPA, the appearance of an attribute in the top left quadrant of the grid is indicative of underperformance, and its appearance in the bottom right quadrant is indicative of overperformance. Product or service improvement efforts should focus on attributes in the former situation, while attributes in the latter situation are candidates for possible cost-cutting strategies (Fig. 1). Therefore, Importance–Performance Analysis provides a useful and easily understandable guide for identifying the most crucial product or service attributes in terms of their need for managerial action, as a means to develop successful marketing programs to achieve advantage over competitors.

* Corresponding author. Facultad de Psicología, Campus Sur s/n, 15782, Santiago de Compostela, Spain. Tel.: +34 981563100x13706; fax: +34 981528071.
E-mail addresses: mtjavi@usc.es (J. Abalo), mtsuso@usc.es (J. Varela), vmanzano@us.es (V. Manzano).

0148-2963/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.jbusres.2006.10.009

Originally devised with marketing uses in mind, the application of IPA extends to a wide range of fields, including health service provision (Abalo et al., 2006; Dolinsky, 1991;

Dolinsky and Caputo, 1991; Hawes and Rao, 1985; Yavas and Shemwell, 2001), education (Alberty and Mihalik, 1989; Ford et al., 1999; Nale et al., 2000; Ortinau et al., 1989), industry (Hansen and Bush, 1999; Matzler et al., 2004; Sampson and Showalter, 1999), internal marketing (Novatorov, 1997), service quality (Ennew et al., 1993; Matzler et al., 2003) and tourism (Duke and Persia, 1996; Evans and Chon, 1989; Hollenhorst et al., 1992; Hudson et al., 2004; Picón et al., 2001; Uysal et al., 1991; Zhang and Chow, 2004). However, difficulties in its application soon became apparent and have prompted IPA versions that generally differ from Martilla and James' original version in one, or both, of two ways: 1) the partition of the grid into areas with distinct significance for decision-makers; and 2) the measurement of importance and performance, the former especially.

Fig. 1. The original partition of the IPA grid into areas with distinct implications for product or service development (after Martilla and James, 1977).

2. Alternative IPA representations

Among the alternative partitions of the IPA grid (reviewed by Abalo et al., 2006; Bacon, 2003; or Oh, 2001), the most interesting are those that highlight the difference between importance and performance ratings by means of an upward diagonal line that represents points where ratings of importance and performance are exactly equal; this iso-rating line divides the graph into two large areas (Abalo et al., 2006; Bacon, 2003; Hawes and Rao, 1985; Nale et al., 2000; Picón et al., 2001; Sampson and Showalter, 1999; Slack, 1994). This paper follows Abalo et al. (2006) in using a partition that combines the quadrant and diagonal-based schemes, enlarging the top left quadrant of the original Martilla–James partition so that the new area occupies the whole of the zone above the diagonal on which importance is equal to performance (Fig. 2). With this partition, any attribute with an importance rating greater than the corresponding performance rating is a candidate for improvement efforts. Moreover, the greater the discrepancy between importance and performance for an attribute, the greater the need for remedial action, because of the assumption that a larger discrepancy causes greater dissatisfaction (Sethna, 1982). The interpretation of the areas below the diagonal is the same as in the original Martilla–James diagram.

Fig. 2. The partition of the IPA grid used in this paper (after Abalo et al., 2006).

3. Importance measures

The second major difficulty facing IPA, and the focus of this paper, is the measurement of importance and performance. In this regard, performance has generally been less controversial than importance: the usual measurement procedure has been to take the mean of the performance ratings obtained from an appropriate group of people by means of a metric or Likert scale. However, a variety of means exist to perform importance measurement. In particular, two quite different kinds of measure are common in IPA applications: 1) direct measures based on Likert-scale, k-point-scale or metric ratings obtained in the same way as for performance; and 2) indirect measures obtained from the performance ratings, either by multivariate regression of an overall product or service rating on the ratings given to the individual attributes (Danaher and Mattsson, 1994; Dolinsky, 1991; Neslin, 1981; Taylor, 1997; Wittink and Bayer, 1994) or by means of conjoint analysis techniques (Danaher, 1997; DeSarbo et al., 1994; Ostrom and Iacobucci, 1995).

Although a recent review of these methods (Bacon, 2003) supports earlier studies (e.g. Alpert, 1971; Heeler et al., 1979) in finding that direct measures capture the importance of attributes better than indirect measures, the former have serious drawbacks. In particular, Bacon (2003) himself emphasizes that direct importance assessment is often misleading because ratings are uniformly high. The main source of this problem is inherent to Martilla and James' procedure, in which the first step should be to identify the most salient attributes of a product or service by qualitative studies (focus groups and/or unstructured personal interviews) or by reviewing previous research. This procedure has a natural tendency to record high importance ratings on a metric or Likert scale for all the attributes selected for evaluation, with the result that they all crowd together at the top of the IPA grid. This defeats the purpose of the exercise, since the objective of the IPA diagram is not to act simply as a record of absolute importance and performance values, but to discriminate among attributes as

targets for improvement action. Other factors that several studies have pointed to as contributory causes of the crowding phenomenon include respondents' lack of involvement (Bacon, 2003) and their possible lack of expertise regarding the product or service assessed (Sambonmatsu et al., 2003). However, the main cause is the use of measures of absolute rather than relative (competitive) importance.

The development of indirect measures of importance helps to circumvent the above problems. Chief among these indirect measures are the standardized coefficients obtained by multivariate regression of the overall performance rating given to the product or service on the attributes' performance ratings. This is intrinsically a measure of relative importance, in that each regression coefficient depends on the data for all the attributes. It is, in fact, a more realistic conception of human reasoning and the psychology of choice (Edwards, 1954; Tversky and Kahneman, 1981). This method has the further advantage of reducing the demands on the rater's attention (since only the performance scale is administered, not importance), which may favor increased rater involvement. However, this approach also has at least one major weakness: the possibility of collinearity (Danaher, 1997; Bacon, 2003). Collinearity among the attribute performances, when used as predictors of overall performance, can make the precision of the regression coefficients so poor that they fail to discriminate reliably among the attributes (and can even take on negative values).

A second criticism of linear regression coefficients as importance measures is that the relationship between the overall performance of a product and its performance with respect to its individual attributes may well be nonlinear (Danaher, 1997). Specifically, some research reports that the negative effect of below-average attribute performance on the overall product rating is greater than the positive effect of above-average attribute performance (Mittal et al., 1998; Sethna, 1982). Generalizing the regression coefficient approach to this situation would mean fitting a response surface to the performance data and using partial derivatives as local measures of importance. Although in some situations this procedure might provide the decision-maker with useful information, it seems unlikely to be generally applicable.

Another major group of indirect importance measures comprises those derived by conjoint analysis. This approach has the advantage over multiple regression of using orthogonal designs, which excludes the possibility of collinearity between attribute partial utilities. Furthermore, the use of several levels for each attribute lessens the risk of difficulties due to nonlinear dependence of overall performance on attribute performances. However, conjoint analysis requires quite an onerous data collection process and becomes unfeasible with more than a very few attributes, both of which are especially serious drawbacks for the application of IPA to services.

Partial correlation and logistic regression methods have difficulties similar to those of the above approaches to importance measurement, or other equally negative consequences (Crompton and Duray, 1985; Danaher and Mattsson, 1994; Heeler et al., 1979; Jaccard et al., 1986). Moreover, the various measures proposed generally disagree with each other (Alpert, 1971; Bacon, 2003; Crompton and Duray, 1985; Heeler et al., 1979; Jaccard et al., 1986; Matzler et al., 2003; Oh, 2001), so the interpretation of IPA differs according to the chosen importance measure. Thus, the decision-maker has little way of knowing a priori which measure, if any, is the most reliable when incorporating IPA as a means of designing appropriate marketing strategies.

4. An alternative assessment of importance

The starting point for this paper is the idea that, for the purposes of IPA, asking raters to rank attributes in order of importance (with no ties allowed), rather than to assign them absolute importance values, offers the possibility of better measurement of importance. Clearly, this procedure avoids the problems that beset the indirect measures described above and, for each rater, avoids the problem of crowding, since ranking k attributes automatically spreads their rank scores evenly between 1 and k. Furthermore, as the evaluation task forces the rater to consider attributes jointly rather than one by one, this procedure has two great advantages: 1) it obtains a relative or competitive measure of importance for each attribute; and 2) it favors rater commitment and involvement in the evaluation task, especially when raters are asked to rank only their top k preferences among the s attributes under consideration, which reduces the risk of fatigue. However, before applying this method, the researcher must settle the question of how the rankings reported by multiple raters should be aggregated in the IPA grid.

If every rater ranks all the s attributes under consideration, with rank 1 being low and rank s high, the mean rank assigned to an attribute may be a reasonable measure of its importance. However, this first procedure has three major drawbacks: 1) asking for only the top k preferences can leave many attributes with low aggregate rank scores, resulting in crowding at the bottom of the IPA grid, an effect that increases with greater numbers of attributes; 2) this procedure omits the orders of preference reported by each rater; and 3) comparison with mean performance values would be difficult in both the quadrant and the diagonal approaches to IPA.

Previous authors have adopted a ranking approach in their IPA research (Matzler et al., 2003; Mersha and Adlakha, 1992; Sampson and Showalter, 1999), but they ignored the total number of attributes composing the evaluation task and/or the rankings given by raters, using as their measure of importance either the absolute frequency with which all raters reported each attribute or the mean rank of the ordered k attributes.

The measure proposed in this paper gets round these problems by subjecting appropriately scaled mean ranks to a transformation that depends on the proportion of attributes that the rater must rank. The remainder of the paper describes the scaling and transformation operations and the use of the new measure for importance–performance analysis of a primary health care service, and then compares the results of this analysis with those of an earlier study based on a direct measure of importance.
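The effect of the transformation developed in the remainder of the paper can be previewed with a small numeric sketch (the mean scores below are invented for illustration, not data from this study): when each rater scores only the top k = 3 of s = 9 attributes and the rest count as 0, mean scores pile up near the bottom of the [0,1] scale, and raising them to the power k/s spreads them out while preserving their order.

```python
# Hypothetical mean ranking scores for three attributes when raters
# score only their top 3 of 9 attributes (unranked attributes count as 0).
means = {"A": 0.40, "B": 0.10, "C": 0.02}  # raw means crowd near 0

k, s = 3, 9  # ranked preferences per rater, attributes in total
# Monotonic stretch: small means increase a lot, large means change little.
spread = {name: m ** (k / s) for name, m in means.items()}

for name, m in means.items():
    print(f"{name}: mean {m:.2f} -> transformed {spread[name]:.2f}")
```

The three values, initially squeezed into 0.02–0.40, end up spanning roughly 0.27–0.74, which is exactly the spreading-out behavior the proposed measure relies on.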

5. Method

5.1. Importance measurement

Suppose n raters presented with s attributes rank their top k preferences using natural numbers from 1 (most preferred) to k (least preferred), with no ties allowed. The problem is to use these rankings to assign each attribute i an importance value p_i lying in some specified interval that, for convenience, is here [0,1]; the value of p_i should increase with the importance of attribute i.

Denoting by g_ij the rank assigned to the i-th attribute by the j-th rater, the first step is to recode the g_ij as ranking scores h_ij that lie in the desired interval, increase (rather than decrease) with degree of preference, and assign the value 0 to all attributes not mentioned by rater j:

    h_ij = (k - g_ij + 1) / k    if g_ij is not void
         = 0                     otherwise                (1)

Table 1 lists h_ij for values of k and g_ij up to 9.

Table 1
Ranking scores h_ij for raw rankings g_ij of the rater's most preferred k attributes

    g_ij   k=2     k=3     k=4     k=5     k=6     k=7     k=8     k=9
    1      1       1       1       1       1       1       1       1
    2      0.500   0.667   0.750   0.800   0.833   0.857   0.875   0.889
    3              0.333   0.500   0.600   0.667   0.714   0.750   0.778
    4                      0.250   0.400   0.500   0.571   0.625   0.667
    5                              0.200   0.333   0.429   0.500   0.556
    6                                      0.167   0.286   0.375   0.444
    7                                              0.143   0.250   0.333
    8                                                      0.125   0.222
    9                                                              0.111

For the reasons noted at the end of the Introduction, the mean of the h_ij over raters, m_i, tends to crowd attributes together at the bottom of the importance scale, and this tendency is greater the smaller the proportion of attributes ranked by each rater, k/s. Thus this value is not attractive as a measure of importance. However, this difficulty can be overcome by subjecting the m_i to a monotonic transformation that increases small m_i while leaving large m_i relatively unaltered. In view of the relationship between the likelihood of crowding and k/s, a particularly suitable transformation of this kind is m_i -> (m_i)^(k/s). This procedure increases discrimination between attributes and augments the diagnostic and strategic properties of IPA. The transformed importance measure proposed in this paper is therefore

    P_i = ( (1/n) * SUM_j h_ij )^(k/s)                    (2)

6. Application: IPA of a primary health care service

6.1. Attributes

In a previous study of the public primary health care service in Galicia, Spain, Abalo et al. (2006) identified eight essential components of the service on the basis of the existing literature and focus groups with health care staff and patients. The present study includes a further factor excluded in the earlier study, health center opening hours, making nine in all (Table 2).

Table 2
The nine attributes for assessing the patient-perceived quality of primary health care in Galicia

    1. Medical staff
    2. Nursing staff
    3. Auxiliary staff attending the public
    4. Ease of making an appointment
    5. Waiting time in the health centre before entering the consulting room
    6. Opening hours
    7. State of the building housing the centre
    8. Signage inside the centre
    9. Equipment

6.2. Data collection procedure

The data for this research came from 1767 patients recruited at 75 primary health care centers from among all patients aged 18 years or older, respecting proportionality as regards age groups and sex. Eight trained interviewers collected the data over one month. For the evaluation of service performance, every rater scored all nine attributes from 0 (very bad) to 10 (very good), and the mean over raters provided the aggregate data. The aggregate importance values of the nine attributes, on the other hand, came from the new method proposed in this paper. To this end, raters ranked what they considered to be the three most important attributes in order of importance, from 1 (most important) to 3 (least important). To minimize the likelihood of bias due to the order of presentation (Ares, 2003; Tversky and Kahneman, 1982, 1983), each interviewer presented the nine attributes in a different order. Both the performance evaluation and the importance ranking questionnaires formed part of a larger questionnaire on the perceived quality of primary health care in Galicia.

7. Results and discussion

Table 3
Number of assignations of each attribute to each rank in this study

    Attribute               1st     2nd     3rd
    Medical staff           1044    320     157
    Nursing staff           38      367     199
    Auxiliary staff         17      73      162
    Ease of appointment     139     203     269
    Time in waiting room    124     186     209
    Opening hours           66      130     187
    State of building       44      99      165
    Signage                 6       29      48
    Equipment               289     360     371
    Total                   1767    1767    1767

Table 3 lists the number of ranks 1, 2 and 3 assigned by the raters to each attribute.
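As a concrete sketch of the computation, Eqs. (1) and (2) can be applied directly to the rank counts of Table 3 (n = 1767 raters, k = 3 ranked preferences, s = 9 attributes); the final multiplication by 10 reproduces the scaling used for the importance column of Table 4. The Python below is an illustration of the formulas, not the authors' own code.

```python
# Rank counts per attribute: how many raters assigned it rank 1, 2, 3 (Table 3).
counts = {
    "Medical staff":        (1044, 320, 157),
    "Nursing staff":        (38, 367, 199),
    "Auxiliary staff":      (17, 73, 162),
    "Ease of appointment":  (139, 203, 269),
    "Time in waiting room": (124, 186, 209),
    "Opening hours":        (66, 130, 187),
    "State of building":    (44, 99, 165),
    "Signage":              (6, 29, 48),
    "Equipment":            (289, 360, 371),
}
n, k, s = 1767, 3, 9  # raters, ranked preferences per rater, attributes

def importance(rank_counts, n, k, s):
    # Eq. (1): rank g in 1..k maps to score h = (k - g + 1) / k; unranked -> 0.
    h = [(k - g + 1) / k for g in range(1, k + 1)]
    # Mean score m_i over all n raters (unranked raters contribute 0).
    m = sum(c * hv for c, hv in zip(rank_counts, h)) / n
    # Eq. (2): stretch small means upward with exponent k/s.
    return m ** (k / s)

for name, c in counts.items():
    print(f"{name:22s} {10 * importance(c, n, k, s):.2f}")
```

Running this reproduces the importance column of Table 4, e.g. 9.05 for the medical staff and 2.86 for signage.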

Table 4
Aggregate performance and importance scores of each attribute in this study

    Attribute               Performance   Importance(a)   Discrepancy(b)
    Medical staff           8.58          9.05            -0.47
    Nursing staff           8.40          5.82            2.58
    Auxiliary staff         7.56          4.07            3.49
    Ease of appointment     6.80          5.91            0.89
    Time in waiting room    5.74          5.64            0.10
    Opening hours           7.17          4.95            2.22
    State of building       7.17          4.54            2.63
    Signage                 7.41          2.86            4.55
    Equipment               5.66          7.18            -1.52
    Mean                    7.16          5.56

    (a) Scaled up to occupy the same range as the performance values.
    (b) Performance value minus importance value.

Table 4 lists the aggregate importance and performance values of each attribute, together with the difference between the two; the aggregate importance is Eq. (2) multiplied by 10, so that it occupies the same range as the performance values. For comparison, Table 5 lists the performance and importance values obtained in the earlier study (Abalo et al., 2006), in which the importance measures of the eight attributes considered are the averages of ratings on a scale of 0–10 from 1601 subjects, obtained in the same way as the performance ratings.

Table 5
Aggregate performance and importance scores of each attribute obtained by Abalo et al. (2006) by direct rating

    Attribute               Performance   Importance   Standard error of importance
    Medical staff           8.46          8.91         .034
    Nursing staff           7.97          8.16         .044
    Auxiliary staff         7.14          7.34         .050
    Ease of appointment     6.87          8.88         .039
    Time in waiting room    5.85          8.32         .046
    State of building       7.27          8.78         .035
    Signage                 7.34          8.23         .043
    Equipment               6.96          9.00         .036
    Mean                    7.23          8.45

In the earlier study, aggregate importance values squeezed into a range of only 1.66 units (from 7.34 to 9.00). Although some groups of attributes differ significantly from others (because of the large sample and the consequently small standard errors; Table 5 and Fig. 3), the values within these groups lie too close together to allow effective discrimination among their attributes. By contrast, the importance measure used in the present study successfully spread the aggregate importance scores over a range of more than 6 units, from 2.86 to 9.05.

Fig. 3. Aggregate importance values of health service attributes according to Abalo et al. (2006). Bars show 95% confidence intervals, and frames surround values that do not differ significantly at the α = .05 level.

Fig. 4 presents the aggregate importance values obtained in the present study plotted against the corresponding aggregate performance values on an IPA grid of the type described in the Introduction. Fig. 5 shows the corresponding diagram for the earlier study (Abalo et al., 2006), where aggregate importance and performance both clustered around high values, crowding the attributes into the upper right corner of the IPA grid. In fact, since in all cases importance exceeds performance and all the attributes lie above the diagonal, a strict interpretation of the diagram would suggest that all the attributes are good candidates for improvement measures. This is somewhat counterintuitive, given that the average performance value is 7.23.

Fig. 4. IPA diagram of a primary health care service constructed using the performance and importance measures and data of this study.

The IPA grid obtained in the present study (Fig. 4) clearly discriminates better among attributes than that of the earlier study (Fig. 5). However, although its importance values show a good spread, its performance values, like those of the earlier study, crowd into a much narrower range (less than 3 units, from 5.66 to 8.58, in the present study). This is in keeping with the usual tendency observed in studies of user satisfaction with health services that have employed questionnaires soliciting quantitative ratings (Fitzpatrick, 1991; Hawes and Rao, 1985). In other words, direct performance ratings in the health service area in general, and in these two studies in particular, exhibit the same crowding tendency discussed in the Introduction in relation to direct importance ratings.
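Applying the diagonal partition of Section 2 to the aggregate values of Table 4 is mechanical: an attribute lies above the iso-rating line, and is therefore a candidate for improvement effort, when its rescaled importance exceeds its performance. A minimal sketch:

```python
# Aggregate (performance, rescaled importance) values from Table 4.
scores = {
    "Medical staff":        (8.58, 9.05),
    "Nursing staff":        (8.40, 5.82),
    "Auxiliary staff":      (7.56, 4.07),
    "Ease of appointment":  (6.80, 5.91),
    "Time in waiting room": (5.74, 5.64),
    "Opening hours":        (7.17, 4.95),
    "State of building":    (7.17, 4.54),
    "Signage":              (7.41, 2.86),
    "Equipment":            (5.66, 7.18),
}

# An attribute is above the iso-rating diagonal when importance > performance.
candidates = [name for name, (perf, imp) in scores.items() if imp > perf]
print(candidates)
```

With these data, only the medical staff and the equipment end up above the diagonal, in agreement with the discussion of Fig. 4 in the text.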

Fig. 5. IPA grid of a primary health care service constructed using the performance and importance measures and data of Abalo et al. (2006).

Likewise, this phenomenon extends to other fields such as the motor industry, in which the performance of a service recently rated from 1 to 10 with respect to six attributes achieved a spread of just 0.61 rating points between the best and the worst mean rating (Matzler et al., 2004).

The result of spreading out the importance values while the performance values remain clustered at the high end of the scale is that the great majority of attributes fall below the diagonal of the IPA diagram (that is, mean performance exceeds mean importance for each such attribute). Only attributes to which users assign very great importance (in this study, the medical staff and the equipment) can appear as candidates for improvement, which is often somewhat counterintuitive, especially when those attributes achieve the best mean performance values (e.g. medical staff).

Therefore, for the purposes of IPA, preference ranking and Eqs. (1) and (2) may often be useful for achieving a good spread among both performance values and importance values. Although the mean of the ratings given directly by users may well be a better measure of the performance of individual attributes in absolute terms, IPA requires the construction of a diagram that highlights the relative importance and relative performance of the attributes considered. To this end, the transformed-rankings approach described in this paper appears to be the most appropriate. Unfortunately, the requirements of the sponsor of the present study on primary health care services prevented application of the new approach to performance as well as importance, and the reconstruction of raters' preferences from their performance ratings was impossible, as ties among attributes are the rule rather than the exception.

8. Conclusion

This paper presents a new method for measuring the importance of product or service attributes for the purposes of IPA. Instead of aiming for exact and absolute quantification of raters' perceptions of the importance of the individual attributes, the new method, defined by Eqs. (1) and (2), summarizes their perception of the relative importance of what they consider to be the most important attributes. Moreover, this procedure spreads the attributes over the importance dimension of the IPA grid, thereby improving the readability and utility of this tool for successful marketing planning. The new method thus overcomes a major difficulty faced by IPA based on absolute rather than relative measures of importance, namely the tendency for all attributes to crowd into the upper part of the diagram because all are rated as highly important. Empirical results obtained by Abalo et al. (2006) using direct importance ratings illustrate this tendency, and the empirical results of the present study show that this problem is indeed surmountable with the new method. By forcing raters to select what they consider to be the most important attributes of the product, and to rank them by order of importance, the new method may also increase raters' involvement in the evaluation task. Previous attempts to base importance measures on preferences (Matzler et al., 2003; Mersha and Adlakha, 1992; Sampson and Showalter, 1999) have failed to achieve the desired value spread because they have not taken into account the proportion of ranked attributes and/or the reported orders of preference. Finally, in view of the empirical results of this study, in which performance values obtained by direct rating clustered at the high end of the performance dimension, the new method may successfully measure not only importance but also performance.

References

Abalo J, Varela J, Rial A. El análisis de Importancia–Valoración aplicado a la gestión de servicios. Psicothema 2006;18(4):730–7.
Alberty S, Mihalik B. The use of importance–performance analysis as an evaluative technique in adult education. Eval Rev 1989;13(1):33–44.
Alpert M. Identification of determinant attributes: a comparison of methods. J Mark Res 1971;8:184–91.
Ares VM. Un índice de proximidad para la expresión numérica del cambio en rangos. Metodol Encuestas 2003;5(2):11–35.
Bacon DR. A comparison of approaches to Importance–Performance Analysis. Int J Mark Res 2003;45(1):55–71.
Churchman CW, Ackoff RL. An approximate measure of value. J Oper Res Soc Am 1954;2(2):172–87.
Crompton JL, Duray NA. An investigation of the relative efficacy of four alternative approaches to Importance–Performance Analysis. J Acad Mark Sci 1985;13(4):69–80.
Cronin J, Taylor S. Measuring service quality: a reexamination and extension. J Mark 1992;56:55–68.
Cronin J, Taylor S. SERVPERF versus SERVQUAL: reconciling performance-based and perceptions-minus-expectations measurement of service quality. J Mark 1994;58:125–31.
Danaher PJ. Using conjoint analysis to determine the relative importance of service attributes measured in customer satisfaction surveys. J Retail 1997;73(2):235–60.
Danaher PJ, Mattsson J. Customer satisfaction during the service delivery process. Eur J Mark 1994;28(5):5–16.
DeSarbo WS, Huff L, Rolandelli MM, Chon J. On the measurement of perceived service quality. In: Rust RT, Oliver RL, editors. Service quality: new directions in theory and practice. London: Sage; 1994. p. 201–22.
Dolinsky AL. Considering the competition in strategy development: an extension of importance–performance analysis. J Health Care Mark 1991;11(1):3–16.

Dolinsky AL, Caputo RK. Adding a competitive dimension to importance–performance analysis: an application to traditional health care systems. Health Care Mark Quart 1991;8(3):61–79.
Duke CR, Persia MA. Performance–importance analysis of escorted tour evaluations. J Travel Tour Mark 1996;5(3):207–23.
Edwards W. The theory of decision making. Psychol Bull 1954;51:380–417.
Ennew CT, Reed GV, Binks MR. Importance–Performance Analysis and the measurement of service quality. Eur J Mark 1993;27(2):59–70.
Evans MR, Chon K. Formulating and evaluating tourism policy using importance–performance analysis. Hospitality Educ Res J 1989;13:203–13.
Fishbein M. An investigation of the relationship between beliefs about an object and attitude toward that object. Hum Relat 1963;16(3):233–9.
Fishbein M, Ajzen I. Belief, attitude, intention and behavior: an introduction to theory and research. Reading (MA): Addison-Wesley; 1975.
Fitzpatrick R. Surveys of patient satisfaction: important general considerations. Br Med J 1991;302:887–9.
Ford JB, Joseph M, Joseph B. Importance–performance analysis as a strategic tool for service marketers: the case of service quality perceptions of business students in New Zealand and the USA. J Serv Mark 1999;13(2):171–86.
Green PE, Rao VR. Conjoint measurement for quantifying judgmental data. J Mark Res 1971;8:355–63.
Hansen E, Bush RJ. Understanding customer quality requirements: model and application. Ind Mark Manag 1999;2:119–30.
Hawes JM, Rao CP. Using Importance–Performance Analysis to develop health care marketing strategies. J Health Care Mark 1985;5(4):19–25.
Heeler RM, Okechuku C, Reid S. Attribute importance: contrasting measurements. J Mark Res 1979;16:60–3.
Hollenhorst S, Olson D, Fortney R. Use of importance–performance analysis to evaluate state park cabins: the case of the West Virginia state park system. J Park Recreat Admin 1992;10(1):1–11.
Hudson S, Hudson P, Miller GA. The measurement of service quality in the tour operating sector: a methodological comparison. J Travel Res 2004;42(3):305–13.
Jaccard J, Brinberg D, Ackerman LJ. Assessing attribute importance: a comparison of six methods. J Consum Res 1986;12:463–8.
LaTour S, Peat N. Conceptual and methodological issues in consumer satisfaction research. In: Perrault Jr WD, editor. Advances in consumer
Novatorov EV. An importance–performance approach to evaluating internal marketing in a recreation centre. Manag Leis 1997;2:1–16.
Oh H. Revisiting importance–performance analysis. Tour Manag 2001;22:617–27.
Oliver RL. Cognitive, affective, and attribute bases of the satisfaction response. J Consum Res 1993;20:418–30.
Ortinau DJ, Bush A, Bush RP, Twible JL. The use of importance–performance analysis for improving the quality of marketing education: interpreting faculty-course evaluations. J Mark Ed 1989;11(2):78–86.
Ostrom A, Iacobucci D. Consumer trade-offs and the evaluation of services. J Mark 1995;59:17–28.
Parasuraman A, Zeithaml V, Berry L. A conceptual model of service quality and its implications for future research. J Mark 1985;49:41–50.
Parasuraman A, Zeithaml V, Berry L. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. J Retail 1988;64:12–40.
Picón E, Varela J, Rial A, García A. Evaluación de la satisfacción del consumidor mediante Análisis de la Importancia–Valoración: una aplicación a la evaluación de destinos turísticos. Santiago de Compostela: I Congreso Galego de Calidade; 2001.
Rosenberg MJ. Cognitive structures and attitudinal affect. J Abnorm Soc Psychol 1956;53(2):367–72.
Sambonmatsu DN, Kardes FR, Houghton DC, Ho EA, Posavac SS. Overestimating the importance of the given information in multiattribute consumer judgment. J Consum Psychol 2003;13(3):289–300.
Sampson SE, Showalter MJ. The performance–importance response function: observations and implications. Serv Ind J 1999;19(3):1–26.
Sethna BN. Extensions and testing of Importance–Performance Analysis. Bus Econ 1982;20:28–31.
Slack N. The importance–performance matrix as a determinant of improvement priority. Int J Oper Prod Manag 1994;14(5):59–75.
Spreng RA, Mackenzie SB, Olshavsky RW. A reexamination of the determinants of consumer satisfaction. J Mark 1996;60:15–32.
Taylor SA. Assessing regression-based importance weights for quality perceptions and satisfaction judgments in the presence of higher order and/or interaction effects. J Retail 1997;73(1):135–59.
Tse DK, Wilton PC. Models of consumer satisfaction formation: an extension. J Mark Res 1988;25:204–12.
Tversky A. Intransitivity of preferences. Psychol Rev 1969;76:31–48.
research, vol. 6. Ann Arbor (MI): Association for Consumer Research; 1979. Tversky A, Kahneman D. The framing of decisions and the psychology of
p. 4317. choice. Science 1981;211:4538.
Luce RD, Tuckey JW. Simultaneous conjoint measurement: a new type of Tversky A, Kahneman D. Judgements of and by representativeness. In:
fundamental measurement. J Math Psychol 1964;1:127. Kahneman D, Slovic P, Tversky A, editors. Judgement under uncertainty:
Matzler K, Sauerwein E, Heischmidt KA. ImportancePerformance Analysis Heuristics and biases. Cambridge: Cambridge University Press; 1982.
revisited: the role of the factor structure of customer satisfaction. Serv Ind J Tversky A, Kahneman D. Extensional versus intuitive reasoning: the
2003;23(2):11230. conjunction fallacy in probability judgement. Psychol Rev 1983;90
Matzler K, Bailom F, Hinterhuber HH, Renzl B, Pichler J. The asymmetric (4):239315.
relationship between attribute-level performance and overall customer Uysal M, Howard G, Jamrozy U. An application of importanceperformance
satisfaction: a reconsideration of the importanceperformance analysis. Ind analysis to a ski resort: a case study in North Carolina. Vis Leis Bus
Mark Manag 2004;33(4):2717. 1991;10:1625.
Martilla JA, James JC. ImportancePerformance Analysis. J Mark Varela J, Braa T. Anlisis conjunto aplicado a la investigacin comercial.
1977;41:779. Madrid: Eudema; 1996.
Mersha T, Adlakha V. Attributes of service quality: the consumers' perspective. Wilkie WL, Pessemier EA. Issues in marketing's use of multi-attribute attitude
Int J Serv Ind Manag 1992;3(3):3445. models. J Mark Res 1973;10:42841.
Mittal V, Ross WT, Baldasare PM. The asymmetric impact of negative and Wittink DR, Bayer LR. The measurement imperative. Mark Res 1994;6
positive attribute-level performance on overall satisfaction and repurchase (4):1422.
intentions. J Mark 1998;62:3347. Yavas U, Shemwell DJ. Modified importanceperformance analysis: an
Nale RD, Rauch DA, Wathen SA, Barr PB. An exploratory look at the use of application to hospitals. Int J Health Care Qual Assur 2001;14(3):104 10.
importanceperformance analysis as a curricular assessment tool in a school Zhang HQ, Chow I. Application of importanceperformance model in tour
of business. J Workplace Learn 2000;12(4):13945. guides' performance: evidence from mainland Chinese outbound visitors in
Neslin SA. Linking product features to perceptions: self-stated versus Hong Kong. Tour Manag 2004;25:8191.
statistically revealed importance weights. J Mark Res 1981;18:806.