
The current issue and full text archive of this journal is available at www.emeraldinsight.com/0960-4529.htm

Alternative measures of service quality: a review


Riadh Ladhari
Department of Business Administration, University of Moncton, Moncton, Canada
Abstract
Purpose – The purpose of this paper is to identify and discuss the key conceptual and empirical issues that should be considered in the development of alternative industry-specific measurement scales of service quality (other than SERVQUAL).
Design/methodology/approach – A total of 30 studies are selected from two well-known databases: Science Direct and ABI Inform. These studies are subjected to a comprehensive in-depth content analysis and a theoretical discussion of the key conceptual and empirical issues to be considered in the development of service-quality measurement instruments.
Findings – The study identifies deficiencies in some of the alternative service-quality measures; however, the identified deficiencies do not invalidate the essential usefulness of the scales. The study makes constructive suggestions for the development of future scales.
Originality/value – This is the first work to describe and contrast a large number of service-quality measurement models, other than the well-known SERVQUAL instrument. The findings are of value to academics and practitioners alike.
Keywords Customer services quality, Psychometric tests, SERVQUAL
Paper type General review


1. Introduction
A great deal of service-quality research in recent decades has been devoted to the development of measures of service quality. In particular, the SERVQUAL instrument (Parasuraman et al., 1988) has been widely applied and valued by academics and practicing managers (Buttle, 1996). However, several studies have identified potential difficulties with the use of SERVQUAL (Carman, 1990; Cronin and Taylor, 1992; Asubonteng et al., 1996; Buttle, 1996; Van Dyke et al., 1997; Llosa et al., 1998). These difficulties have related to the use of so-called difference scores, the ambiguity of the definition of consumer expectations, the stability of the SERVQUAL scale over time, and the dimensionality of the instrument. As a result of these criticisms, questions have been raised regarding the use of SERVQUAL as a generic measure of service quality and whether alternative industry-specific measures of service quality should be developed for specific service settings. Over the past 15 years or so, at least 30 industry-specific scales of service quality have been published in the literature on service quality including (among others) scales suggested by Saleh and Ryan (1991), Vandamme and Leunis (1993), Jabnoun and Khalifa (2005), Akbaba (2006), and Caro and Garcia (2007). However, no study has attempted to review and integrate this plethora of research on service-quality measurement. The present study addresses this gap in the literature. Its purpose is to explore some of the pertinent conceptual and empirical issues involved in the development of industry-specific measures of service quality.

Managing Service Quality, Vol. 18 No. 1, 2008, pp. 65-86. © Emerald Group Publishing Limited, 0960-4529. DOI 10.1108/09604520810842849



The remainder of this paper is organized as follows. Following this introduction, the next section provides a brief description of the SERVQUAL scale and the critiques that have been made of it. The paper then presents a summary of 30 alternative industry-specific measures of service quality and utilizes these to canvass several conceptual and empirical issues related to the development of such scales. A discussion of the findings of the review and suggestions for future research are then presented. Finally, the conclusions of the paper and managerial implications are noted.

2. A generic measure of service quality: the SERVQUAL scale
When the SERVQUAL scale was developed by Parasuraman et al. (1985, 1988), their aim was to provide a generic instrument for measuring service quality across a broad range of service categories. Relying on information from 12 focus groups of consumers, Parasuraman et al. (1985) reported that consumers evaluated service quality by comparing expectations (of service to be received) with perceptions (of service actually received) on ten dimensions: tangibles, reliability, responsiveness, communication, credibility, security, competence, understanding/knowing customers, courtesy, and access. In later work (Parasuraman et al., 1988), the authors reduced the original ten dimensions to five:
(1) tangibles (the appearance of physical facilities, equipment, and personnel);
(2) reliability (the ability to perform the promised service dependably and accurately);
(3) responsiveness (the willingness to help customers and provide prompt service);
(4) empathy (the provision of individual care and attention to customers); and
(5) assurance (the knowledge and courtesy of employees and their ability to inspire trust and confidence).
Each dimension is measured by four to five items (making a total of 22 items across the five dimensions).
Each of these 22 items is measured in two ways: (1) the expectations of customers concerning a service; and (2) the perceived level of service actually provided. In making these measurements, respondents are asked to indicate their degree of agreement with certain statements on a seven-point Likert-type scale (1 = strongly disagree to 7 = strongly agree). For each item, a so-called gap score (G) is then calculated as the difference between the raw perception-of-performance score (P) and the raw expectations score (E). The greater the gap score (calculated as G = P - E), the higher the score for perceived service quality.

SERVQUAL has been used to measure service quality in various service industries; these have included: the health sector (Carman, 1990; Headley and Miller, 1993; Lam, 1997; Kilbourne et al., 2004); banking (Lam, 2002; Zhou et al., 2002); fast food (Lee and Ulgado, 1997); telecommunications (Van der Wal et al., 2002); retail chains (Parasuraman et al., 1994); information systems (Jiang et al., 2000); and library services (Cook and Thompson, 2001). SERVQUAL has also been applied in various countries; these have included: the United States (Babakus and Boller, 1992; Pitt et al., 1995; Jiang et al., 2000; Kilbourne et al., 2004); China (Lam, 2002; Zhou et al., 2002); Australia (Baldwin and Sohal, 2003); Cyprus (Arasli et al., 2005); Hong Kong (Kettinger et al., 1995; Lam, 1997); Korea (Kettinger et al., 1995); South Africa (Pitt et al., 1995; Van der Wal et al., 2002); The Netherlands (Kettinger et al., 1995); and the UK (Pitt et al., 1995; Kilbourne et al., 2004).

Despite the widespread use of the SERVQUAL model to measure service quality, several theoretical and empirical criticisms of the scale have been raised. These can be summarised as follows:
- The concept and operationalisation of the gap score have been questioned. For example, it has been suggested that the notion of subtraction contained in the SERVQUAL model has no equivalent in theories of psychological function (Ekinci and Riley, 1998). The use of a gap score is said to be a poor choice as a measure of a psychological construct (Van Dyke et al., 1999) because there is little evidence that customers actually assess service quality in terms of perception-minus-expectations scores (Peter et al., 1993; Buttle, 1996; Ekinci and Riley, 1998). It has been contended that service quality is more accurately assessed by measuring only perceptions of quality (Cronin and Taylor, 1992). Moreover, the validity of the operationalisation of the gap score has been questioned because such scores are unlikely to be distinct from their component scores (Brown et al., 1993).
- The concept of expectations has been criticised for being loosely defined and open to multiple interpretations (Teas, 1993, 1994). According to this critique, expectations have been variously defined as desires, wants, what a service provider should offer, the level of service the customer hopes to receive, adequate service, normative expectations, and ideal standards. As a result, it is contended that the operationalisation of SERVQUAL is itself open to multiple interpretations.
- The validity of the items and dimensions of the SERVQUAL instrument has been questioned. It has been suggested that the factor-loading pattern in a number of studies (Carman, 1990; Parasuraman et al., 1991; Babakus and Boller, 1992; Headley and Miller, 1993; Engelland et al., 2000) indicates a weakness in terms of convergent validity because several of the SERVQUAL items had the highest loadings on different dimensions from those in Parasuraman et al. (1988).
- A number of researchers have suggested that different dimensions are more appropriate for expectations, perceptions, and gap scores. Suggestions have included: one dimension (Cronin and Taylor, 1992; Lam, 1997); two dimensions (Babakus and Boller, 1992; Gounaris, 2005); three dimensions (Chi Cui et al., 2003; Arasli et al., 2005; Najjar and Bishu, 2006); four dimensions (Kilbourne et al., 2004); six dimensions (Carman, 1990; Headley and Miller, 1993); seven dimensions (Walbridge and Delene, 1993); and nine dimensions (Carman, 1990). Moreover, other studies have reported a poor fit when tested against a five-factor model with confirmatory factor analysis (CFA) (Chi Cui et al., 2003; Badri et al., 2005).
- It has been contended that perception scores (as in the SERVPERF instrument) outperform gap scores in predicting overall evaluation of service (Cronin and Taylor, 1992; McAlexander et al., 1994).

Alternative measures of service quality 67

MSQ 18,1

68

- It has been argued that SERVQUAL focuses on the process of service delivery rather than the outcomes of service encounters (Grönroos, 1990; Richard and Allaway, 1993; Brady and Cronin, 2001).
- The fundamental model underlying SERVQUAL has been questioned. Several researchers have contended that service quality is an aggregation of various quality sub-dimensions and that service quality is therefore a multilevel construct (as well as being a multidimensional construct) (Dabholkar et al., 1996; Brady and Cronin, 2001; Wilkins et al., 2007).
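To make the gap-score arithmetic from Section 2 concrete, the following minimal sketch computes per-dimension SERVQUAL-style gap scores from hypothetical seven-point ratings. The responses are randomly generated for illustration, and the item-to-dimension grouping shown is an assumption for the sketch (the published instrument should be consulted for the exact item assignments):

```python
import numpy as np

# Hypothetical responses: each of the 22 items is rated twice on a 1-7
# Likert scale, once for expectations (E) and once for perceptions (P).
rng = np.random.default_rng(0)
n_respondents, n_items = 5, 22
expectations = rng.integers(1, 8, size=(n_respondents, n_items))
perceptions = rng.integers(1, 8, size=(n_respondents, n_items))

# Gap score per item: G = P - E. Higher (less negative) gaps indicate
# higher perceived service quality.
gaps = perceptions - expectations

# Illustrative grouping of the 22 items into the five dimensions
# (four to five items each, as the scale prescribes).
dimensions = {
    "tangibles": range(0, 4),
    "reliability": range(4, 9),
    "responsiveness": range(9, 13),
    "assurance": range(13, 17),
    "empathy": range(17, 22),
}

# Average the item gaps within each dimension, across all respondents.
dimension_scores = {
    name: gaps[:, list(idx)].mean() for name, idx in dimensions.items()
}
for name, score in dimension_scores.items():
    print(f"{name}: {score:+.2f}")
```

A perception-only (SERVPERF-style) score would simply average `perceptions` within each dimension and discard `expectations`.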

3. Industry-specific measures of service quality
In view of the problems outlined above, the applicability of a generic scale for measuring service quality in all settings has been questioned (Babakus and Boller, 1992; Van Dyke et al., 1997; Jabnoun and Khalifa, 2005; Akbaba, 2006; Caro and Garcia, 2007). Moreover, it has been argued that a simple adaptation of the SERVQUAL items is insufficient to measure service quality across a diversity of service industries (Carman, 1990; Babakus and Boller, 1992; Brown et al., 1993; Van Dyke et al., 1997). For example, Carman (1990) contended that certain dimensions required expansion by the inclusion of 13 additional items in the SERVQUAL instrument in order to capture service quality adequately across different services. It has also been contended that service quality is a simple unidimensional construct in some contexts, but a complex multidimensional construct in others (Babakus and Boller, 1992). For these reasons, it has been suggested that industry-specific measures of service quality might be more appropriate than a single generic scale (Babakus and Boller, 1992; Van Dyke et al., 1997; Caro and Garcia, 2007). Dabholkar et al. (1996, p. 14) summarized this view in the following terms:
. . . it appears that a [single] measure of service quality across industries is not feasible. Therefore, future research on service quality should involve the development of industry-specific measures of service quality.

As a consequence of these arguments, much of the emphasis in recent research has moved from attempts to adapt SERVQUAL to the development of alternative industry-specific measures. Table I summarizes 30 industry-specific measures of service quality taken from two databases: Science Direct and ABI Inform. The features of these measures are discussed below.

3.1 Service industries and countries
It is apparent from Table I that alternative scales have been developed to measure service quality in a variety of service industries. These have included (among others): restaurants (Stevens et al., 1995); retail banks (Aldlaigan and Buttle, 2002; Sureshchandar et al., 2002); career centers (Engelland et al., 2000); internet retail (Janda et al., 2002); hotels (Ekinci and Riley, 1998; Akbaba, 2006; Wilkins et al., 2007); hospitals (Sower et al., 2001); and higher education (Markovic, 2006). Moreover, the scales have been developed in various countries. These have included Turkey (Akbaba, 2006); Australia (Wilkins et al., 2007); Canada (Saleh and Ryan, 1991); Croatia (Markovic, 2006); India (Sureshchandar et al., 2002); the USA (Dabholkar et al., 1996); Korea (Kang and James, 2004); Hong Kong (Lam and Zhang, 1999); Belgium

Table I. Review of service-quality scales
(Fields per entry: study | service industry (country) | sample, questionnaire administration, and data analysis procedure | scale | dimensions (number of items) | reliability. EFA = exploratory factor analysis; CFA = confirmatory factor analysis.)

Knutson et al. (1990) | Lodging industry (USA) | 201 adults; telephone interviews; CFA | 26 items; expectations-only scores; seven-point Likert scale, strongly agree (7) to strongly disagree (1) | 5 dimensions: reliability (4 items), assurance (5), responsiveness (3), tangibles (6), empathy (8) | 0.63 to 0.80

Saleh and Ryan (1991) | Hospitality industry (Canada) | 200 hotel guests, 17 management staff; self-administered; EFA | 32 items (hotel guests) and 33 items (management staff); perception-minus-expectations scores; five-point Likert scale, highly satisfied (1) to highly dissatisfied (5) | 4 dimensions for hotel guests: tangibles and reliability (10), responsiveness (8), assurance (8), empathy (6); 5 dimensions for management staff: tangibles (7), reliability (3), responsiveness (8), assurance (8), empathy (7) | 0.74 to 0.93 (hotel guests); 0.63 to 0.80 (management staff)

Bouman and van der Wiele (1992) | Car service industry (The Netherlands) | 226 customers of car service firms; self-administered; EFA | 40 items; perception-minus-expectations scores; seven-point Likert scale, very unimportant (1) to very important (7) for expectations and definitely not appropriate (1) to definitely appropriate (7) for perceptions | 3 factors: customer kindness (19), tangibles (13), faith (8) | 0.76 to 0.92

Vandamme and Leunis (1993) | Health care sector (Belgium) | 70 patients; self-administered; EFA | 17 items; perception-minus-expectations scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 6 dimensions: tangibles (4), medical responsiveness (3), assurance I (3), assurance II (3), nursing staff (2), personal beliefs and values (2) | 0.58 to 0.82

Stevens et al. (1995) | Restaurant industry (USA) | 200 respondents for fine-dining, 198 for casual-dining, 198 for quick-service restaurants; telephone interviews; CFA | 29 items; expectations-only scores; seven-point Likert scale, strongly agree (7) to strongly disagree (1) | 5 dimensions: tangibles (10), reliability (5), responsiveness (3), assurance (6), empathy (5) | 0.89 to 0.92

Tomes and Ng (1995) | NHS trust hospital services (England) | 132 patients admitted to a large general hospital in the east of England; self-administered; EFA | 49 items; perception-minus-expectations scores (factor analysis based on expectations-only scores) | 7 dimensions: empathy (10), relationship of mutual respect (9), dignity (9), understanding of illness (5), religious needs (1), food (6), physical environment (9) | 0.64 to 0.92

Dabholkar et al. (1996) | Retail service quality (USA) | 227 shoppers (first study), 149 (cross-validation study); self-administered; CFA | 28 items; perception-only scores; five-point Likert scale, strongly disagree (1) to strongly agree (5) | 5 dimensions: physical aspects (6), reliability (5), personal interaction (9), problem solving (3), policy (5) | 0.85 to 0.92

Lam and Zhang (1999) | Travel agents (Hong Kong) | 209 users of travel agents; self-administered; EFA | 23 items; perception-minus-expectations scores; seven-point Likert scale, strongly agree (7) to strongly disagree (1) | 5 dimensions: responsiveness and assurance (6), reliability (5), empathy (4), resources and corporate image (5), tangibility (3) | 0.67 to 0.88

Mentzer et al. (1999) | Logistics service quality (USA) | 5,531 defense logistics agency users; self-administered; CFA | 25 items; perception-only scores; five-point Likert scale, strongly disagree (1) to strongly agree (5) | 9 dimensions: information quality (2), ordering procedures (2), ordering release quantities (3), timeliness (3), order accuracy (3), order quality (3), order condition (3), order discrepancy handling (3), personnel contact quality (3) | 0.73 to 0.89

Shemwell and Yavas (1999) | Hospital service quality (USA) | 218 respondents residing in different neighborhoods of an SMSA; self-administered; CFA | 14 items; perception-only scores; seven-point scale, poor (1) to outstanding (7) | 3 dimensions: search attributes (5), credence attributes (4), experience attributes (5) | 0.75 to 0.83

Engelland et al. (2000) | Career service centers on college campuses (USA) | 262 undergraduate college students (exploratory study), 237 (validation); self-administered; EFA and CFA | 17 items; perception-minus-expectations scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 5 dimensions: tangibles (4), reliability (4), responsiveness (3), assurance (3), empathy (3) | 0.76 to 0.89

Frochot and Hughes (2000) | Historic houses (England and Scotland) | 790 visitors (final survey); self-administered; EFA | 24 items; perception-only scores; five-point Likert scale, strongly agree (5) to strongly disagree (1) | 5 dimensions: responsiveness (8), tangibles (7), communications (4), consumables (3), empathy (2) | 0.70 to 0.83

Cook and Thompson (2001) | Library service (USA) | 4,407 participants; web-based administration; EFA | 34 items; perception-only scores; nine-point scale, low (1) to high (9), plus an unnumbered graphic rating scale | 4 dimensions: service (11), library as place (9), access to collections (7), reliability (7) | 0.80 to 0.94

Sower et al. (2001) | Hospital service quality (USA) | 663 recently discharged patients; EFA | 75 items; perception-only scores; seven-point Likert scale, strongly agree (7) to strongly disagree (1) | 8 dimensions: respect and caring (26), effectiveness and continuity (15), appropriateness (15), information (7), efficiency (5), effectiveness-meals (5), first impression (1), staff diversity (1) | 0.87 to 0.98

Vaughan and Shiu (2001) | Voluntary sector (Scotland) | 72 disabled service users and parent/carer group members; self-administered; EFA and correlation matrix analysis | 27 items; perception scores and expectations scores | 10 dimensions: access (3), responsiveness (4), communication (4), humaneness (4), security (2), enabling/empowerment (2), competence (3), reliability (3), equity (1), tangibles (1) | not reported

Aldlaigan and Buttle (2002) | Banking (UK) | 975 bank customers; mail survey; EFA | 21 items; perception-only scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 4 dimensions: service system quality (11), behavioural service quality (5), machine service quality (2), service transactional accuracy (3) | 0.80 to 0.93 (total sample)

Janda et al. (2002) | Internet retail service quality (USA) | 446 respondents who had made at least one internet purchase within the last six months; administered by interviewers; CFA | 22 items; perception-only scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 5 dimensions: performance (6), access (4), security (4), sensation (4), information (4) | 0.61 to 0.83

Sureshchandar et al. (2002) | Banking (India) | 277 bank customers; self-administered; CFA | 41 items; perception-only scores; seven-point scale, very poor (1) to very good (7) | 5 dimensions: core service or service product (5), human element of service delivery (17), systematization of service delivery (6), tangibles of service (6), social responsibility (7) | 0.82 to 0.96

Getty and Getty (2003) | Lodging industry (USA) | 229 frequent-traveler business owners; mail survey; EFA | 26 items; perception-only scores; four-point scale, low (1) to high (4) | 5 dimensions: tangibility (8), reliability (4), responsiveness (5), confidence (5), communication (4) | high reliability (no detailed information)

Khan (2003) | Ecotourism | 324 ecotourists who had taken an ecotrip in the past 18 months; mail survey; EFA | 29 items; expectations-only scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 6 dimensions: ecotangibles (3), assurance (5), reliability (5), responsiveness (4), empathy (4), tangibles (8) | 0.86 to 0.98

Wolfinbarger and Gilly (2003) | Online e-tail quality (USA) | 1,013 internet users; web-based administration using an online panel; EFA and CFA | 14 items; perception-minus-expectations scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 4 dimensions: web site design (5), fulfillment/reliability (3), security/privacy (3), customer service (3) | 0.79 to 0.88

Yoon and Suh (2004) | Consulting service (Korea) | 86 respondents from IT consulting sites; self-administered; EFA | 36 items; perception-only scores; seven-point Likert scale, strongly disagree (1) to strongly agree (7) | 6 dimensions: assurance (4), responsiveness (3), reliability (12), empathy (4), process (9), education (4) | 0.87 to 0.95

Gounaris (2005) | Business-to-business service (Greece) | 515 senior managers; mail survey; CFA | 22 items; perception-only scores; seven-point Likert scale, entirely disagree (1) to entirely agree (7) | 4 dimensions: potential quality (6), hard process quality (5), soft process quality (6), output (5) | 0.79 to 0.88

Jabnoun and Khalifa (2005) | Banking (United Arab Emirates) | 115 customers of Islamic banks and 115 customers of conventional banks; self-administered; EFA | 29 items; perception-only scores | 4 dimensions: personal skills (12), reliability (5), image (6), value (6) | 0.85 to 0.94

Karatepe et al. (2005) | Bank service (Cyprus) | 1,220 customers; self-administered; EFA and CFA | 20 items; perception-only scores; five-point Likert scale, strongly agree (5) to strongly disagree (1) | 4 dimensions: service environment (4), interaction quality (7), empathy (5), reliability (4) | 0.81 to 0.92

Parasuraman et al. (2005) | Electronic service quality (internet users; country not identified) | 549 subjects (development stage), 858 customers (validation stage); web-based administration; EFA and CFA | 22 items; perception-only scores; five-point Likert scale, strongly disagree (1) to strongly agree (5) | 4 dimensions: efficiency (8), system availability (4), fulfillment (7), privacy (3) | 0.83 to 0.94

Akbaba (2006) | Business hotel industry (Turkey) | 234 hotel guests; self-administered; EFA | 25 items; perception-minus-expectations scores | 5 dimensions: tangibles (6), adequacy in service supply (7), understanding and caring (5), assurance (4), convenience (3) | 0.71 to 0.86

Markovic (2006) | Higher education service (Croatia) | 444 graduate students; self-administered; EFA | 26 items; expectations-only scores; five-point Likert scale, strongly believe that the statement is wrong (1) to strongly believe that the statement is not wrong (5) | 7 dimensions: reliability (6), students in scientific work (4), empathy (4), assurance (3), e-learning (3), responsiveness (3), tangibles (3) | 0.53 to 0.78

Caro and Garcia (2007) | Urgent transport service (Spain) | 375 subjects; self-administered; EFA and CFA | 36 items; perception-only scores; five-point Likert scale, strongly disagree (1) to strongly agree (5) | 4 dimensions: personal interaction (3 sub-dimensions, 14 items), design (2 sub-dimensions, 7 items), physical environment (2 sub-dimensions, 7 items), outcome (2 sub-dimensions, 8 items) | 0.74 to 0.96

Wilkins et al. (2007) | Hospitality service (Australia) | 664 hotel guests; self-administered; EFA and CFA | 30 items; perception-only scores | 3 dimensions: physical product (3 sub-dimensions, 13 items), service experience (3 sub-dimensions, 13 items), quality food and beverage (4 items) | 0.72 to 0.90
(Vandamme and Leunis, 1993); the United Arab Emirates (Jabnoun and Khalifa, 2005); and Spain (Caro and Garcia, 2007).

3.2 Dimensional structure
All scales in Table I are multi-dimensional. However, the number of dimensions varies from a minimum of two (Ekinci and Riley, 1998) to a maximum of ten (Vaughan and Shiu, 2001). It is apparent that the number of dimensions varied according to the service context and the country. For example, the factor structure for the lodging industry in Australia (Wilkins et al., 2007) was somewhat different from that in North America (Knutson et al., 1990; Saleh and Ryan, 1991; Getty and Getty, 2003). Moreover, the factor structure varied within a given country. For example, the factor structure for the lodging industry in North America varied from five dimensions (Knutson et al., 1990; Getty and Getty, 2003) to four (Saleh and Ryan, 1991). Despite this variation, it is apparent that the five dimensions of SERVQUAL were, for the most part, retained in the scales examined in this review. For example, the dimension of tangibles (the appearance of physical facilities, equipment, and personnel) was retained in most of the scales (for example, Knutson et al., 1990; Saleh and Ryan, 1991; Bouman and van der Wiele, 1992; Dabholkar et al., 1996; Lam and Zhang, 1999; Engelland et al., 2000; Frochot and Hughes, 2000; Sureshchandar et al., 2002; Getty and Getty, 2003; Khan, 2003; Akbaba, 2006; Markovic, 2006). Similarly, the empathy dimension (the provision of individual care and attention to customers) was retained in numerous studies (for example, Knutson et al., 1990; Tomes and Ng, 1995; Lam and Zhang, 1999; Engelland et al., 2000; Khan, 2003; Yoon and Suh, 2004; Karatepe et al., 2005; Markovic, 2006). Similar observations apply to the other SERVQUAL dimensions. However, new dimensions were added to account for industry-specific characteristics. For example, Janda et al. (2002) added security as a specific dimension of service quality required in the internet retail industry.

3.3 Gap scores versus perception scores
Three measurement methods were found in the scales reviewed in Table I:
- performance-only scores (for example, Dabholkar et al., 1996; Ekinci and Riley, 1998; Frochot and Hughes, 2000; Janda et al., 2002; Getty and Getty, 2003; Caro and Garcia, 2007; Wilkins et al., 2007);
- expectations-only scores (for example, Knutson et al., 1990; Khan, 2003; Markovic, 2006); and
- perception-minus-expectations scores (for example, Engelland et al., 2000; Wolfinbarger and Gilly, 2003).
It is apparent that, despite the practical difficulties in obtaining information on customer expectations, many studies continue to use a gap model. It would seem that such models facilitate the identification of strengths and weaknesses in specific quality attributes.

3.4 Technical dimension versus functional dimension
According to the two-dimensional model of Grönroos (1984), service quality consists of:


(1) technical (outcome) quality (which refers to the outcome of the service performance); and
(2) functional (process) quality (which refers to the manner in which the service is delivered).
The SERVQUAL model is based on functional quality (the delivery process) rather than technical quality (the outcome of the service encounter). Most of the studies in the present review focused on the functional quality of the service-delivery process (for example, Stevens et al., 1995; Engelland et al., 2000; Frochot and Hughes, 2000; Getty and Getty, 2003; Yoon and Suh, 2004; Markovic, 2006). Only a limited number of studies incorporated the technical (outcome) dimension (for example, Vaughan and Shiu, 2001; Aldlaigan and Buttle, 2002; Gounaris, 2005; Caro and Garcia, 2007).

3.5 Number of items
The number of items in the present review varied from 14 (Shemwell and Yavas, 1999) to 75 (Sower et al., 2001) according to the industry context. For example, Sureshchandar et al. (2002) used 41 items for the banking industry, Vaughan and Shiu (2001) used 27 items in the voluntary service sector, Yoon and Suh (2004) used 36 items in the consulting service industry, Bouman and van der Wiele (1992) used 40 items in the car service industry, Markovic (2006) used 26 items in the higher education industry, and Akbaba (2006) used 25 items in the business hotel industry. To determine the number of items, most researchers generated an initial pool of scale statements from a review of the literature. This initial pool was then refined through:
- focus groups (for example, Mentzer et al., 1999; Sower et al., 2001; Vaughan and Shiu, 2001; Aldlaigan and Buttle, 2002; Khan, 2003; Wilkins et al., 2007); and/or
- individual interviews with providers or users (for example, Aldlaigan and Buttle, 2002; Janda et al., 2002; Getty and Getty, 2003; Karatepe et al., 2005; Caro and Garcia, 2007).
It is also worthy of note that, in some cases, SERVQUAL was utilised as a starting-point for the development of the item pool (for example, Dabholkar et al., 1996; Frochot and Hughes, 2000; Sureshchandar et al., 2002) or as the fundamental structure for new instruments (for example, Engelland et al., 2000; Khan, 2003; Markovic, 2006).

3.6 Sample sizes
Sample sizes in the studies reviewed in Table I varied from 70 (Vandamme and Leunis, 1993) to 5,531 (Mentzer et al., 1999) service users. Only three studies had sample sizes of more than 1,000: 1,013 internet users (Wolfinbarger and Gilly, 2003), 1,220 customers (Karatepe et al., 2005), and 5,531 defence logistics agency users (Mentzer et al., 1999). Three studies had sample sizes of fewer than 100 respondents/users and 14 studies had sample sizes of fewer than 250 respondents/users. Several studies did not provide details of their samples.

3.7 Analysis method
A total of 16 studies used only exploratory factor analysis (EFA) to assess their dimensional structure and items. Eight studies used confirmatory factor analysis

76

(CFA). Only six studies used a combination of these techniques (Engelland et al., 2000; Wolnbarger and Gilly, 2003; Karatepe et al., 2005; Parasuraman et al., 2005; Caro and Garcia, 2007; Wilkins et al., 2007). Item-to-total correlation analysis (that is, correlation between the score on an item and the sum of the scores of all other items constituting a single factor) was the most commonly used methodology to decide which items to retain and which to discard. In several studies, all items were discarded that scored less than ^ 0.40 on the item-to-total correlation (for example, Wolnbarger and Gilly, 2003) or ^ 0.30 on the item-to-total correlation (for example, Janda et al., 2002; Aldlaigan and Buttle, 2002). Other studies used loading scores as a basis for item exclusion. For example, some studies excluded items with factor loadings less than ^ 0.40 (for example, Engelland et al., 2000; Sower et al., 2001; Janda et al., 2002; Jabnoun and Khalifa, 2005; Caro and Garcia, 2007), others excluded items with factor loadings less than ^ 0.45 (for example, Markovic, 2006), and others excluded items with factor loadings less than ^ 0.50 (for example, Lam and Zhang, 1999; Wolnbarger and Gilly, 2003; Karatepe et al., 2005). In some studies, items with cross-loadings greater than ^ 0.40 were discarded (for example, Janda et al., 2002). 3.8 Reliability and validity Cronbachs alpha was the most commonly used measure of scale reliability (that is, the internal homogeneity of a set of items composing a scale). Most scales in the present review exhibited good reliability (that is, Cronbachs alphas greater than 0.60). For example, Frochot and Hughes (2000) used ve dimensions with reliability coefcients ranging from 0.70 to 0.83, Akbaba (2006) used ve-dimensions with reliability coefcients ranging from 0.71 to 0.86, and Khan (2003) used six dimensions ranging from 0.86 to 0.98. 
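The item-retention and reliability statistics described above can be sketched in a few lines. The response matrix below is a hypothetical illustration (not data from any of the reviewed studies), and the 0.40 retention cut-off follows the practice reported for Wolfinbarger and Gilly (2003):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of all OTHER items in the scale."""
    totals = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Hypothetical data: six respondents rating four items on a 7-point scale;
# the fourth item deliberately runs against the other three.
responses = np.array([[7, 6, 7, 2],
                      [5, 5, 6, 3],
                      [3, 2, 3, 6],
                      [6, 6, 5, 1],
                      [2, 3, 2, 7],
                      [4, 4, 5, 4]], dtype=float)

alpha = cronbach_alpha(responses)
item_total = corrected_item_total(responses)
retained = item_total >= 0.40   # the misfitting fourth item fails the cut-off
```

Dropping the item that fails the cut-off and recomputing alpha on the remaining items shows the usual pattern: reliability improves when a poorly correlated item is removed.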
To assess convergent validity (that is, the extent to which a set of items that is assumed to represent a construct does in fact converge on the same construct), most studies calculated the average variance extracted (AVE) by each dimension (with an AVE of greater than 0.5 being said to support convergent validity). Examples in the present review included Gounaris (2005) and Caro and Garcia (2007). Some researchers considered the fact that all the items loaded highly on the factor to which they were assigned as further evidence of convergent validity (for example, Dabholkar et al., 1996; Caro and Garcia, 2007).

To establish discriminant validity (that is, the extent to which measures of theoretically unrelated constructs do not correlate with one another), several researchers used CFA and compared the AVE for each factor with the variance shared by the remaining factors (for example, Wolfinbarger and Gilly, 2003; Gounaris, 2005; Caro and Garcia, 2007). Two dimensions were confirmed as being distinct from each other if the AVE estimates were greater than the shared-variance estimates. In other studies, discriminant validity was demonstrated by simply showing that the scale did not correlate strongly with other measures from which it was supposed to differ (for example, Sureshchandar et al., 2002).

To demonstrate predictive validity (that is, the extent to which the scores of one construct are empirically related to the scores of other conceptually related constructs), some researchers correlated their service-quality dimensions with overall quality (for example, Sureshchandar et al., 2002; Wolfinbarger and Gilly, 2003; Gounaris, 2005; Jabnoun and Khalifa, 2005; Parasuraman et al., 2005). Others correlated their service-quality dimensions with other constructs, including: satisfaction (for example, Lam and Zhang, 1999; Janda et al., 2002; Wolfinbarger and Gilly, 2003; Gounaris, 2005); word-of-mouth (for example, Dabholkar et al., 1996; Janda et al., 2002); and loyalty (for example, Janda et al., 2002; Sureshchandar et al., 2002; Wolfinbarger and Gilly, 2003).

Only a few studies tested and supported all three types of validity (convergent, discriminant, and predictive). These included: Dabholkar et al. (1996); Aldlaigan and Buttle (2002); Janda et al. (2002); Sureshchandar et al. (2002); Wolfinbarger and Gilly (2003); Gounaris (2005); Karatepe et al. (2005); and Parasuraman et al. (2005). In some of the studies, the three types of validity were not evaluated or even discussed. It is apparent that numerous scales in the present review suffered from incomplete evidence of validity. In addition, the methodological assessments of the new alternative instruments were not clearly presented in several studies.

4. Discussion and suggestions for future research
This review has documented a variety of industry-specific measurement scales proposed in the service-quality literature since the publication of the SERVQUAL model in 1988. It is apparent that there is ongoing debate about several aspects of such scales. These include:
. the dimensionality of service quality;
. the hierarchical structure of service quality;
. the relationship of culture to perceptions of service quality;
. comparisons between alternative scales and SERVQUAL;
. the validity of service-quality scales; and
. the statistical analyses used.
These aspects are discussed below, together with suggestions for future avenues of research.

4.1 Dimensionality of service quality
All of the 30 studies reviewed here posited service quality as a multidimensional construct. However, the number and nature of the dimensions varied, depending on the service context; indeed, they varied even within the same service industry.
It is apparent that the criteria used to evaluate service quality differ among customer groups and circumstances. For example, a businessperson staying in a given hotel has different service criteria from those of a tourist (Eccles and Durrand, 1997). Scholars should therefore describe the empirical context in which a particular scale was developed and the contexts in which it can be applied. In several cases reviewed in the present study, the authors did not explicitly identify the empirical context in which the scale was developed. Future studies should replicate these measures in different contexts to ascertain whether the number and nature of the dimensions are applicable in other settings.

4.2 Hierarchical structure of service quality
Several authors have suggested that service quality is a hierarchical construct consisting of various sub-dimensions (Dabholkar et al., 1996; Brady and Cronin, 2001; Gounaris, 2005; Caro and Garcia, 2007; Wilkins et al., 2007). However, despite this theoretical support for a multilevel, multidimensional model of service quality, few efforts have been made to provide empirical evidence for such a structure. Future research could extend scholarly understanding of service quality by undertaking empirical studies of hierarchical multidimensional conceptions of service quality in different settings.

4.3 Culture and service quality
Several researchers have suggested that there is a need to develop culturally specific measures of service quality (Winsted, 1997; Zhou et al., 2002; Raajpoot, 2004; Karatepe et al., 2005). As with other marketing constructs and measures, it has been contended that constructs of service quality that are developed in one culture might not be applicable in another (Kettinger et al., 1995; Karatepe et al., 2005). According to this view, the meanings, number, and relative importance of service-quality dimensions depend on the cultural and value orientations of customers, particularly with respect to cultural traditions of power distance and individualism/collectivism (Winsted, 1997; Espinoza, 1999; Mattila, 1999; Furrer et al., 2000; Karatepe et al., 2005; Glaveli et al., 2006). Further research in this area is desirable.

4.4 Comparisons between alternative scales and SERVQUAL
Although SERVQUAL has been criticised on theoretical grounds, only one scale in the present review (INDSERV) has been empirically shown to outperform SERVQUAL. It is apparent that rigorous empirical studies are needed to substantiate whether alternative scales really are superior to SERVQUAL. In particular, further studies are required to validate and refine the alternative scales. It should also be noted that the small sample sizes used in several of the studies proposing alternative scales were insufficient to permit a comprehensive psychometric assessment of the proposed scales.
There is also a need to compare the new scales with SERVQUAL with regard to their ability to predict constructs known to be related to service quality, such as overall service quality, satisfaction, word-of-mouth, and loyalty. Despite the widespread criticism of SERVQUAL, it is the contention of the present study that the scale continues to be the most useful model for measuring service quality. In addition, the methodological approach used by Parasuraman et al. (1985, 1988, 1991) in developing and refining SERVQUAL was more rigorous than those used by the authors of the alternative scales. Finally, it is interesting to note that there are many similarities between the dimensions used in SERVQUAL and those developed in alternative scales. This suggests that some service-quality dimensions are generic, whereas others are specific to particular industries and contexts.

4.5 Validity of service-quality scales
Although the measures of service quality reviewed in this study claimed to have exhibited good reliability, it is important to note that high alpha values can be indicative of deficiencies (rather than reliability) in a scale (Churchill, 1979; Smith, 1999). As Smith (1999) has noted, high alpha values can reflect poor design of the measurement instrument, poor scale content, or problems of data attenuation. It is thus critical to establish the validity (the extent to which an instrument measures what it is intended to measure) of any proposed measurement system.
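The AVE-based convergent and discriminant checks described in Section 3.8 (and used by, for example, Gounaris, 2005, and Wolfinbarger and Gilly, 2003) can be sketched as follows. The dimension names, standardized loadings, and factor correlation below are hypothetical illustrations, not estimates from any reviewed study; the discriminant test is the comparison of each factor's AVE with the variance shared between the factors:

```python
import numpy as np

# Hypothetical standardized CFA loadings for two service-quality dimensions
loadings = {
    "tangibles":   np.array([0.78, 0.81, 0.74, 0.69]),
    "reliability": np.array([0.83, 0.80, 0.77]),
}
factor_correlation = 0.62  # hypothetical estimated correlation between factors

# AVE = mean squared standardized loading of a dimension's items
ave = {dim: float((lam ** 2).mean()) for dim, lam in loadings.items()}

# Convergent validity: each dimension's AVE should exceed 0.50
convergent_ok = all(v > 0.50 for v in ave.values())

# Discriminant validity: each AVE should exceed the variance the two
# factors share (the squared factor correlation)
shared_variance = factor_correlation ** 2
discriminant_ok = all(v > shared_variance for v in ave.values())
```

With these illustrative numbers both checks pass; in practice the loadings and factor correlation would come from the fitted CFA solution.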


In this regard, the present review has revealed that validity analysis received far less attention than assessments of reliability; indeed, validity was apparently not examined at all in several of the studies described in this review. It is thus apparent that the development of new service-quality scales has suffered from an inadequate treatment of construct-measurement issues, as compared with the procedures recommended for the development of valid and reliable measures of marketing constructs (Churchill, 1979; Brown et al., 1993). Future research should provide an assessment of the convergent, discriminant, and nomological validity of proposed new scales. In addition, researchers should indicate under what conditions their scales are likely to be valid or invalid.

It is also of interest that the new industry-specific instruments have not been replicated. As a result, their psychometric properties can be questioned. Finally, although some studies did validate their proposed measurement scales, there remained concerns about generalizability. A generalization from a single study, no matter how large the sample, is always problematic. Future research is certainly needed to refine these scales.

4.6 Statistical analyses
From a methodological perspective, most researchers in the present review used EFA with varimax (orthogonal) rotation to reduce the items used in their constructs. However, numerous academic researchers have criticized the use of EFA, which is a data-driven method, for this purpose. Indeed, Kwok and Sharp (1998) described the use of EFA as nothing more than a "fishing expedition". EFA has a number of significant shortcomings. First, common factor analysis with varimax rotation assumes uncorrelated factors or traits; its application to data exhibiting correlated factors can produce:
. incorrect conclusions regarding the number of factors; and
. distorted factor loadings (Segars and Grover, 1993).
Second, because the solution obtained is only one of an infinite number of potential solutions, the estimates obtained for the factor loadings are not unique (Segars and Grover, 1993). Finally, given that items are assigned to the factors on which they load most significantly, it is possible for items to load on more than one factor; hence, the distinctiveness of the factors can be affected, and the researcher might lack any sound evidence or theoretical explanation on which to base an interpretation (Ahire et al., 1996; Sureshchandar et al., 2002). Given these limitations and the potential advantages of using CFA, a combination of EFA and CFA is desirable. These two approaches to data analysis can provide complementary perspectives on the data.

5. Managerial implications
This review should assist service managers to identify the dimensions of service quality that are appropriate to their particular service industries. Service managers can use these scales for qualitative and/or quantitative purposes.

In qualitative terms, knowledge of the components of service quality can assist service managers to identify the strengths and weaknesses of their own firms and to make comparisons with other firms in the same service industry. Managers can use focus groups of customers to obtain information about their expectations and about how well the firm performs on the dimensions identified in appropriate industry-specific scales. In addition, because the quality dimensions identified in the literature might not be exhaustive, managers should also conduct interviews with customers to ascertain what they perceive to be the key determinants in their evaluations of service quality. In conducting such focus groups and interviews, managers should be aware that expectations can vary across consumer segments; qualitative data should therefore be collected among different consumer segments. Moreover, the information received from consumers can be complemented with information obtained in discussions with employees, especially service-contact employees who have frequent direct interactions with consumers.

On a quantitative basis, service managers can use industry-specific scales to measure:
. customer expectations and perceptions of performance with respect to various dimensions and attributes, and thus identify strengths and weaknesses; and
. the importance weighting of each service-quality dimension and attribute.
In undertaking these quantitative assessments, service managers should be aware that it is inappropriate to measure expectations and perceptions simultaneously after the service is experienced; rather, customers should respond to the items on expectations before the service is experienced and to the items on perceptions after the service is experienced. The quantitative analysis should be used to correct weaknesses and to capitalize on strengths. However, managers should recognize that satisfying consumers is not necessarily sufficient to retain them; to ensure loyalty, customers should be delighted. Finally, the most obvious implication of the present study for managers is to recognize that each service context is unique.
Service providers should be careful in applying alternative scales to contexts that have few elements in common with the empirical contexts used in their development. In particular, economic and cultural factors should be taken into consideration when applying these scales to different contexts.


6. Conclusion
The measurement of service quality has received significant attention from scholars and practitioners in recent years. SERVQUAL (Parasuraman et al., 1985, 1988), which was designed to be a generic instrument applicable across a broad spectrum of services, has been extensively used, replicated, and criticised. The most important criticism of SERVQUAL has been doubt about its applicability in various specific industries. As a result, numerous studies in different service sectors have sought to develop industry-specific service-quality scales. This review, which has documented and described 30 such industry-specific scales, provides helpful direction to researchers and practitioners in developing and utilising new industry-specific instruments.


References
Ahire, S.L., Golhar, D.Y. and Waller, M.A. (1996), Development and validation of TQM implementation constructs, Decision Sciences, Vol. 27 No. 1, pp. 23-56.
Akbaba, A. (2006), Measuring service quality in the hotel industry: a study in a business hotel in Turkey, International Journal of Hospitality Management, Vol. 25 No. 2, pp. 170-92.
Aldlaigan, A.H. and Buttle, F.A. (2002), SYSTRA-SQ: a new measure of bank service quality, International Journal of Service Industry Management, Vol. 13 No. 4, pp. 362-81.
Arasli, H., Mehtap-Smadi, S. and Katircioglu, S.T. (2005), Customer service quality in the Greek Cypriot banking industry, Managing Service Quality, Vol. 15 No. 1, pp. 41-56.
Asubonteng, P., McCleary, K.J. and Swan, J.E. (1996), SERVQUAL revisited: a critical review of service quality, Journal of Services Marketing, Vol. 10 No. 6, pp. 62-81.
Babakus, E. and Boller, G.W. (1992), An empirical assessment of the SERVQUAL scale, Journal of Business Research, Vol. 24 No. 3, pp. 253-68.
Badri, M.A., Abdulla, M. and Al-Madani, A. (2005), Information technology center service quality: assessment and application of SERVQUAL, International Journal of Quality & Reliability Management, Vol. 22 Nos 8/9, pp. 819-48.
Baldwin, A. and Sohal, A. (2003), Service quality factors and outcomes in dental care, Managing Service Quality, Vol. 13 No. 1, pp. 207-16.
Bouman, M. and van der Wiele, T. (1992), Measuring service quality in the car service industry: building and testing an instrument, International Journal of Service Industry Management, Vol. 3 No. 4, pp. 4-16.
Brady, M. and Cronin, J. (2001), Some new thoughts on conceptualizing perceived service quality: a hierarchical approach, Journal of Marketing, Vol. 65 No. 3, pp. 34-49.
Brown, T.J., Churchill, G.A. and Peter, J.P. (1993), Research note: improving the measurement of service quality, Journal of Retailing, Vol. 69 No. 1, pp. 127-39.
Buttle, F.
(1996), SERVQUAL: review, critique, research agenda, European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Carman, J.M. (1990), Consumer perceptions of service quality: an assessment of the SERVQUAL dimensions, Journal of Retailing, Vol. 66 No. 1, pp. 33-55.
Caro, L.M. and Garcia, J.A.M. (2007), Measuring perceived service quality in urgent transport service, Journal of Retailing and Consumer Services, Vol. 14 No. 1, pp. 60-72.
Churchill, G.A. Jr (1979), A paradigm for developing better measures of marketing constructs, Journal of Marketing Research, Vol. 16 No. 1, pp. 64-73.
Chi Cui, C., Lewis, B.R. and Park, W. (2003), Service quality measurement in the banking sector in South Korea, International Journal of Bank Marketing, Vol. 21 Nos 4/5, pp. 191-201.
Cook, C. and Thompson, B. (2001), Psychometric properties of scores from the web-based LibQual study of perceptions of library service quality, Library Trends, Vol. 49 No. 4, pp. 585-604.
Cronin, J.J. and Taylor, S.A. (1992), Measuring service quality: a reexamination and extension, Journal of Marketing, Vol. 56, July, pp. 55-68.
Dabholkar, P., Thorpe, D.I. and Rentz, J.O. (1996), A measure of service quality for retail stores: scale development and validation, Journal of the Academy of Marketing Science, Vol. 24 No. 1, pp. 3-16.
Eccles, G. and Durrand, P. (1997), Improving service quality: lessons and practice from the hotel sector, Managing Service Quality, Vol. 7 No. 5, pp. 224-6.

Ekinci, Y. and Riley, M. (1998), A critique of the issues and theoretical assumptions in service quality measurement in the lodging industry: time to move the goal-posts, Hospitality Management, Vol. 17 No. 4, pp. 349-62.
Engelland, B.T., Workman, L. and Singh, M. (2000), Ensuring service quality for campus career services centers: a modified SERVQUAL scale, Journal of Marketing Education, Vol. 22 No. 3, pp. 236-45.
Espinoza, M.M. (1999), Assessing the cross-cultural applicability of a service quality measure: a comparative study between Quebec and Peru, International Journal of Service Industry Management, Vol. 10 No. 5, pp. 449-68.
Frochot, I. and Hughes, H. (2000), HISTOQUAL: the development of a historic houses assessment scale, Tourism Management, Vol. 21 No. 2, pp. 157-67.
Furrer, O., Liu, B.S.-C. and Sudharshan, D. (2000), The relationships between culture and service quality perceptions: basis for cross-cultural market segmentation and resource allocation, Journal of Service Research, Vol. 2 No. 4, pp. 355-71.
Getty, J.M. and Getty, R.L. (2003), Lodging quality index (LQI): assessing customers' perceptions of quality delivery, International Journal of Contemporary Hospitality Management, Vol. 15 No. 2, pp. 94-104.
Glaveli, N., Petridou, E., Liassides, C. and Sphatis, C. (2006), Bank service quality: evidence from five Balkan countries, Managing Service Quality, Vol. 16 No. 4, pp. 380-94.
Grönroos, C. (1984), A service quality model and its marketing implications, European Journal of Marketing, Vol. 18 No. 4, pp. 36-44.
Grönroos, C. (1990), Service Management and Marketing, Lexington Books, Lexington, MA.
Gounaris, S. (2005), Measuring service quality in b2b services: an evaluation of the SERVQUAL scale vis-à-vis the INDSERV scale, Journal of Services Marketing, Vol. 19 Nos 6/7, pp. 421-35.
Headley, D.E. and Miller, S.J. (1993), Measuring service quality and its relationship to future consumer behavior, Journal of Health Care Marketing, Vol. 13 No. 4, pp. 32-41.
Jabnoun, N. and Khalifa, A. (2005), A customized measure of service quality in the UAE, Managing Service Quality, Vol. 15 No. 4, pp. 374-88.
Janda, S., Trocchia, P.J. and Gwinner, K.P. (2002), Consumer perceptions of internet retail, International Journal of Service Industry Management, Vol. 13 No. 5, pp. 412-31.
Jiang, J.J., Klein, G. and Crampton, S.M. (2000), A note on SERVQUAL reliability and validity in information system service quality measurement, Decision Sciences, Vol. 31 No. 3, pp. 725-44.
Kang, G.-D. and James, J.J. (2004), Service quality dimensions: an examination of Grönroos's service quality model, Managing Service Quality, Vol. 14 No. 4, pp. 266-77.
Karatepe, O.M., Yavas, U. and Babakus, E. (2005), Measuring service quality of banks: scale development and validation, Journal of Retailing and Consumer Services, Vol. 12 No. 5, pp. 373-83.
Kettinger, W.L., Lee, C.C. and Lee, S. (1995), Global measures of information service quality: a cross-national study, Decision Sciences, Vol. 26 No. 5, pp. 569-88.
Khan, M. (2003), ECOSERV: ecotourists' quality expectations, Annals of Tourism Research, Vol. 30 No. 1, pp. 109-24.
Kilbourne, W.E., Duffy, J.A., Duffy, M. and Giarchi, G. (2004), The applicability of SERVQUAL in cross-national measurements of health-care quality, Journal of Services Marketing, Vol. 18 Nos 6/7, pp. 524-33.


Knutson, B.J., Stevens, P., Patton, M. and Thompson, C. (1990), Consumers' expectations for service quality in economy, mid-price and luxury hotels, Journal of Hospitality and Leisure Management, Vol. 1 No. 2, pp. 27-43.
Kwok, W.C.C. and Sharp, D.J. (1998), A review of construct measurement issues in behavioral accounting research, Journal of Accounting Literature, Vol. 17, pp. 137-74.
Lam, S.S.K. (1997), SERVQUAL: a tool for measuring patients' opinions of hospital service quality in Hong Kong, Total Quality Management, Vol. 8 No. 4, pp. 145-52.
Lam, T. and Zhang, H.Q. (1999), Service quality of travel agents: the case of travel agents in Hong Kong, Tourism Management, Vol. 20 No. 3, pp. 341-9.
Lam, T.K.P. (2002), Making sense of SERVQUAL's dimensions to the Chinese customers in Macau, Journal of Market-focused Management, Vol. 5 No. 10, pp. 43-58.
Lee, M. and Ulgado, F.M. (1997), Consumer evaluations of fast-food services: a cross-national comparison, Journal of Services Marketing, Vol. 11 No. 1, pp. 39-50.
Llosa, S., Chandon, J. and Orsingher, C. (1998), An empirical study of SERVQUAL's dimensionality, The Service Industries Journal, Vol. 18 No. 1, pp. 16-44.
McAlexander, J.H., Kaldenberg, D.O. and Koenig, H.F. (1994), Service quality measurement: examination of dental practices sheds more light on the relationships between service quality, satisfaction, and purchase intentions in a health care setting, Journal of Health Care Marketing, Vol. 14 No. 3, pp. 34-40.
Markovic, S. (2006), Expected service quality measurement in tourism higher education, Nase Gospodarstvo, Vol. 52 Nos 1/2, pp. 86-95.
Mattila, A.S. (1999), The role of culture in the service evaluation processes, Journal of Service Research, Vol. 1 No. 3, pp. 250-61.
Mentzer, J.T., Flint, D.J. and Kent, J.L. (1999), Developing a logistics service quality scale, Journal of Business Logistics, Vol. 20 No. 1, pp. 9-32.
Najjar, L. and Bishu, R.R. (2006), Service quality: a case study of a bank, Quality Management Journal, Vol. 13 No. 3, pp. 35-44.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1985), A conceptual model of service quality and its implications for future research, Journal of Marketing, Vol. 49 No. 4, pp. 41-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality, Journal of Retailing, Vol. 64 No. 1, pp. 12-40.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1991), Refinement and reassessment of the SERVQUAL scale, Journal of Retailing, Vol. 67 No. 4, pp. 420-50.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1994), Alternative scales for measuring service quality: a comparative assessment based on psychometric and diagnostic criteria, Journal of Retailing, Vol. 70 No. 3, pp. 201-30.
Parasuraman, A., Zeithaml, V.A. and Malhotra, A. (2005), E-S-QUAL: a multiple-item scale for assessing electronic service quality, Journal of Service Research, Vol. 7 No. 3, pp. 213-33.
Peter, J.P., Churchill, G.A. Jr and Brown, T.J. (1993), Caution in the use of difference scores in consumer research, Journal of Consumer Research, Vol. 19, March, pp. 655-62.
Pitt, L.F., Watson, R.T. and Kavan, C. (1995), Service quality: a measure of information systems effectiveness, MIS Quarterly, Vol. 19 No. 2, pp. 173-87.
Raajpoot, N. (2004), Reconceptualizing service encounter quality in a non-Western context, Journal of Service Research, Vol. 7 No. 2, pp. 181-201.

Richard, M.D. and Allaway, A.W. (1993), Service quality: insights and managerial implications from the frontier, in Rust, R.T. and Oliver, R.L. (Eds), Service Quality: New Directions in Theory and Practice, Sage Publications, Thousand Oaks, CA, pp. 1-19.
Saleh, F. and Ryan, C. (1991), Analysing service quality in the hospitality industry using the SERVQUAL model, Services Industries Journal, Vol. 11 No. 3, pp. 324-43.
Segars, A.H. and Grover, V. (1993), Re-examining perceived ease of use and usefulness: a confirmatory factor analysis, MIS Quarterly, Vol. 17 No. 4, pp. 517-24.
Shemwell, D.J. and Yavas, U. (1999), Measuring service quality in hospitals: scale development and managerial applications, Journal of Marketing Theory and Practice, Vol. 7 No. 3, pp. 65-75.
Smith, A.M. (1999), Some problems when adopting Churchill's paradigm for the development of service quality measurement scales, Journal of Business Research, Vol. 46 No. 2, pp. 109-20.
Stevens, P., Knutson, B. and Patton, M. (1995), DINESERV: a tool for measuring service quality in restaurants, Cornell Hotel and Restaurant Administration Quarterly, Vol. 36 No. 2, pp. 56-60.
Sower, V., Duffy, J.A., Kilbourne, W., Kohers, G. and Jones, P. (2001), The dimensions of service quality for hospitals: development and use of the KQCAH scale, Health Care Management Review, Vol. 26 No. 2, pp. 47-58.
Sureshchandar, G.S., Rajendran, C. and Anantharaman, R.N. (2002), Determinants of customer-perceived service quality: a confirmatory factor analysis approach, Journal of Services Marketing, Vol. 16 No. 1, pp. 9-34.
Teas, R.K. (1993), Expectations, performance evaluation, and consumers' perceptions of quality, Journal of Marketing, Vol. 57 No. 4, pp. 18-34.
Teas, R.K. (1994), Expectations as a comparison standard in measuring service quality: an assessment of a reassessment, Journal of Marketing, Vol. 58 No. 1, pp. 132-9.
Tomes, A.E. and Ng, S.C.P. (1995), Service quality in hospital care: the development of an in-patient questionnaire, International Journal of Health Care Quality Assurance, Vol. 8 No. 3, pp. 25-33.
Vandamme, R. and Leunis, J. (1993), Development of a multiple-item scale for measuring hospital service quality, International Journal of Service Industry Management, Vol. 4 No. 3, pp. 30-49.
Van der Wal, R.W.E., Pampallis, A. and Bond, C. (2002), Service quality in a cellular telecommunications company: a South African experience, Managing Service Quality, Vol. 12 No. 5, pp. 323-35.
Van Dyke, T.P., Kappelman, L.A. and Prybutok, V.R. (1997), Measuring information systems service quality: concerns on the use of the SERVQUAL questionnaire, MIS Quarterly, Vol. 21 No. 2, pp. 195-208.
Van Dyke, T.P., Prybutok, V.R. and Kappelman, L.A. (1999), Cautions on the use of the SERVQUAL measure to assess the quality of information systems services, Decision Sciences, Vol. 30 No. 3, pp. 877-91.
Vaughan, L. and Shiu, E. (2001), ARCHSECRET: a multi-item scale to measure service quality within the voluntary sector, International Journal of Nonprofit and Voluntary Sector Marketing, Vol. 6 No. 2, pp. 131-44.
Walbridge, S.W. and Delene, L.M. (1993), Measuring physician attitudes of service quality, Journal of Health Care Marketing, Vol. 13 No. 1, pp. 7-15.


Wilkins, H., Merrilees, B. and Herington, C. (2007), Toward an understanding of total service quality in hotels, International Journal of Hospitality Management, Vol. 26 No. 4, pp. 840-53.
Winsted, F.K. (1997), The service experience in two cultures: a behavioral perspective, Journal of Retailing, Vol. 73 No. 3, pp. 337-60.
Wolfinbarger, M. and Gilly, M.C. (2003), eTailQ: dimensionalizing, measuring and predicting etail quality, Journal of Retailing, Vol. 79 No. 3, pp. 183-98.
Yoon, S. and Suh, H. (2004), Ensuring IT consulting SERVQUAL and user satisfaction: a modified measurement tool, Information Systems Frontiers, Vol. 6 No. 4, pp. 341-51.
Zhou, L., Zhang, Y. and Xu, J. (2002), A critical assessment of SERVQUAL's applicability in the banking context of China, Asia Pacific, in Hunt, K. (Ed.), Advances in Consumer Research, Vol. 5, Association for Consumer Research, Valdosta, GA, pp. 14-21.

Further reading
Cronin, J.J., Brady, M.K. and Hult, G.T.M. (2000), Assessing the effects of quality, value, and customer satisfaction on consumer behavioral intentions in service environments, Journal of Retailing, Vol. 76 No. 2, pp. 193-218.
Duncan, E. and Elliott, G. (2002), Customer service quality and financial performance among Australian retail financial institutions, Journal of Financial Services Marketing, Vol. 7 No. 1, pp. 25-41.
Gounaris, S.P., Stathakopoulos, V. and Athanassopoulos, A.D. (2003), Antecedents to perceived service quality: an exploratory study in the banking industry, The International Journal of Bank Marketing, Vol. 21 Nos 4/5, pp. 168-90.
Malhotra, N.K., Ulgado, F.M., Agarwal, J.G. and Wu, L. (2005), Dimensions of service quality in developed and developing economies: multi-country cross-cultural comparisons, International Marketing Review, Vol. 22 No. 3, pp. 256-78.
Rust, R.T. and Zahorik, A.J. (1993), Customer satisfaction, customer retention and market share, Journal of Retailing, Vol. 69 No. 2, pp. 193-215.
About the author Riadh Ladhari is an Assistant Professor of Marketing at the Department of Business Administration, University of Moncton, Canada. His current research is centered on service quality and customer satisfaction. His work has been published in refereed journals such as Journal of Business Research and Psychology & Marketing. In addition, he has presented several papers at national and international conferences. Riadh Ladhari can be contacted at: riadh.ladhari@umoncton.ca
