

Some issues in conducting customer satisfaction surveys


Dr. Binshan Lin, Professor and Dr. Charlotte A. Jones, Professor
College of Business Administration, Louisiana State University, Shreveport, Shreveport, LA 71115, USA

Journal of Marketing Practice: Applied Marketing Science, Vol. 3 No. 1, 1997, pp. 4-13. MCB University Press, 1355-2538

Introduction
Customer satisfaction results from providing goods and services that meet or exceed customers' needs (Evans & Lindsay, 1996). Customer satisfaction is first and foremost the responsibility of each employee of an organization. Many of the claims for the advantages derivable from Total Quality Management (TQM) depend upon achieving a focus on customer satisfaction (Garvin, 1988; Lin & Fite, 1996). Meeting customers' needs, and thus assuring customer satisfaction, is ultimately the responsibility of management. A crucial part of that responsibility is to assure that quality measurements of customer satisfaction are obtained. One familiar manifestation of this concern with customer satisfaction as a test of market orientation has been the widespread advocacy of customer satisfaction surveys, leading to the capture and dissemination of such measurements as a basis for marketing control (Band, 1988). The increasing emergence of customer satisfaction surveys over the past decade has been a remarkable development. The automobile industry pioneered the use of customer satisfaction surveys and probably spends more money on them than any other industry. Studies at IBM showed that each percentage point of improvement in customer satisfaction translates into $50 million more revenue over five years (Kirkpatrick, 1992). One study estimates that customer satisfaction survey work alone accounts for $100 million in consulting and research revenues for major U.S. market research firms (Loro, 1992). Customer satisfaction surveys have been used to a great extent as the method to assess customer satisfaction. These surveys usually fulfill two needs. First, they provide valuable information that enables a company to compare the performance of one business unit, or several business units, across different time periods and locations (Jones & Sasser, 1995). Second, customer satisfaction surveys can be a rich source of information for generating continuous quality improvements, but only if they are examined carefully and used within a consistent framework. Customer satisfaction surveys are not without problems.

Common problems include a tendency to show a high level of satisfaction, a lack of standard satisfaction scales, and the proliferation and excessive use of surveys (Altany, 1993; Mehta, 1990); moreover, keeping customer satisfaction research effective is an ongoing analytical discipline (Adamson, 1992). Another weakness of customer satisfaction surveys is that an increasing number of customers are tired of being surveyed (Reichheld, 1996). In addition, many customer satisfaction surveys appear to be little more than random gathering of customer perceptions and opinions, with little effort devoted to intelligent follow-up and meaningful investigation (Godfrey, 1993). It is essential to evaluate the reliability of the method, particularly because attitudinal studies using questionnaires are the most common method for measuring customer satisfaction, and because of the lack of standardized instruments for measuring customer experience and satisfaction. The major purpose of this paper is to reassert the importance of studying customer satisfaction surveys and to clarify and illuminate some of the most common methodological problems faced by management in these surveys. Attention will focus on four of the most crucial issues encountered with these surveys, namely: (1) sampling frames; (2) quality of survey data and instruments; (3) non-response problems; and (4) reporting and interpretation. These issues form the context for our discussion of current methodologies for customer satisfaction surveys. The methodological aspects addressed are prescriptive as well as descriptive and evaluative, and they are applicable to both management and marketing research. Finally, a research agenda is laid out, suggesting requirements for customer satisfaction surveys in the future.
Issue #1: sampling frames
Since suppliers or manufacturers cannot survey every customer, they rely on selecting and analyzing representative samples that constitute rather small percentages of the total population. Customer satisfaction surveys usually have a high level of internal validity, since customer satisfaction is expressed in terms of the surveyed customers and not all customers in general. Unfortunately, customer satisfaction surveys are vulnerable to haphazard sampling procedures that introduce biases of unknown magnitude into their results. The construction of a proper sampling plan is crucial when it comes to the elimination of potentially large biases. The way to evaluate a sample is not by its results, but by examining the process by which it was selected. The first step in evaluating the quality of a sample is to define the sample frame. Fowler (1993) states that there are three characteristics of the sample frame: comprehensiveness, probability of selection, and efficiency. One problem found in customer satisfaction surveys carried out by routine procedure is uncontrolled sampling. There are numerous questions about whether the use of sampling theory is appropriate in this context; many statisticians would even question whether a consecutive series of customers can be regarded as a random sample. A minimal sketch of drawing a controlled probability sample from a defined sampling frame appears below.
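The following sketch (ours, not from the paper) illustrates the distinction in Python: a simple random sample drawn from a defined sampling frame gives every customer a known, equal probability of selection, unlike a consecutive series of walk-in customers. The customer list and sample size are hypothetical.

```python
# A minimal sketch: drawing a simple random sample from a defined
# sampling frame, so every customer has a known, equal probability
# of selection. The customer list here is hypothetical.
import random

# The sampling frame: a comprehensive list of the customers we can reach.
frame = [f"customer_{i:05d}" for i in range(1, 12001)]  # 12,000 customers

random.seed(42)          # fixed seed so the draw is reproducible/auditable
n = 400                  # desired sample size
sample = random.sample(frame, n)  # simple random sample without replacement

# Each frame member's probability of selection is n / len(frame),
# which is what distinguishes this from surveying a consecutive
# series of customers as they happen to appear.
print(f"Selection probability: {n / len(frame):.4f}")
print(sample[:5])
```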



1. Dealing with Sample Sizes
Since customer satisfaction surveys are ultimately focused on understanding the big picture, survey practitioners often use small samples and statistical inference to estimate the composite perceptions of large populations (Hayslip, 1994). Large samples, in some instances, are required to detect small effect sizes. If a customer satisfaction survey is to be relevant to quality, it must address issues that have a substantial effect on business performance. Small samples, such as when industry influence is controlled, should still provide enough variation in the variables of interest to demonstrate the theoretical relationship. McDaniel and Gates (1993) suggest that a sample should be large enough to obtain at least 100 respondents in each major subgroup of the target population and a minimum of 20 to 50 respondents in each of the less important subgroups; a sketch of checking these minimums appears after this section.
2. Dealing with the Choice of a Target Population
The choice of a target population for sampling depends on the survey questions asked. In most situations the target population will be all potential customers, and a random sample of them may be studied. Other analyses will limit the target population to a subgroup with a particular income condition (such as over $50,000) or characteristic (such as children under age three). Identifying these populations or subpopulations may present a moving-target challenge due to ongoing plan enrollment and disenrollment. Moreover, sampling techniques must be sensitive to statistical considerations.
3. Dealing with Segments in a Target Population
One lesson IBM Canada learned from its customer satisfaction surveys was that different customers define satisfaction differently, and that the same customers may well define customer satisfaction differently over time (Etherington, 1992). Customer satisfaction surveys should help identify these segments and inform management actions to achieve satisfaction segment by segment. Some traditional segmentation strategies provide little insight for conducting customer satisfaction surveys. For example, L.L. Bean is changing the way it segments its customers. Traditionally, it grouped customers by the frequency, size, and timing of their purchases. Recently, it has shifted to customer-defined segmentation; that is, a dozen customer groups have been identified based on what they generally purchase from L.L. Bean (George & Weimerskirch, 1994). More importantly, such a company is able to align its customer satisfaction surveys and other monitoring systems to the new segments in order to track the specific requirements of each segment. However, if one is interested in comparing customer satisfaction for a given marketing condition, problems can arise if such surveys are used without reference to a target population.
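As a concrete illustration of the McDaniel and Gates rule of thumb, the short sketch below checks a planned sample against the subgroup minimums; the segment names, counts, and the designation of which subgroups are "major" are our assumptions, not the authors'.

```python
# A minimal sketch (assumed data) of the McDaniel and Gates (1993) rule
# of thumb: at least 100 respondents in each major subgroup and at
# least 20 in each less important subgroup.
from collections import Counter

# Hypothetical planned sample, tagged by segment.
respondents = ["retail"] * 130 + ["wholesale"] * 110 + ["government"] * 24

MAJOR_MIN = 100   # minimum per major subgroup
MINOR_MIN = 20    # lower bound for less important subgroups

major_subgroups = {"retail", "wholesale"}  # assumption: the segments that matter most

counts = Counter(respondents)
for segment, n in counts.items():
    floor = MAJOR_MIN if segment in major_subgroups else MINOR_MIN
    status = "OK" if n >= floor else f"short by {floor - n}"
    print(f"{segment:12s} n={n:4d} (minimum {floor}): {status}")
```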

Issue #2: Improving quality of survey data and instruments
Customer satisfaction surveys are used to assess the quality of a product or service. The quality of questionnaire design is generally recognized as an important factor for self-administered instruments (Dillman, 1983). It is becoming clear that greater attention to the quality of customer satisfaction survey data is essential for the evolution of the field into a discipline. The quality of customer satisfaction surveys based on coded data relies upon the quality of the data itself. Redman (1992) suggests four basic dimensions of data quality: timeliness, completeness, usability, and accuracy. Timeliness seems obvious, but different users may have very different needs. Completeness involves several concepts: strata, sample sizes, coverage, and thoroughness of survey design. Usability includes the format of the survey, survey instructions, and understandability issues. It is, of course, crucial that the data be accurate, that is, that they give a true picture of the information being encoded. It is also important that the data be consistent, that is, that the same attributes of the information are invariably considered in the encoding process. In the effort to measure any particular attribute, numerous sources of error might be introduced by subtle factors that influence the behavior of customers, leading to variation in the measurements made. Two types of measurement error can be identified in customer satisfaction surveys: response error and procedure error. Response errors are indicated when responses are recorded in both the interview and the reinterview but differ. Such response errors have been common in customer satisfaction surveys due to the inability or inconvenience of customers. Errors can also be introduced into survey measures because customers may vary from time to time in their willingness to respond to specific questions, and because skip sequences are not always properly followed in a customer satisfaction survey. Customer satisfaction surveys must be reliable. Reliability is the degree to which measures are free from error and therefore yield consistent results (Peter, 1979). Collecting data with an unreliable scale is like taking measurements with an elastic tape measure: the same thing can be measured a number of times, but it will yield a different length each time (Flynn et al., 1990). The number of points used in the rating scale can affect the reliability of the scale (Churchill & Peter, 1984). The most commonly reported measure of reliability is Cronbach's alpha, and the usual criterion is 0.7, which is based on Nunnally's 1978 edition of Psychometric Theory (Peterson, 1994); a sketch of the computation appears below. Once a customer satisfaction survey has been determined to be reliable, its validity can be assessed. Validity is a scale's ability to measure what it sets out to measure. Customer satisfaction surveys must be valid; that is, they must truly measure the satisfaction variables they are intended to measure. Using an invalid scale is like measuring inches with a meter stick: precise quantitative data can be collected, but they are meaningless (Flynn et al., 1990). It is critical for studies using survey methods to follow guidelines regarding questionnaire construction and administration (Cronbach & Meehl, 1955; Sudman & Bradburn, 1982) so that the data collected are relevant and appropriate.
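Since Cronbach's alpha is the most commonly reported reliability measure, a minimal computation sketch may be helpful; the ratings matrix below is invented for illustration, and the 0.7 cut-off is the Nunnally criterion cited above.

```python
# A minimal sketch (hypothetical ratings): computing Cronbach's alpha
# for a multi-item satisfaction scale and checking it against the
# usual 0.7 criterion attributed to Nunnally.
import numpy as np

# Rows = respondents, columns = items of one satisfaction scale (1-5 ratings).
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])

k = ratings.shape[1]                         # number of items
item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}"
      f" ({'meets' if alpha >= 0.7 else 'below'} the 0.7 criterion)")
```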



Discussions about survey design issues such as question formatting options, graphic layout, integration of interviewer recording tasks, and optimal routing strategies are usually absent from published works on customer satisfaction surveys. Three approaches are suggested to improve the quality of customer satisfaction survey instruments and context. The first approach involves permanently eliminating scales which show undesirable psychometric qualities; however, results can be spurious. The second approach is to eliminate items in the construction of Likert scales (Fishbein & Ajzen, 1975). Items should indicate a favorable or unfavorable attitude toward the object of the customer satisfaction survey; if an item is ambiguous or appears to indicate a neutral attitude, it should be eliminated immediately. However, if time is a major consideration, a short form of the instrument, which provides only an overall measure of the construct, may be desirable. Third, tape-recorded interviews can in some situations be employed to extract more complete customer perceptions. For example, at Hewlett-Packard, tape-recorded interviews with customers provided valuable information that would have been lost had the company relied strictly on a note taker for each customer satisfaction interview (Burkett, 1996). Rust et al. (1994) claim that an expectations question (with response choices "much better than expected," "about as expected," and "worse than expected") provides more accurate results by greatly reducing the top-box problem. They feel that one serious problem with rating scales is that they are often worded so that it is easiest for consumers to choose the top box. With an expectations question, however, subjects are less likely to check the top box unless they are truly delighted rather than merely satisfied with the stimulus.
Issue #3: Nonresponse
Nonresponse is a feature of virtually all customer surveys, damaging the inferential value of sample survey methods. A low response rate undermines the validity and generalizability of potential findings because the characteristics of nonrespondents may be unknown. Two strains of literature dealing with the problem of nonresponse can be identified. First, there is a large body of research on methods to increase response rates in surveys, involving advance letters, payments to respondents, persuasive interviewer scripts, and strategies for timing calls to sample members (Groves, 1989). Second, many researchers focus largely on attempts to reduce error arising from nonresponse through the use of post-survey adjustments. These include weighting cases by estimated probabilities of cooperation and by known population quantities, imputation, and selection bias models (Little & Rubin, 1987); a weighting sketch appears below. Mail questionnaires frequently result in low response rates, and verifying subject responses is difficult (Kerlinger, 1986). With regard to mail surveys at least, several studies have noted the importance of distinguishing between nonresponse due to noncompliance and nonresponse due to inaccessibility (Mayer & Pratt, 1966; Stinchcombe, Jones, & Sheatsley, 1981), raising the possibility that different groups of nonrespondents to customer satisfaction surveys have different predispositions toward survey participation.
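To make the post-survey adjustment idea concrete, the sketch below applies simple post-stratification weighting, one of the adjustments Little and Rubin discuss; the age groups and figures are hypothetical.

```python
# A minimal sketch (assumed figures) of one post-survey adjustment
# mentioned above: weighting respondents by known population quantities
# (post-stratification), so underrepresented groups count more.
population_share = {"under_35": 0.40, "35_to_54": 0.35, "55_plus": 0.25}
respondents      = {"under_35": 120,  "35_to_54": 200,  "55_plus": 180}

n_total = sum(respondents.values())

# Weight = (population share) / (sample share); groups that responded
# less than their population share get weights above 1.
weights = {
    g: population_share[g] / (respondents[g] / n_total)
    for g in respondents
}
for g, w in weights.items():
    print(f"{g:9s} sample share={respondents[g] / n_total:.2f} "
          f"population share={population_share[g]:.2f} weight={w:.2f}")
```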

The difficulty of knowing whether nonresponse is systematic or not is a commonly known problem. Managers frequently administer questionnaires that have not been validated to a nonrepresentative customer sample and employ data collection methods that produce low response rates. The occurrence of nonresponse is most likely not a random process but is determined by specific factors varying from study to study. Provisions can be made to minimize nonresponse, such as enclosing a reply-paid envelope along with a letter encouraging customers to complete the questionnaire. Other techniques include getting a foot in the door by involving respondents in some small task, or offering a monetary incentive (Yammarino, Skinner & Childers, 1991). Although these techniques help boost response rates, they may introduce sample-composition bias. Sample-composition bias occurs when those responding to a survey differ in some important respect from those who do not respond. In other words, the techniques used to increase response rates may appeal to some members of the sample and alienate others, causing the results to be nonrepresentative of the population of interest (Parker & McCrohan, 1983). Technology-driven advances offer tremendous opportunities for researchers to apply some of these developments in an innovative manner to improve the customer satisfaction survey process. For example, geodemographic systems such as PRIZM and MICROVISION can be used to predict and correct response rate problems in surveys regardless of the mode of survey administration (Appel & Baim, 1992).
Issue #4: Reporting and interpretation
Several difficulties of reporting and interpretation are related to the nature of customer satisfaction surveys. First, traditional survey methodologies frequently relate to a specific transaction (Oliver, 1981). While satisfaction with delivered services is important, focusing on it alone fails to address customer needs. Survey instruments must provide specific information on what customers need in order to chart process improvement plans that meet the goals of TQM. Second, customer satisfaction involves many determinants; the humanness of the service and supplier-customer communication, for example, are important determinants of satisfaction in many studies.
1. Dealing with Complex Distinctions Among Customer Segments
Differences in customer satisfaction are the result of complex dimensions. These factors include customers' social condition and customers' physiological reserve. One of the major weaknesses of customer satisfaction surveys is that they usually ignore critical distinctions among customer segments (Reichheld, 1996).



Customers are not homogeneous, even though many studies report survey results as if they were. Distinctions between males and females, blacks and whites, and young and old need to be clearly measured and reported; a segment-level reporting sketch appears at the end of this section. In order to measure the effect of supplier performance on outcomes with accuracy, one would have to control for all the other factors. This is clearly very difficult, given existing surveys and measurement instruments.
2. Dealing with Item Heterogeneity
Customer satisfaction surveys usually employ multiple items for various reasons. First, if a particular question is subject to measurement error due to a customer's haste or lack of understanding, repeating the same question in different ways decreases the chance of a random answer. Second, if a particular concept is too broad to be captured in one question, multiple items are used to tap each dimension. Finally, simply increasing the number of items in an instrument increases reliability scores (Allen & Yen, 1979). However, interpretation of results from multiple-item surveys requires caution. The empirical support for totalling the individual items in this way is typically a high internal consistency reliability as reported by Cronbach's alpha. However, Cronbach's alpha should be used only for homogeneous tests, since the formula assumes item homogeneity. If the customer satisfaction survey measures a variety of traits, Cronbach's alpha is not a suitable measure (Allen & Yen, 1979).
3. Dealing with the Survey Instruments
In designing customer satisfaction surveys, there are two minimal requirements: first, questions about specific aspects of quality should be included, as they are less ambiguous and more sensitive than general questions; second, open-ended questions should be included to aid interpretation of the responses to precoded questions. The methodology used in survey studies appears to vary in one major respect: some use a free-response technique to determine important factors or causes associated with customer satisfaction, while other studies ask customers to check or rank a set of predetermined factors. Moreover, there is usually a time lag between service time and receiving customers' responses by mail. This makes interpreting the survey results and taking corrective action difficult. A number of institutions have turned to interactive survey methodologies to counter the time-lag problem and provide centralized, uniform data collection. Through the use of interactive television services, customers use a menu to browse through services, watch videos with information on their particular product, place orders, fill out satisfaction surveys, and so on. In a shopping mall setting, for example, customer satisfaction surveys can be filled out on a daily basis.
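The sketch below (invented data) shows the segment-level reporting advocated in subsection 1 above: a pooled satisfaction mean can mask differences that per-segment means reveal.

```python
# A minimal sketch (invented data): reporting satisfaction by customer
# segment instead of as one pooled average, so that distinctions among
# segments are visible in the results.
from statistics import mean

# Hypothetical responses: (segment, overall satisfaction rating 1-5).
responses = [
    ("female_under_35", 4), ("female_under_35", 5), ("female_under_35", 4),
    ("male_under_35",   3), ("male_under_35",   2), ("male_under_35",   3),
    ("female_55_plus",  5), ("female_55_plus",  4),
    ("male_55_plus",    4), ("male_55_plus",    3),
]

by_segment: dict[str, list[int]] = {}
for segment, rating in responses:
    by_segment.setdefault(segment, []).append(rating)

# The pooled mean hides the gap that the per-segment means reveal.
print(f"Pooled mean: {mean(r for _, r in responses):.2f}")
for segment, ratings in sorted(by_segment.items()):
    print(f"{segment:17s} n={len(ratings)} mean={mean(ratings):.2f}")
```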

4. Dealing with Statistical Analysis
When inappropriate statistical tests are conducted on survey data, test results may not be valid at the stated degree of confidence, and incorrect conclusions may be drawn. For example, suppose respondents in a customer satisfaction survey are asked to rate their satisfaction with a given product on a scale of one to three, which is ordinal. This scale does not meet the assumption of interval measurement required for parametric analysis. If a parametric statistical test is employed on such data, the results might indicate with 95 percent confidence that the mean level of satisfaction is high when the actual degree of confidence associated with that conclusion is much lower. Likewise, test results might indicate that the mean satisfaction level is less than anticipated when, in fact, it is at least as high as anticipated. In either case, the results could cause management to make an incorrect marketing decision. A minimal sketch of a nonparametric alternative appears below.
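The sketch below (invented ratings) contrasts the parametric approach with a rank-based alternative: the Mann-Whitney U test from SciPy treats the one-to-three scale as ordinal, and medians and frequency counts are reported instead of means.

```python
# A minimal sketch (invented ratings): ordinal 1-3 satisfaction ratings
# analyzed with a rank-based test instead of a parametric t-test that
# assumes interval measurement.
from statistics import median
from scipy import stats

# Hypothetical ratings from two store locations (1 = low, 3 = high).
location_a = [3, 2, 3, 3, 2, 3, 1, 3, 2, 3]
location_b = [2, 1, 2, 2, 3, 1, 2, 2, 1, 2]

# Mann-Whitney U compares the groups using ranks only, so it does not
# treat the distance between ratings "1" and "2" as meaningful.
u_stat, p_value = stats.mannwhitneyu(location_a, location_b,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

# For ordinal data, report medians and frequency counts rather than means.
for name, data in [("A", location_a), ("B", location_b)]:
    counts = {v: data.count(v) for v in (1, 2, 3)}
    print(f"Location {name}: median = {median(data)}, counts = {counts}")
```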



Conclusion
It is hoped that this study will bring greater attention and rigor to customer satisfaction surveys. This is critical because of the central role of customer satisfaction surveys in the TQM context. Evaluating the quality of the data is a managerial responsibility as well as a technical issue for the researcher. The following concluding comments are intended to help marketing researchers and managers recognize and make some of the difficult research design choices that confront them. These comments should be interpreted as general guidelines, not as rules that must absolutely govern choices. First, prior to initiating a customer satisfaction survey, the researcher must explicitly identify and understand the limitations of the sampling and data collection strategies; all such strategies have serious shortcomings. Second, there is still a need for more reliable measures in customer satisfaction surveys. As noted earlier, while measuring satisfaction is important, measuring customer needs is equally important and an area often overlooked by researchers. Third, the real value of establishing a generalized instrument for customer satisfaction surveys cannot be realized unless a mechanism is established for the provision of a centralized data bank of results. Such a data bank would permit comparison of results across suppliers, regions, or states, and across other variables of interest. Fourth, researchers must develop methods that reduce the data collection demands imposed on customers without sacrificing the information needed to assess quality. While the development of shorter customer satisfaction surveys would help, it will have to be complemented by procedures that are smarter and can fill in the gaps left by the limited amount of information obtained from respondents (Malhotra, 1987). Last, if continuous quality improvement is truly a factor for an institution, customers should be involved in the construction of the survey instruments. It is not uncommon for suppliers or manufacturers to develop a list of ideas about customer expectations without the benefit of input from customers. Involving customers in the design process is essential to focusing accurately on areas of customer concern.
References
Adamson, C. (1992), "Ineffective: That's the Problem with Customer Satisfaction Surveys", Quality Progress, May, pp. 35-38.
Allen, M.J. & Yen, W.M. (1979), Introduction to Measurement Theory, Monterey, CA: Brooks/Cole.
Altany, D.R. (1993), "Bad Surveys Flood the Market", Industry Week, September 20, p. 12.
Appel, V. & Baim, J. (1992), "Predicting and Correcting Response Rate Problems Using Geodemography", Marketing Research, Vol. 4, pp. 22-28.
Band, W. (1988), "How to Gain Competitive Advantage Through Customer Satisfaction", Sales and Marketing Management in Canada, May, pp. 30-32.
Burkett, J. (1996), "Concept Engineering: The Key to Customer Satisfaction", CQM Voice, Vol. 7, p. 3.
Churchill, G.A. & Peter, J.P. (1984), "Research Design Effects on the Reliability of Rating Scales: A Meta-Analysis", Journal of Marketing Research, Vol. 21, November, pp. 360-375.
Cronbach, L.J. & Meehl, P.E. (1955), "Construct Validity in Psychological Tests", Psychological Bulletin, Vol. 52, pp. 282-302.
Dillman, D.A. (1983), "Mail and Other Self-Administered Questionnaires", in Rossi, P., Wright, J.D. & Anderson, A.B. (Eds.), Handbook of Survey Research, New York: Academic Press, pp. 359-377.
Etherington, B. (1992), "Putting Customer Satisfaction to Work", Business Quarterly, Summer, pp. 128-131.
Evans, J.R. & Lindsay, W.M. (1996), The Management and Control of Quality, St. Paul, MN: West.
Fishbein, M. & Ajzen, I. (1975), Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research, Reading, MA: Addison-Wesley.
Flynn, B.B., Sakakibara, S., Schroeder, R.G., Bates, K.A. & Flynn, E.J. (1990), "Empirical Research Methods in Operations Management", Journal of Operations Management, Vol. 9, No. 2, pp. 250-284.
Fowler, F.J. (1993), Survey Research Methods, Newbury Park, CA: SAGE Publications.
Garvin, D.A. (1988), Managing Quality: The Strategic and Competitive Edge, London: Collier Macmillan.
George, S. & Weimerskirch, A. (1994), Total Quality Management, New York, NY: John Wiley & Sons.
Godfrey, A.B. (1993), "Ten Areas for Future Research in Total Quality Management", Quality Management Journal, Vol. 1, No. 1, pp. 47-70.
Groves, R.M. (1989), Survey Errors and Survey Costs, New York: John Wiley & Sons.
Hayslip, W.R. (1994), "Measuring Customer Satisfaction in Business Markets", Quality Progress, April, pp. 83-87.
Jones, T.O. & Sasser, W.E. (1995), "Why Satisfied Customers Defect", Harvard Business Review, Vol. 73, No. 6, pp. 88-99.
Kerlinger, F.N. (1986), Foundations of Behavioral Research, 3rd ed., Fort Worth, TX: Holt, Rinehart and Winston.
Kirkpatrick, D. (1992), "Breaking Up IBM", Fortune, July 27, pp. 44-58.
Lin, B. & Fite, D. (1996), "Managing a Sea of Quality Information at Ark-La-Tex Aquatics", National Productivity Review, Vol. 15, No. 1, pp. 79-85.

Little, R. & Rubin, D.B. (1987), Statistical Analysis with Missing Data, New York: John Wiley & Sons.
Loro, L. (1992), "Customer Is Always Right: Satisfaction Research Booms", Advertising Age, February 10, p. 25.
Malhotra, N. (1987), "Analyzing Marketing Research Data with Incomplete Information on the Dependent Variables", Journal of Marketing Research, Vol. 24, pp. 74-84.
Mayer, C.S. & Pratt, R.W. (1966), "A Note on Nonresponse in a Mail Survey", Public Opinion Quarterly, Vol. 30, pp. 639-646.
Mehta, C.L. (1990), "Too Many Surveys, Too Many Numbers Add up to Confusion", Automotive News, November, p. 13.
Oliver, R. (1981), "Measurement and Evaluation of Satisfaction Processes in Retail Settings", Journal of Retailing, Vol. 57, pp. 25-48.
Parker, C. & McCrohan, K.F. (1983), "Increasing Mail Survey Response Rates: A Discussion of Methods and Induced Bias", in Summey, J., Viswanathan, R., Taylor, R. & Glynn, K. (Eds.), Marketing: Theories and Concepts for an Era of Change, Southern Marketing Association, pp. 254-256.
Peter, J.P. (1979), "Reliability: A Review of Psychometric Basics and Recent Marketing Practices", Journal of Marketing Research, Vol. 16, pp. 6-17.
Peterson, R. (1994), "A Meta-Analysis of Cronbach's Coefficient Alpha", Journal of Consumer Research, Vol. 21, No. 2, pp. 381-391.
Redman, T.C. (1992), Data Quality: Management and Technology, New York: Bantam Books.
Reichheld, F.F. (1996), "Learning from Customer Defections", Harvard Business Review, Vol. 74, No. 2, pp. 56-69.
Stinchcombe, A., Jones, C. & Sheatsley, P. (1981), "Nonresponse Bias for Attitude Questions", Public Opinion Quarterly, Vol. 45, pp. 359-375.
Sudman, S. & Bradburn, N.M. (1982), Asking Questions: A Practical Guide to Questionnaire Design, San Francisco, CA: Jossey-Bass.
Yammarino, F., Skinner, S.J. & Childers, T. (1991), "Understanding Mail Survey Response Behavior: A Meta-Analysis", Public Opinion Quarterly, Vol. 55, Winter, pp. 23-34.

