It seems self-evident that companies should try to satisfy their customers. Satisfied
customers usually return and buy more; they tell other people about their
experiences; and they may well pay a premium for the privilege of doing business
with a supplier they trust. Statistics are bandied around suggesting that the cost of
keeping a customer is only one tenth that of winning a new one. Therefore, when we win
a customer, we should hang on to them.
Customer satisfaction surveys are often just that: surveys of customers, without consideration of
the views of lost or potential customers. Lapsed customers may have stories to tell about service
issues while potential customers are a good source of benchmark data on the competition. If a
survey is to embrace non-customers, the compilation of the sample frame is even more difficult.
The quality of these sample frames influences the results more than any other factor since they
are usually outside the researcher's control. The questionnaire design and interpretation are
within the control of the researchers and these are subjects where they will have considerable
experience.
How likely or unlikely are you to buy from ABC Ltd again?
How likely or unlikely would you be to recommend ABC Ltd to a friend or colleague?
It is at the more specific level of questioning that things become more difficult. Some issues are
of obvious importance and every supplier is expected to perform to a minimum acceptable level
on them. These are the hygiene factors. If a company fails on any of these issues it would
quickly lose market share or go out of business. An airline must offer safety, but the level of
in-flight service is a variable. These variables, such as in-flight service, are often the issues that
differentiate companies and create satisfaction or dissatisfaction.
Working out what questions to ask at a detailed level means seeing the world from the
customers' point of view. What do they consider important? These factors or attributes will differ
from company to company and there could be a long list. They could include the following:
The product
Delivery
    Delivery on time
    Speed of delivery
Representative's knowledge
Reliability of returning calls
Friendliness of the sales staff
Complaint resolution
Responsiveness to enquiries
After-sales service
Technical service
The company
Price
    Market price
    Total cost of use
    Value for money
The list is not exhaustive by any means. There is no mention above of environmental issues,
sales literature, frequency of representatives' calls or packaging. Even though the attributes are
deemed specific, it is not entirely clear what is meant by product quality or ease of doing
business. Cryptic labels that summarise specific issues have to be carefully chosen, for
otherwise it will be impossible to interpret the results.
Customer facing staff in the research-sponsoring organisation will be able to help at the early
stage of working out which attributes to measure. They understand the issues, they know the
terminology and they will welcome being consulted. Internal focus groups with the sales staff will
prove highly instructive. This internally generated information may be biased, but it will raise most
of the general customer issues and is readily available at little cost.
It is wise to cross check the internal views with a small number of depth interviews with
customers. Half a dozen may be all that is required.
When all this work has been completed a list of attributes can be selected for rating.
The tool kit for measuring customer satisfaction boils down to three options, each with its
advantages and disadvantages. The tools are not mutually exclusive: a self-completion
element could be used in a face-to-face interview, and a postal questionnaire could be
preceded by a telephone interview that is used to collect data and seek co-operation for the
self-completion element.
The tools, with their advantages, disadvantages and typical applications:

Postal surveys (or e-research)
    Advantages: Low cost; respondents can complete scalar questions in their own time.
    Disadvantages: No researcher to administer the interview; takes longer to obtain results.

Face-to-face interviews
    Advantages: Good response to open-ended questions; the interviewer can ensure that questions are answered; visual explanations can be provided; show cards can be used; ability to build rapport.
    Disadvantages: Expensive for a geographically dispersed population.

Telephone interviews
    Advantages: Low cost; high control of interviewer standards; high control of the sample; easy to ask for co-operation.
    Disadvantages: Tedious for respondents if the interview is long.
    Typical applications: Where there is a geographically dispersed population.
When planning the fieldwork, there is likely to be a debate as to whether the interview should be
carried out without disclosing the identity of the sponsor. If the questions in the survey are about
a particular company or product, it is obvious that the identity has to be disclosed. When the
survey is carried out by phone or face to face, co-operation is helped if an advance letter is sent
out explaining the purpose of the research. Logistically this may not be possible in which case
the explanation for the survey would be built into the introductory script of the interviewer.
If the survey covers a number of competing brands, disclosure of the research sponsor will bias
the response. If the interview is carried out anonymously, without disclosing the sponsor, bias will
result through a considerably reduced strike rate or guarded responses. This is usually
overcome by the interviewer explaining at the outset that the sponsor will be disclosed at the
end of the interview.
Ranking works for up to six or seven factors, but respondents struggle to place things in rank order once the first four or
five are out of the way. It would not work for determining the importance of 30 attributes.
As a check against factors that are given a stated importance score, researchers can
statistically calculate (or derive) the importance of the same issues. Derived importance is
calculated by correlating the satisfaction levels of each attribute with the overall level of
satisfaction. Where there is a high link or correlation with an attribute, it can be inferred that the
attribute is driving customer satisfaction. Deriving the importance of attributes can show the
greater influence of softer issues, such as the friendliness of the staff or the power of the brand:
things that people somehow cannot rationalise or admit to in a stated answer.
Someone once told me that the halfway point in a marathon is 22 miles. Given that a
marathon is 26.2 miles, their maths seemed awry. Their point was that it requires as
much energy to run the last 4.2 miles as it does the first 22. The same principle holds in the
marathon race of customer satisfaction. The halfway point is not a mean score of 5 out of 10 but
8 out of 10. Improving the mean score beyond 8 takes as much energy as it does to get to 8, and
incremental points of improvement are hard to achieve.
Other researchers prefer to concentrate on the "top box" responses: those scores of 4 or 5 out
of 5, the excellent or very good ratings. It is argued that these are the scores that are required to
create genuine satisfaction and loyalty. In their book The Service Profit Chain, Heskett, Sasser
and Schlesinger argue that a rating of 9 or 10 out of 10 is required on most of the key issues that
drive the buying decision. If suppliers fail to achieve such high ratings, customers show
indifference and will shop elsewhere. Capricious consumers are at risk of being wooed by
competitors, readily switching suppliers in the search for higher standards. The concept of the
zone of loyalty, zone of indifference and zone of defection as suggested by the three Harvard
professors (JL Heskett, The Service Profit Chain; The Free Press; New York 1997) is illustrated
below in diagram 1:
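The three zones can be expressed as a simple classification of mean scores out of 10. The 9-or-10 threshold for the zone of loyalty comes from the text above; the boundary between indifference and defection below is an assumption for illustration only.

```python
# Classify mean satisfaction scores (out of 10) into the loyalty zones
# described by Heskett, Sasser and Schlesinger. The 9-10 "zone of loyalty"
# threshold is quoted in the text; the lower cut-off is assumed.
def loyalty_zone(score: float) -> str:
    if score >= 9:
        return "zone of loyalty"
    elif score >= 6:  # assumed boundary, for illustration
        return "zone of indifference"
    else:
        return "zone of defection"

for s in (9.4, 7.2, 4.8):
    print(s, "->", loyalty_zone(s))
```

The practical point of the zones is that a supplier sitting anywhere below the loyalty threshold is exposed, even with scores that look respectable in isolation.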
Step 1: Spot the gap
Look at the customer satisfaction data to see where there are low absolute scores and
low scores relative to the competition
Assume the scores are correct unless there is irrefutable evidence to the contrary, and
remember: perceptions are reality
Step 2: Challenge and redefine the segmentation
Are segments correctly defined in the light of the customer satisfaction findings?
How could a change in segmentation direct the offer more effectively and so achieve
higher levels of satisfaction?
Step 3: Challenge and redefine the customer value propositions
Are customer satisfaction scores low because the customer value proposition (CVP) is
not being communicated effectively to the market?
Are customer satisfaction scores low because the CVP is not being effectively
implemented?
Is the CVP right for the segment? How could a change in CVP achieve a higher customer
satisfaction index (CSI)?
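A customer satisfaction index of the kind mentioned here is commonly calculated as an importance-weighted average of attribute satisfaction scores, with weights taken from stated or derived importance. The attributes, scores and weights below are illustrative assumptions, not figures from this paper.

```python
# Illustrative CSI: weighted average of mean attribute satisfaction scores
# (out of 10), weighted by importance. All figures are invented for the sketch.
scores = {"product quality": 8.2, "on-time delivery": 7.1, "value for money": 6.5}
weights = {"product quality": 0.5, "on-time delivery": 0.3, "value for money": 0.2}

# Divide by the total weight so the index stays on the 0-10 scale
# even if the weights do not sum to exactly 1.
csi = sum(scores[a] * weights[a] for a in scores) / sum(weights.values())
print(f"CSI: {csi:.2f} out of 10")
```

Tracking a single index like this over time makes it easy to see whether the action plan is moving the overall position, while the underlying attribute scores show where the movement comes from.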
Step 4: Create an action plan
Think through the issues that need to be addressed and list them out
Identify the root cause of the problems
Identify any barriers that could stop the improvement taking place
Set measurable targets
Has the action recommended in the plan taken place? Has it been enough? Has it had
enough time to work?
Revisit the steps: spot the gap, challenge the segmentation and CVP, take more action
Many of the issues that affect customer satisfaction span functional boundaries, and so
organisations must establish cross-functional teams to develop and implement action plans. One
of the best ways of securing this commitment from different groups of employees is to involve them
in the whole process.
When the survey results are available, they should be shared with the same groups that were
involved right at the beginning. Workshops are an excellent environment for analysing the survey
findings and driving through action planning. These are occasions when the survey data can be
made user-friendly and explained, so that it moves from something that has been collected and
owned by the researcher to something that is believed in and found useful by the people who will
have to implement the changes.
As with all good action planning, the workshops should deliver mutually agreed and achievable
goals, assigned to people who can make things happen, with dates for achievements and
rewards for success. Training may well be required to ensure that employees know how to handle
customer service issues and understand which tools to use in various situations. Finally, there
should be a constant review of the process as improving customer satisfaction is a race that
never ends.
By Nick Hague
and Paul Hague