
National standards of customer satisfaction in facilities management


Matthew Tucker and Michael Pitt
School of the Built Environment, Liverpool John Moores University, Liverpool, UK
Abstract
Purpose – The purpose of this paper is to provide an overview of the first stage of primary research undertaken to establish generic customer satisfaction benchmarks for the facilities management (FM) industry, and to test whether the benchmarks can be strategically implemented into individual FM provider organisations to further enhance their existing performance measurement processes and subsequent service provision.
Design/methodology/approach – The study proposes the development of a conceptual framework, the Customer Performance Measurement System (CPMS). The CPMS consists of four stages and uses a mixed methodological strategy. This paper provides the findings from the first stage of the CPMS: establishing generic customer satisfaction benchmarks for the FM industry through two annual customer satisfaction surveys, in 2007 and 2008, across the UK and Ireland.
Findings – The paper establishes customer satisfaction benchmarks for individual FM services. The benchmarks identify trends between key variables of criticality, efficiency and service provision, including general variables regarding the performance of the FM team delivering the services and overall satisfaction with all services provided.
Research limitations/implications – The research presented forms part of a wider study testing the concept of the CPMS framework. The paper provides an overview of the wider study, while focusing on the completion of the first stage of primary research.
Originality/value – Information on the application of customer satisfaction indicators within the industry is limited. This paper provides an insight into how customers perceive individual FM services within the UK and Ireland.
Keywords Benchmarking, Customer satisfaction, Performance management, Facilities, United Kingdom, Ireland
Paper type Research paper

Facilities, Vol. 27 No. 13/14, 2009, pp. 497-514. © Emerald Group Publishing Limited, 0263-2772. DOI 10.1108/02632770910996342


Received March 2009. Accepted June 2009.

Introduction
Facilities management (FM) is the integration and alignment of the non-core services, including those relating to premises, required to operate and maintain a business so as to fully support the core objectives of the organisation (Tucker and Pitt, 2008). Although FM services are non-core in nature, if managed correctly they should have a strategic importance in adding value to an organisation's core business delivery. The standard at which an organisation believes it is delivering FM services can often be distinctly different from the perceptions of the customers and users receiving those services. This is reinforced by the GAPS model created by Parasuraman (2004), illustrating that service quality fails when there is a gap between customer expectations and perceptions. FM therefore has a critical importance within the workplace, where the performance of service delivery needs to be measured to ensure added value is being achieved.



Moreover, Tucker and Smith (2008) stress that the user's perceptions of the initial input to the service delivery process are equally important, as they essentially determine the strategic and operational objectives of the organisation. The importance of gathering service delivery data from a customer perspective in FM is therefore clear.
In 2004, the British Institute of Facilities Management (BIFM), part funded by the Department of Trade and Industry (DTI) (now the Department for Business, Innovation and Skills (BIS)), issued a report titled "Rethinking facilities management: accelerating change through best practice" (BIFM, 2004). Promoting customer satisfaction was regarded as one of the top five issues both today and in five to ten years' time for FM. Performance benchmarking was also noted as important: although not in the top five current issues, it showed the largest change in importance over five to ten years across all FM functions. The report is now five years old, and based on these findings the promotion of customer performance measurement and the use of benchmarking within a strategic FM function should now be essential.
Studies on performance measurement within FM have been commonplace over the past two decades. They have generally tended to focus on how FM organisations can manage performance strategically to achieve added value and more efficient service delivery (e.g. Amaratunga and Baldry, 2000, 2002, 2003; Hinks and McNay, 1999; Kincaid, 1994; Varcoe, 1996a), and on the importance of benchmarking, which has arguably leaned towards more financially orientated factors (e.g. Massheder and Finch, 1998a, b; McDougall and Hinks, 2000; Wauters, 2005; Williams, 2000; Varcoe, 1996b). Specific studies focusing on the involvement of the customer in FM are also evident (e.g. Bandy, 2003; Amaratunga et al., 2004; Futcher et al., 2004; Shaw and Haynes, 2004; Walters, 1999; Loosemore and Hsin, 2001), but they are not as frequent as the more generic performance measurement papers. Moreover, specific research on the application of customer satisfaction indicators as a strategic performance measurement process within FM is insufficiently documented.
According to Sarshar and Pitt (2009), FM suppliers need to go beyond their existing customer performance measurement systems, moving from customer satisfaction surveys towards 360-degree client perception management systems. Sarshar and Pitt (2009) explain that this would require a review of client requirements using both quantitative and qualitative data, where most surveys are quantitative in nature and consequently miss out on important issues.
This paper forms the first stage of a wider research study investigating how FM provider organisations can strategically use and apply customer performance measurement to innovate and improve FM service provision. This was achieved by developing a conceptual model, known as the Customer Performance Measurement System (CPMS) (Tucker and Pitt, 2009), which incorporates a robust process of quantitative and qualitative methods that are accessible and strategically applicable to FM organisations. The following objectives were set to investigate operationally the development of the CPMS:
• establish generic industry benchmarks of customer satisfaction for specific FM services in the UK and Ireland;
• strategically apply generic industry benchmarks of customer satisfaction to an FM organisation, through the quantitative measurement of internal client-base benchmarks and a qualitative assessment of the organisation's existing processes; and
• assess the extent to which the CPMS model can enhance an FM organisation's existing customer performance measurement processes and contribute to strategically innovating and improving FM service provision.


This paper provides an overview of the wider study, and then focuses on the primary research undertaken to complete the first objective identified above.

The CPMS model
According to Fellows and Liu (2003), modelling is the process of constructing a model: a representation of a designed or actual object, process or system; a representation of reality. To this end, a conceptual framework, termed the CPMS, was developed to effectively test the aims and objectives set out in the previous section. The CPMS will create a continuous improvement process allowing both customers and FM providers to gain knowledge of, and access to, customer performance data, and will help FM provider organisations prioritise and strategically apply customer performance measurement information.
The studies mentioned in the Introduction were influential in developing the CPMS framework. The overarching concept of applying benchmarking techniques, however, came from Camp (1989), who suggested that benchmarking is the search for industry best practices that lead to superior performance. Hence, establishing a national customer satisfaction benchmark of FM service delivery will help kick-start the potential for FM service providers to obtain superior performance. The fundamental influence in developing the model, though, was the work of Amaratunga and Baldry (2002), who emphasised the importance of performance management, where results in performance measurement indicate what happened, not why it happened or what to do about it. Hence the CPMS model uses external and internal benchmarks to explain what has happened, a gap analysis procedure to explain where it has happened, and the most important component, a strategic implementation process, to explain why it has happened and how FM providers can go about improving customer performance. The CPMS framework consists of four key stages (Figure 1):

Figure 1. CPMS


Stage 1: National benchmarking
The purpose of this initial stage is to determine an external customer opinion via generic benchmarks within the FM industry. Key customer satisfaction attributes will be agreed across individual FM services and will be targeted at organisations that receive and/or operate FM services in-house (i.e. not FM providers).

Stage 2: Internal benchmarking
An internal benchmarking exercise will then take place within an individual FM provider organisation. This will require access to the FM provider's client base. The exercise will largely mimic the content of Stage 1, providing a direct comparison to evaluate how individual FM providers are performing against the perceived national standards. Data would also be collected from a qualitative assessment of key FM staff within the organisation, and a sample of its client base, to further understand the current organisational framework for capturing customer satisfaction.

Stage 3: Gap analysis
The CPMS is now able to analyse the specific gaps and areas for improvement within the service delivery of the FM provider.

Stage 4: Customer performance strategy
The FM provider organisation is now in the position of an informed client. Strategic decisions can be made to devise new processes that add value and further enhance the organisation's service quality.
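Stages 3 and 4 can be illustrated with a minimal sketch: given per-service scores from Stages 1 and 2, the gap analysis reduces to comparing internal scores against the national benchmark and ranking the differences. All service names and scores below are hypothetical, and the scale follows the survey's five-point efficiency rating (lower is better):

```python
# Hypothetical mean efficiency scores on the survey's five-point scale
# (1 = excellent ... 5 = unacceptable), so lower is better.
national = {"cleaning": 2.4, "security": 2.2, "helpdesk": 2.1, "catering": 2.3}
internal = {"cleaning": 2.9, "security": 2.1, "helpdesk": 2.6, "catering": 2.2}

# Stage 3 gap analysis: a positive gap means the provider scores worse
# than the national benchmark for that service.
gaps = {service: internal[service] - national[service] for service in national}

# Stage 4 input: rank the services so the largest gaps are tackled first.
for service, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    action = "improve" if gap > 0 else "maintain"
    print(f"{service:10s} gap = {gap:+.2f} -> {action}")
```

The ranking is only the "what" and "where"; as the authors stress, the strategic implementation process of Stage 4 still has to supply the "why" and "how".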

Research structure
In order to test the effectiveness of the conceptual CPMS framework described, a mixed methodological approach is taken using both quantitative and qualitative methods. More specifically, a mixed sequential explanatory strategy (Creswell, 2009) is used, where quantitative and qualitative data are collected in sequential phases. This also gives weight to the collection of quantitative data: initial quantitative data is collected in the first phase, and is followed up by qualitative data in the second phase to help inform the initial quantitative findings (Creswell, 2009). Creswell (2009) states that theory use in mixed methods studies may include theory deductively, as in quantitative theory testing and verification, or inductively, as in an emerging qualitative theory or pattern. Essentially this study adopts a deductive approach, where theory is systematically tested in order to unravel new findings and theory (Stages 1-3 of the CPMS). The final process, however, is inductive (Stage 4 of the CPMS), where the researcher infers the implications of his or her findings for the theory that prompted the whole exercise (Bryman and Bell, 2007).
Stage 1 is conducted through a large-scale quantitative study in order to establish generic customer satisfaction benchmarks for the FM industry. Stage 2 then involves testing whether the benchmarks can be applied to an instrumental case study, an individual FM provider organisation. This will be achieved through an internal quantitative study comparing the case study's client-base benchmarks (Stage 2a) with the generic benchmarks established in Stage 1, and through a qualitative study (Stage 2b) involving a series of semi-structured interviews with case study staff and clients to inform the researcher of the existing processes for assessing customer satisfaction. Stage 3 then amalgamates the data established from Stages 1 and 2, analysing the key gaps and areas for improvement within the case study. This information will then be disseminated to case study staff for consideration.

Stage 4 finally infers the findings by providing suggestions for improvement to the existing processes. This will take place through a series of post-CPMS interviews with case study staff to validate the effectiveness and applicability of the framework (Stage 4a), followed by a series of collective case studies with a selection of external FM providers to validate the framework's effectiveness and applicability externally (Stage 4b).


CPMS Stage 1: methodology
Survey design
A focus group was arranged in 2007 to liaise with FM experts and find out exactly what would be the most effective benchmarks to provide within the FM industry. The focus group contained the following members:
• senior management representation from a leading FM provider in the UK;
• a specialist FM consultant with experience in client/provider relationships; and
• representatives from two leading FM journal magazines in the UK.
The focus group produced the following outcomes regarding the establishment of customer satisfaction benchmarks in FM:
• to establish customer feedback for individual FM services;
• to establish how important FM services are to the customer;
• to establish how efficiently each service is provided to the customer;
• to establish how each service is provided, and for how long (i.e. in-house or outsourced); and
• to establish general perceptions of overall satisfaction with all services provided, the level of innovation in service provision, and the FM team providing the services within the customer organisation.
The survey sought to obtain information on individual FM services, using ordinal questions to rate each FM service by efficiency, criticality and service provision. The FM services covered were:
• maintenance of the building fabric;
• mechanical and electrical (M&E) engineering;
• waste management;
• maintenance of grounds and gardens/internal planting;
• cleaning;
• catering;
• mailroom;
• security;
• health and safety;
• reception (including switchboard); and
• helpdesk.


The level of criticality was first rated using a four-point scale under the following criteria:
• Not critical – although the service is important, overall business operations would not be affected by its temporary delivery failure.
• Moderately critical – failure of the service would have some repercussions, but these can quickly be overcome.
• Very critical – delivery failure of the service would completely affect business operations and cause major problems continuing in its absence.
The level of efficiency was then rated using a five-point scale under the following criteria:
• Excellent – standards almost always meet and frequently exceed expectations. Performance in key areas is uniformly high.
• Good – standards almost always meet and often exceed expectations. Performance in key areas is uniformly good and in some cases excellent.
• Acceptable – standards usually meet expectations, though there is room for improvement in some areas. No failings in key areas.
• Poor – standards regularly fail to meet expectations, with significant room for improvement. Some failings in key areas.
• Unacceptable – standards fail to meet expectations in most areas and improvements are required urgently.
The level of service provision was then recorded using a nominal scale under the following criteria:
• outsourced for less than a year;
• outsourced for 1-3 years;
• outsourced for more than three years; and
• service provided in-house.
Customers also had the opportunity to comment on their satisfaction with more general themes relating to the delivery of FM services within their organisation. In particular, customers provided feedback on:
• aspects of the FM team, in terms of people involvement and cultural fit, training and guidance, and general attitude;
• the level of innovation in service provision; and
• overall satisfaction with the services received.
These questions were rated using the same five-point scale as the efficiency variable.

Survey method
Specific constraints were identified when considering the best survey approach for collecting generic customer benchmarking data successfully. These included:

• Geographical spread – in order to establish generic benchmarks, a significant geographic spread was required.
• Time – consideration had to be given to the time it would take to administer the survey.
• Budget – a large-scale national survey across a wide geographical spread has potentially high costs, so careful consideration of the most cost-effective survey method was needed.
• Target characteristics – targeting business customers within a workplace environment can have adverse effects if the wrong method is chosen.


Through identification of these issues, it was determined that the most cost-effective approach was an online, web-based survey.

Target population and procedure
Due to the specific focus on FM, the logical audience through which to capture a wide-ranging viewpoint was the BIFM membership. The BIFM consists of over 12,000 members, and although it is not possible to determine the exact proportion of FM customers within this population, it provides a population diverse and variable enough to yield a representative sample of organisations receiving FM services within the UK. The survey was initially undertaken in 2007, with a further annual survey in 2008 to test whether there were any significant differences in customer perceptions from one year to the next. The surveys targeted organisations that receive and/or operate in-house FM services (i.e. the customer as opposed to the provider). A simple random sample was implemented by conducting two identical online surveys with all BIFM members in the UK and Ireland in 2007 and 2008, meaning that each unit of the population had an equal opportunity to complete the survey (Bryman and Bell, 2007).

Response rate
A response rate could not be calculated, due to the unavailability of statistics determining the proportion of BIFM members receiving and/or operating in-house services. However, although response rates are useful, they do not show how accurately the survey data determines the views of the total population (i.e. including those who do not participate). This can be assessed using a 95 per cent confidence interval, from which it is evident that the survey produced accurate findings. Through a sample size of 230 collected in 2007 from a total BIFM member population of 11,414 (BIFM, 2007), a 95 per cent confidence interval of 6.4 per cent was calculated. This means that if 50 per cent of respondents answer yes to a yes/no question, we can be 95 per cent certain that the proportion of the total population answering yes (including those who did not participate in the survey) lies between 43.6 per cent and 56.4 per cent. In 2008, a sample size of 222 was collected from a total BIFM member population of 12,678 (BIFM, 2008), giving a 95 per cent confidence interval of 6.5 per cent. These figures were deemed to provide a fairly accurate indication of customer satisfaction benchmarks[1].
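The reported margins can be reproduced with a standard margin of error for a proportion. The paper does not state its exact formula; the sketch below assumes a 95 per cent z-value of 1.96, a worst-case proportion of 0.5, and a finite population correction, which together match the reported 6.4 and 6.5 per cent figures:

```python
from math import sqrt

def ci_margin(n: int, population: int, z: float = 1.96, p: float = 0.5) -> float:
    """95 per cent margin of error for a proportion, with finite
    population correction (an assumption; the paper does not give
    its formula, but this reproduces its figures)."""
    standard_error = sqrt(p * (1 - p) / n)
    fpc = sqrt((population - n) / (population - 1))
    return z * standard_error * fpc

print(f"2007: {ci_margin(230, 11414):.3f}")  # 0.064 -> the reported 6.4 per cent
print(f"2008: {ci_margin(222, 12678):.3f}")  # 0.065 -> the reported 6.5 per cent
```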


CPMS Stage 1: analysis
A systematic analysis procedure was undertaken for each question asked within the 2007 and 2008 surveys:
(1) Kolmogorov-Smirnov one-sample test (to identify whether the data was normally distributed);
(2) Mann-Whitney U test (to identify any significant differences between the 2007 and 2008 findings);
(3) frequency distribution tests (to identify trends in percentages between individual variables); and
(4) χ² tests (categorical analysis to identify any significant relationships between two individual variables).

Kolmogorov-Smirnov one-sample test
The one-sample Kolmogorov-Smirnov test compares the scores in a given sample to a normally distributed (theoretical) set of scores with the same mean and standard deviation (Field, 2009). Each variable produced a high significance level (p < 0.005), meaning that the data was not normally distributed and further inferential tests to understand differences between variables had to be non-parametric.

Mann-Whitney U test
The Mann-Whitney U test can be used on non-parametric data to see whether there are any differences between two variables where different participants have been used. This was the most appropriate test, because although each survey was completed by the same target population (i.e. BIFM members), the actual participants within that population may differ between the surveys. In addition, because Mann-Whitney tests are generally better suited to large samples, a Monte Carlo method was adopted to increase the accuracy of the results, by creating a theoretical distribution similar to that found in the sample and then taking several samples (the default in SPSS being 10,000) (Field, 2009). No variables produced a significant result (p < 0.005), suggesting that customers' perception and satisfaction levels when rating characteristics of FM services did not dramatically differ in the space of one year: customers' perceptions of received FM services were no different in 2008 than in 2007.
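In a modern toolchain the same two-step screen might look as follows. This is a sketch with randomly generated stand-in ratings, not the study's data, and plain scipy offers only the exact/asymptotic Mann-Whitney forms rather than SPSS's Monte Carlo variant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-in ordinal efficiency ratings (1-5) for one FM service in each year
ratings_2007 = rng.integers(1, 6, size=230)
ratings_2008 = rng.integers(1, 6, size=222)

# Step 1: one-sample Kolmogorov-Smirnov test against a normal distribution
# sharing the sample's mean and standard deviation (Field, 2009).
ks = stats.kstest(ratings_2007, "norm",
                  args=(ratings_2007.mean(), ratings_2007.std(ddof=1)))
print(f"K-S p = {ks.pvalue:.4f}")  # a small p-value -> data not normal

# Step 2: the data being non-normal, compare the two independent samples
# with the non-parametric Mann-Whitney U test.
mw = stats.mannwhitneyu(ratings_2007, ratings_2008, alternative="two-sided")
print(f"Mann-Whitney U = {mw.statistic:.0f}, p = {mw.pvalue:.4f}")
```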

Frequency distribution analysis
Because the findings of the Mann-Whitney test identified very little difference between the 2007 and 2008 customer satisfaction benchmarks, this section reports the frequency distribution trends from the most recent survey (2008). It does, however, reinforce the findings by illustrating central tendency trends for both 2007 and 2008.
Criticality. In terms of customers' perception of the criticality of FM services to overall business operations, the majority of services were perceived to be very critical. This included traditional hard services such as M&E engineering, services with high legislative demands such as health and safety, and front-line services such as reception and helpdesk. In summary, Table I shows the frequency distribution trends, based on the most frequent category that customers selected in the 2008 survey:

Table I. Criticality of FM services from customer satisfaction survey 2008

Very critical services           Moderately critical services    Not critical services
M&E engineering (81 per cent)    Cleaning (52 per cent)          Grounds and gardens (58 per cent)
Security (72 per cent)           Building fabric (49 per cent)
Health and safety (66 per cent)  Waste management (47 per cent)
Helpdesk (52 per cent)           Catering (45 per cent)
Mailroom (52 per cent)
Reception (51 per cent)

In addition, Figure 2 illustrates these trends using a mean score, and highlights the similarity between the 2007 and 2008 surveys. Here it is clear to see which services are considered more critical than others.

Figure 2. Criticality of FM services using a mean score from the customer satisfaction surveys in 2007 and 2008


Efficiency. In terms of customers' satisfaction with the efficiency of service delivery of individual FM services, the majority of customers rated all services as good or excellent. The highest-rated services included health and safety, reception and the mailroom. The lowest-rated included harder services such as waste management, building fabric and M&E engineering. In summary, Table II shows the frequency distribution trends from the 2008 survey, recoded to a three-point scale:

Table II. Efficiency of FM services from customer satisfaction survey 2008

Service               Good   Fair   Poor
Reception               81     17      2
Health and safety       80     16      4
Mailroom                78     21      1
Catering                69     27      4
Helpdesk                67     25      8
Security                66     28      6
Grounds and gardens     63     32      5
Cleaning                61     29     10
Waste management        58     36      6
M&E engineering         57     34      9
Building fabric         52     39      9

Note: Figures given are percentages
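The three-point recoding behind Table II might be implemented as below; the exact mapping is an assumption (excellent/good to good, acceptable to fair, poor/unacceptable to poor), inferred from the five-point scale and the table's categories:

```python
# Assumed collapse of the five-point efficiency scale to Table II's
# three categories; the paper does not spell the mapping out.
RECODE = {
    "Excellent": "Good", "Good": "Good",
    "Acceptable": "Fair",
    "Poor": "Poor", "Unacceptable": "Poor",
}

responses = ["Excellent", "Acceptable", "Good", "Poor", "Good"]
three_point = [RECODE[r] for r in responses]
print(three_point)  # ['Good', 'Fair', 'Good', 'Poor', 'Good']
```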

In addition, Figure 3 illustrates these trends using a mean score, and highlights the similarity between the 2007 and 2008 surveys. The graph shows that there is not a great difference in the efficiency of each service, as all are generally rated between 2 (good) and 3 (acceptable).

Figure 3. Efficiency of FM services using a mean score from the customer satisfaction surveys 2007 and 2008

Provision. In terms of customers identifying the provision of services delivered at their location, the majority of customers outsource their services, and have done so for more than three years. However, the majority of customers provided the health and safety service in-house, along with front-line services such as reception and helpdesk, and also the mailroom. In summary, Table III shows the frequency distribution trends, based on the most frequent category that customers selected in the 2008 survey:

Table III. Provision of services from customer satisfaction survey 2008

Outsourced services                In-house services
Waste management (88 per cent)     Health and safety (88 per cent)
M&E engineering (82 per cent)      Mailroom (80 per cent)
Grounds and gardens (82 per cent)  Reception (76 per cent)
Cleaning (81 per cent)             Helpdesk (67 per cent)
Catering (80 per cent)
Building fabric (70 per cent)
Security (66 per cent)

In addition, Figure 4 illustrates these trends using a mean score, and highlights the similarity between the 2007 and 2008 surveys. Here it is clear to see the difference between services generally outsourced for either 1-3 years or more than three years (mean score between 2 and 3) and those provided in-house (mean score close to 4).

Figure 4. Provision of FM services using a mean score from the customer satisfaction surveys 2007 and 2008

General. Customers also rated general variables of service delivery, including aspects of the FM team delivering the services on-site, satisfaction with the level of innovation in service provision, and overall satisfaction with the delivery of all services on-site. Generally, customers rated aspects of the FM team positively, with the vast majority rating all aspects as good or excellent (percentages below are those rating good or excellent in the 2008 survey):
• people involvement and cultural fit (81 per cent);
• training and competence (67 per cent); and
• general attitude (78 per cent).
However, customers had a more varied opinion of the level of innovation in service provision, with 40 per cent rating it good or excellent, just over a third (36 per cent) rating it acceptable, and a sizeable proportion (24 per cent) rating it poor or unacceptable. Finally, customers were generally satisfied overall with all services provided at their location: 59 per cent rated good or excellent, around a third (35 per cent) rated acceptable, and only 6 per cent rated poor or unacceptable.

Chi-square tests
Although the central tendency and frequency distribution analysis identified potential relationships within the data, inferential analysis was undertaken in the form of χ² tests (or Fisher's exact tests where more than 20 per cent of cells produced expected counts of less than 5) in order to establish any relationships between variables that were statistically significant. In addition, where a significant relationship was found, the strength of association was tested using Cramér's V[2].
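A sketch of the test logic, using an invented crosstab: scipy's chi2_contingency returns the χ² statistic, p-value and expected counts, and Cramér's V is derived from the statistic. The Fisher's exact fallback is only flagged here, since scipy's fisher_exact covers 2 × 2 tables only, unlike the r × c exact tests available in SPSS:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    """Cramér's V: strength of association for an r x c contingency table."""
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Hypothetical 3 x 3 crosstab: criticality (rows) against efficiency (columns)
table = np.array([[30, 12, 3],
                  [45, 25, 6],
                  [60, 30, 9]])

chi2, p, dof, expected = chi2_contingency(table)
# Cochran's rule of thumb: chi-square is unreliable if more than 20 per cent
# of expected counts fall below 5 -- the paper switches to Fisher's exact
# test in that case (scipy's fisher_exact handles only 2 x 2 tables).
if (expected < 5).mean() > 0.20:
    print("chi-square assumption violated: use an exact test instead")
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, Cramér's V = {cramers_v(table):.3f}")
```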

Criticality/efficiency/provision variables. The following hypotheses were explored for each service in both the 2007 and 2008 surveys:
• the level of criticality people associate with a service relates to how efficient people perceive that service to be;
• the level of criticality people associate with a service relates to the way the service is provided; and
• the level of efficiency people perceive a service to have relates to the way the service is provided.

Only the catering and mailroom services showed a significant association between the efficiency and criticality of services in both the 2007 and 2008 surveys (Table IV). The maintenance of the grounds and gardens was the only service to produce a significant association between the criticality and provision of services in both surveys; this service was generally rated least critical and predominantly provided in-house (Table V). There were no significant associations between the efficiency and provision of services in either survey.
Table IV. FM services with a significant association between criticality and efficiency variables

Service    2007 sig. (Fisher's exact)  2007 Cramér's V   2008 sig. (Fisher's exact)  2008 Cramér's V
Catering   0.005                       Slight (0.199)    0.001                       Moderate (0.202)
Mailroom   0.000                       Moderate (0.230)  0.001                       Slight (0.194)

General crosstabulation trends: customers perceive catering as moderately critical to business operations and its delivery as good; they perceive the mailroom as very critical to business operations and its delivery as good.

Table V. FM services with a significant association between criticality and provision variables

Service              2007 sig. (χ²)  2007 Cramér's V   2008 sig. (χ²)  2008 Cramér's V
Grounds and gardens  0.003           Moderate (0.250)  0.011           Moderate (0.215)

FM team variables. The following hypothesis was explored for the level of criticality/efficiency/provision of each service in relation to aspects of the FM team delivering the services on-site, in both the 2007 and 2008 surveys: the level of criticality/efficiency/provision associated with a given service relates to the level of perception associated with the different aspects of the FM team.


The only variable to show significant associations with aspects of the FM team in both the 2007 and 2008 surveys was the efficiency of service delivery. In terms of the people involvement and cultural fit of the FM team, there were significant associations for all but four services: waste management, grounds and gardens, cleaning and catering (Table VI). Generally, customers who rated the efficiency of service delivery as good also rated the people involvement and cultural fit of the FM team as good. In terms of the training and competence of the FM team, again there were significant associations for the majority of services, apart from grounds and gardens, mailroom, cleaning and catering (Table VII). Generally, customers who rated the efficiency of service delivery as good also rated the training and competence of the FM team as good. In terms of the general attitude of the FM team, there were significant associations for six services: building fabric, M&E engineering, health and safety, reception, mailroom and helpdesk (Table VIII). Generally, customers who rated the efficiency of service delivery as good also rated the general attitude of the FM team as good.
Table VI. FM services with a significant association between efficiency and people involvement and cultural fit of the FM team

Service            2007 sig. (Fisher's exact)  2007 Cramér's V   2008 sig. (Fisher's exact)  2008 Cramér's V
Building fabric    0.000                       Moderate (0.284)  0.000                       Moderate (0.319)
M&E engineering    0.001                       Moderate (0.201)  0.000                       Moderate (0.356)
Security           0.003                       Moderate (0.224)  0.003                       Slight (0.194)
Health and safety  0.001                       Moderate (0.227)  0.000                       Moderate (0.279)
Reception          0.000                       Moderate (0.290)  0.000                       Moderate (0.262)
Mailroom           0.002                       Moderate (0.216)  0.000                       Moderate (0.336)
Helpdesk           0.009                       Moderate (0.215)  0.000                       Moderate (0.259)

Table VII. FM services with a significant association between efficiency and training and competence of the FM team

Service            2007 sig. (Fisher's exact)  2007 Cramér's V   2008 sig. (Fisher's exact)  2008 Cramér's V
Building fabric    0.000                       Moderate (0.359)  0.000                       Moderate (0.232)
M&E engineering    0.001                       Moderate (0.217)  0.000                       Moderate (0.249)
Waste management   0.019                       Slight (0.155)    0.000                       Moderate (0.238)
Security           0.006                       Moderate (0.245)  0.000                       Moderate (0.265)
Health and safety  0.008                       Slight (0.172)    0.000                       Moderate (0.261)
Reception          0.004                       Moderate (0.249)  0.000                       Moderate (0.239)
Helpdesk           0.002                       Moderate (0.210)  0.008                       Moderate (0.227)

Table VIII. FM services with a significant association between efficiency and general attitude of the FM team

Service            2007 sig. (Fisher's exact)  2007 Cramér's V   2008 sig. (Fisher's exact)  2008 Cramér's V
Building fabric    0.000                       Moderate (0.298)  0.000                       Moderate (0.252)
M&E engineering    0.001                       Moderate (0.237)  0.000                       Moderate (0.285)
Health and safety  0.003                       Moderate (0.215)  0.010                       Slight (0.166)
Reception          0.005                       Slight (0.198)    0.003                       Moderate (0.200)
Mailroom           0.001                       Moderate (0.227)  0.000                       Moderate (0.302)
Helpdesk           0.001                       Moderate (0.269)  0.036                       Slight (0.159)


Overall variables. The following hypotheses were explored for the level of criticality/efficiency/provision of each service in relation to general variables, in both the 2007 and 2008 surveys:
• the level of criticality/efficiency/provision associated with a given service relates to the level of perception associated with the amount of innovation in the provision of the services delivered; and
• the level of overall satisfaction with all services relates to the level of perception associated with the amount of innovation in the provision of the services delivered.
The only variable to show significant associations with the level of innovation in service provision in both the 2007 and 2008 surveys was again the efficiency of service delivery, for which all but two services showed a significant association (Table IX). The two exceptions were the catering and grounds and gardens services, which were significant only in the 2008 survey. Finally, the overall satisfaction with all services was tested for any significant relationship with the level of innovation in service provision. Both the 2007 and 2008 surveys produced highly significant results (0.000 in both).


Table IX. FM services with a significant association between efficiency and the level of innovation in service provision

Service            2007 sig.  2007 Cramér's V   2008 sig.  2008 Cramér's V
Building fabric    0.000      Moderate (0.246)  0.000      Moderate (0.284)
M&E engineering    0.000      Moderate (0.214)  0.000      Moderate (0.290)
Waste management   0.000      Moderate (0.250)  0.000      Moderate (0.218)
Cleaning           0.029      Slight (0.156)    0.012      Slight (0.172)
Security           0.003a     Slight (0.186)    0.000a     Moderate (0.219)
Health and safety  0.000a     Moderate (0.208)  0.000a     Moderate (0.246)
Reception          0.014a     Slight (0.158)    0.033a     Slight (0.154)
Mailroom           0.004a     Slight (0.185)    0.000a     Moderate (0.228)
Helpdesk           0.001a     Moderate (0.225)  0.001a     Moderate (0.224)

Note: a Fisher's exact test; all other values are χ² tests

The strength of association in both surveys was also very strong (0.470 in 2007 and 0.490 in 2008) using Cramér's V (Table X). Through cross-tabulating the two variables, it can generally be assumed that customers whose overall satisfaction with all services is good are also satisfied with the level of innovation in service provision.

CPMS Stage 1: discussions and implications
This paper has provided an overview of the first stage of primary research undertaken within a wider study focusing on how customer performance measurement information can be strategically implemented into FM provider organisations. This first stage aimed to establish generic customer satisfaction benchmarks of service delivery for the FM industry.
Following the Mann-Whitney tests, it was established that there was no significant difference between the 2007 and 2008 survey findings. This suggests that if generic customer satisfaction benchmarks are to be permanently implemented across the UK and Ireland, the survey would not need to be conducted every year, but could instead be updated every two or three years.
The frequency distribution analysis identified some interesting findings in the relationships between the criticality, efficiency and provision of certain services. Generally, the health and safety, mailroom, reception and helpdesk services were all considered very critical, highly efficient and predominantly provided in-house. One could contend that because health and safety services face strict legislative demands, a specialist provider would be better suited to deliver the service; however, this does not appear to be the case across the UK and Ireland, with customers considering the service highly efficient when delivered in-house. The reception and helpdesk findings are also interesting: as front-line services they are naturally perceived to be very critical, and customers again seem to entrust them to in-house provision, with highly efficient results. Conversely, the M&E engineering, waste management and building fabric services were considered very or moderately critical, but were rated lowest in terms of efficiency and predominantly outsourced. These services are often considered more traditional hard functions of FM compared with those rated more positively above, yet are distinctly more negative in terms of service delivery.
Through the χ² and Fisher's exact tests, however, only a small proportion of services (mailroom, catering, grounds and gardens) actually produced significant relationships between the level of criticality, efficiency and provision of services, suggesting that although the trends identified through the frequency distribution analysis are important to consider, they do not necessarily represent the norm for all customers across the country.


Table X. Overall satisfaction with all FM services: significant association with the level of innovation in service provision

             Significance (Fisher's exact)  Cramér's V
2007 survey  0.000                          Strong (0.470)
2008 survey  0.000                          Strong (0.490)


The χ² and Fisher's exact tests did, however, produce the most frequent significant relationships between the efficiency of services and aspects of the FM team delivering them. This suggests that customers are conscious of the actual people delivering the services on-site, implying that FM providers' choice of the right personnel for particular services is essential in achieving higher levels of customer satisfaction with the efficiency of service delivery. Equally, there were many significant relationships between the efficiency of services and the level of innovation in service provision. This was also notable in customers' overall satisfaction with all services provided on-site, which produced a strong association with the level of innovation in service provision. This suggests that customers are conscious of how innovative FM providers are in delivering certain services, implying an expectation for FM providers to be proactive in their approach to service delivery, which can have a direct impact on customer perceptions of the efficiency of service delivery.
In summary, the customer satisfaction benchmarks established within this research have produced some significant findings that could further enhance FM provider organisations' existing performance measurement strategies. The authors are now working closely with a case study FM provider organisation to test the further stages of the conceptual CPMS framework, which we hope to publish in the near future.
Notes
1. The confidence intervals would actually be more accurate than the figures calculated above, as the BIFM member population figures include FM provider organisations, which were not eligible to take part in the survey. This would mean that the total population targeted was smaller, giving a stronger confidence interval. However, because it was not possible to identify a split between providers and customers, this cannot be proved, so the above confidence interval figures are quoted.
2. The χ² test findings within this section only report on significant relationships found in both 2007 and 2008. However, other significant relationships between variables were evident for individual years.

References
Amaratunga, D. and Baldry, D. (2000), "Assessment of facilities management performance in higher education properties", Facilities, Vol. 18 Nos 7/8, pp. 293-301.
Amaratunga, D. and Baldry, D. (2002), "Moving from performance measurement to performance management", Facilities, Vol. 20 Nos 5/6, pp. 217-23.
Amaratunga, D. and Baldry, D. (2003), "A conceptual framework to measure facilities management performance", Property Management, Vol. 21 No. 2, pp. 171-89.
Amaratunga, D., Baldry, D. and Haigh, R. (2004), "Customer-related facilities management process and its measurement: understanding the needs of the customer", Proceedings of the CIB W70 Facilities Management & Maintenance Hong Kong 2004 Symposium, Hong Kong Polytechnic University, Hong Kong.
Bandy, N.M. (2003), "Setting service standards: a structured approach to delivering outstanding customer service for the facility manager", Journal of Facilities Management, Vol. 1 No. 4, pp. 322-36.

BIFM (2004), "Rethinking facilities management: accelerating change through best practice", available at: www.bifm.org.uk (accessed 3 August 2009).
BIFM (2007), BIFM Annual Review 2007, available at: www.bifm.org.uk (accessed 4 March 2008).
BIFM (2008), BIFM Annual Review 2008, available at: www.bifm.org.uk (accessed 1 February 2009).
Bryman, A. and Bell, E. (2007), Business Research Methods, 2nd ed., Oxford University Press, Oxford.
Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, New York, NY.
Creswell, J.W. (2009), Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed., Sage Publications, London.
Fellows, R. and Liu, A. (2003), Research Methods for Construction, Blackwell, Oxford.
Field, A. (2009), Discovering Statistics Using SPSS, 3rd ed., Sage Publications, London.
Futcher, K., So, M. and Wong, B. (2004), "Stakeholder value in facilities management", Proceedings of the CIB W70 Facilities Management and Maintenance Hong Kong 2004 Symposium, Hong Kong Polytechnic University, Hong Kong.
Hinks, J. and McNay, P. (1999), "The creation of a management-by-variance tool for facilities management performance assessment", Facilities, Vol. 17 Nos 1/2, pp. 31-53.
Kincaid, D.G. (1994), "Measuring performance in facilities management", Facilities, Vol. 12 No. 6, pp. 17-20.
Loosemore, M. and Hsin, Y.Y. (2001), "Customer-focused benchmarking for facilities management", Facilities, Vol. 19 Nos 13/14, pp. 464-75.
McDougall, G. and Hinks, J. (2000), "Identifying priority issues in facilities management benchmarking", Facilities, Vol. 18 Nos 10-12, pp. 427-34.
Massheder, K. and Finch, E. (1998a), "Benchmarking methodologies applied to UK facilities management", Facilities, Vol. 16 Nos 3/4, pp. 99-106.
Massheder, K. and Finch, E. (1998b), "Benchmarking metrics used in UK facilities management", Facilities, Vol. 16 Nos 5/6, pp. 123-7.
Parasuraman, A. (2004), "Assessing and improving service performance for maximum impact: insights from a two-decade-long research journey", Performance Measurement & Metrics, Vol. 5 No. 2, pp. 45-52.
Sarshar, M. and Pitt, M. (2009), "Adding value to clients: learning from four case studies", Facilities, Vol. 27 Nos 9/10, pp. 399-412.
Shaw, D. and Haynes, B. (2004), "An evaluation of customer perception of FM service delivery", Facilities, Vol. 22 Nos 7/8, pp. 170-7.
Tucker, M. and Pitt, M. (2008), "Performance measurement in facilities management: driving innovation?", Property Management, Vol. 26 No. 4, pp. 241-54.
Tucker, M. and Pitt, M. (2009), "Customer performance measurement in facilities management: a strategic approach", International Journal of Productivity and Performance Management, Vol. 58 Nos 5/6, pp. 407-22.
Tucker, M. and Smith, A. (2008), "User perceptions in workplace productivity and strategic FM delivery", Facilities, Vol. 26 Nos 5/6, pp. 196-212.
Varcoe, B.J. (1996a), "Facilities performance measurement", Facilities, Vol. 14 Nos 10/11, pp. 46-51.


Varcoe, B.J. (1996b), "Business-driven facilities benchmarking", Facilities, Vol. 14 Nos 3/4, pp. 42-8.
Walters, M. (1999), "Performance measurement systems: a study of customer satisfaction", Facilities, Vol. 17 Nos 3/4, pp. 97-104.
Wauters, B. (2005), "The added value of facilities management: benchmarking work processes", Facilities, Vol. 23 Nos 3/4, pp. 142-51.
Williams, B. (2000), An Introduction to Benchmarking Facilities and Justifying the Investment in Facilities, Building Economics Bureau, London.

Corresponding author
Matthew Tucker can be contacted at: m.p.tucker@ljmu.ac.uk

