by
Steve Garlick
Geoff Pryor
2004
Contents
Page
Acknowledgments........................................................................................................vi
Executive summary....................................................................................................vii
1 Introduction............................................................................................................ 1
1.1 Background to project.................................................................................. 1
1.2 Definitions of benchmarking........................................................................ 1
1.3 Evaluation and trust..................................................................................... 2
1.4 Approach........................................................................................................4
1.5 Report outline................................................................................................ 8
2 Literature review................................................................................................... 9
2.1 Background to university benchmarking................................................... 9
2.1.1 University quality auditing agenda................................................... 11
2.2 Australian university benchmarking.........................................................13
2.2.1 Early university benchmarking exercises..........................................13
2.2.2 University-specific benchmarking.................................................... 14
2.2.3 McKinnon benchmarking manual.....................................................14
2.2.4 Facilities management benchmarking...............................................15
2.2.5 Libraries benchmarking.....................................................................15
2.2.6 Technical services benchmarking......................................................16
2.2.7 Sector-wide benchmarking................................................................17
2.2.8 External relations benchmarking.......................................................17
2.3 Comments.....................................................................................................18
3 Methodology......................................................................................................... 21
3.1 Introduction................................................................................................. 21
3.2 University survey........................................................................................ 21
3.2.1 Background....................................................................................... 21
3.2.2 Use of benchmarking........................................................................ 22
3.3 University case studies................................................................................ 23
3.3.1 University involvement.................................................................... 23
3.3.2 The workshop process.......................................................................24
3.3.2.1 First workshop.......................................................................24
3.3.2.2 Ongoing discussion............................................................... 25
3.3.2.3 Second workshop..................................................................25
3.3.2.4 Combined discussion.............................................................25
3.3.3 Case study universities...................................................................... 26
3.3.3.1 University of the Sunshine Coast.......................................... 26
3.3.3.2 Swinburne University of Technology................................... 26
3.3.3.3 Monash University................................................................ 26
3.3.3.4 Curtin University of Technology.......................................... 27
3.3.3.5 Royal Melbourne Institute of Technology............................ 27
3.3.3.6 Griffith University.................................................................27
3.3.4 Evidence from the case studies......................................................... 27
3.3.4.1
3.3.4.2
3.3.4.3
3.3.4.4
3.3.4.5
3.3.4.6
3.3.4.7
7 Sector implementation......................................................................................... 53
7.1 Survey evidence........................................................................................... 53
7.2 Workshop evidence..................................................................................... 54
8 Process evaluation................................................................................................ 55
References................................................................................................................... 61
Appendices.................................................................................................................. 65
A.1 University benchmarking survey.....................................................................65
A.2 First workshop program...................................................................................71
A.3 Second workshop program...............................................................................73
A.4 Discussion kit.....................................................................................................75
Tables
Table 1.1
Table 3.1
Table 3.2
Table 4.1
Table 4.2
Table 5.1
Figures
Figure 1.1 The emerging 'show me' and 'involve me' world...................................3
Figure 6.1 Phases for an improvement cycle in university functions....................46
Figure 6.2 Reviewing the current situation.............................................................. 47
Figure 6.3 Strategic planning.....................................................................................48
Figure 6.4 Implementing actions...............................................................................49
Figure 6.5 Review of progress...................................................................................50
Figure 6.6 Learning for continuous improvement..................................................50
Figure 7.1 Ways to improve effectiveness of the McKinnon et al.
benchmarking manual.............................................................................53
ACKNOWLEDGMENTS
Considerable thanks are due to the six universities that gave their time to participate in
the project over a period of around four months. In particular we are grateful to the
following people who were a stimulus for the project in each of the six institutions:
Don Maconachie (University of the Sunshine Coast), Anne Langworthy (Swinburne
University of Technology), Adolph de Sousa (Curtin University of Technology), Jill
Dixon (Monash University), Anne Badenhorst (Royal Melbourne Institute of
Technology) and Vicky Pattemore (Griffith University).
We also want to thank the 28 universities that took the time to complete the
benchmarking questionnaire.
The Project Steering Committee provided helpful comments as the project evolved.
The committee comprised: Ms Sian Lewis, Director, Quality Unit, Higher Education
Group, Department of Education, Science and Training (DEST); Mr Ian Hawke,
Director, Office of Higher Education, Queensland Government, representing the Joint
Committee for Higher Education; Professor John Dearn, Pro Vice-Chancellor
(Academic), University of Canberra, representing the Higher Education Research and
Development Society of Australasia; Professor Adrian Lee, Pro Vice-Chancellor
(Education and Quality Improvement), University of New South Wales; Mr Conor
King, Director Policy, Australian Vice-Chancellors' Committee; and Ms Rhonda
Henry, Branch Manager, Educational Standards Branch, Australian Education
International (AEI) Group, DEST.
Particular thanks are due to Dr Claire Atkinson, Assistant Director, Quality Unit,
DEST, who managed the project for the department and ensured everything ran
smoothly.
Dr Steve Garlick
2004
EXECUTIVE SUMMARY
This project began with two objectives:
1. to develop benchmarking templates for student admission and student complaint processes, following the McKinnon et al. format, as required by the project brief
2. to review the use of benchmarking generally among universities and suggest how benchmarking might be made a more effective tool in the light of the pressures and changes impacting on the sector.
Two approaches were used to examine and report on the two objectives. First, a
survey of the use of the McKinnon et al. manual and of benchmarking generally in
Australian universities, and second, an in-depth workshop and discussion program in
six case study universities undertaken over a four-month period.
Seventy percent of all universities responded to the benchmarking survey; respondents included metropolitan and non-metropolitan, small and large, and new and old institutions across all states. The six universities taking part in the workshops also included metropolitan and non-metropolitan, old and new, and large and small institutions across three states.
While benchmarking has been increasingly used in Australian universities over the
last decade, a review of literature in Chapter 2 suggests this use has, in the main, been
superficial and peripheral. It has generally not been used for organisational
improvement in core business areas. Its development has not kept pace with changes
in recent years in performance reporting and quality auditing, with which it sometimes
gets confused. Most of the benchmarking use in universities has been for functions
such as libraries, facilities management and technical support. Apart from its use in
these areas, the McKinnon manual has been used mainly as a resource for ideas for
management.
Six universities were asked to be part of the project's case studies because they had
functional areas they were keen to improve. Apart from student admission and student
complaint processes, these functional areas included their relations with their local
and regional communities, teaching and learning strategies, research services, and
examinations and assessments.
Each of the six universities separately took part in a four-stage process of facilitated
workshops and discussion over a four-month period. The first series of workshops,
attended by an average of 20 to 25 staff and other stakeholders, including some from
outside the university, was designed to share understandings about the environment of
change in universities, as well as definitions and concepts of benchmarking. It also
highlighted issues, drivers and impediments and directions for improvement in the
targeted areas. A discussion kit, based on the outcomes of the first workshop and
tailored to each institution, was used by smaller groups formed from participants
attending the first workshop. Over a four to six week period these institution-specific
working groups gathered information, addressed the issues raised in the initial
workshop, and made specific recommendations for benchmarking in the targeted
functional area. Each of the working groups presented their finding in a second series
viii
ix
Our feeling is that a simple approach to organisation improvement along these lines,
built on principles of dialogue, wide collaboration, reflection, leadership commitment
and learning, has a better chance of encouraging an effective approach to
improvement benchmarking. Such approaches could also help reinforce the objectives
for quality improvement currently being sought by the Australian Universities Quality
Agency in their cyclical process of auditing universities.
We recommend that these principles, the report's recommendations, and the
facilitative discussion approaches and tools used in this project be further discussed
with relevant stakeholders and reviewed over a longer period. The period for this
project was too short to fully appreciate their effectiveness as a means for assisting
universities to progress their improvement objectives. Early evidence, however,
suggests that with further development they may make a useful contribution in this
area.
1. INTRODUCTION

1.1 Background to project

1.2 Definitions of benchmarking
Being clear about what is meant by benchmarking is an important first step in the
university evaluation process. As Massaro (1998) has stated, it has become a loose
term surrounded by considerable jargon, covering a multitude of sins.
Our positive response to the above definition stems from a perspective that
universities are institutions of learning and thus ought also to be learning institutions.
This benchmarking definition by Jackson and Lund is fundamentally about learning
and about how improvement can be achieved through collaboration and active on-the-ground participation. Indeed, it is the mutually reinforcing force created by learning
and collaboration (more usefully defined as building connectivity) that underpins our
approach to improvement in this report. This definition also attracts us because we
view it as contrasting with the template approach of the McKinnon et al. manual,
which we believe restricts opportunities for improvement as it does not explicitly
foster collaboration and learning.
1.3 Evaluation and trust

Figure 1.1: The emerging 'show me' and 'involve me' world. The figure plots four evaluation stances ('trust me', 'tell me', 'show me' and 'involve me') against axes of trust (low to high) and transparency (low to high): as trust diminishes, the demand for transparency in the form of assurance mechanisms increases.
Receiving significant sums of public funding and having a key role in the development of a nation's future human capital resource, as well as regional community, individual and business outcomes, universities have a considerable societal obligation to perform, domestically and internationally, in ways that deliver value for money. Universities are therefore not isolated from an 'involve me' world.
As the global and knowledge worlds have become intertwined, the university has the
potential to be a new unifying space that can contribute more to the community, the
professions, society and culture. Demands on the university from these areas for more
involvement are increasing.
The last eight to ten years have seen a growth in performance benchmarking in the
higher education environment as governments have sought increased quality in
teaching and learning, greater industry applicability in research, greater efficiency in
institutional operation, and greater prudential responsibility for the public funds
provided. Consumers (both domestic and international students, employers seeking to improve their outputs and, more recently, local and regional communities) are also expecting more from their universities. Many stakeholders are becoming less
interested in how their institution rates against others than in the increasing returns
benchmarking can generate for them over time and how they can participate in the
process to ensure this takes place.
The higher education package, reflecting these views, stated:
Institutions need to be given maximum opportunity, consistent with public
accountability and social responsibility, to develop innovative responses to
rapidly changing environments in teaching and learning, in the direction and
commercialisation of research, and engagement with industry, research
institutions and other education providers. (Nelson 2003, p. 10)
Table 1.1 provides detail on the areas explored by each university in the
benchmarking project and Chapter 3 provides details on the methodology used.
The six universities volunteered their involvement following a general invitation
provided to all universities to take part. The six universities provided coverage across
three states and included a large university, small university, non-metropolitan
university, technical university, and a semi-autonomous campus in a distributed
campus network.
Benchmarking templates for student admission and student complaint processes
following the McKinnon et al. framework, as required by the project brief, are
described in Chapter 4. Because the McKinnon et al. approach was found wanting as
a benchmarking tool that can assist the university with its improvement goals
(Chapter 5), we have gone much further and proposed an alternative approach based
on principles of collaboration, dialogue and learning to ensure consistency with an
'involve me' evaluation framework (Chapter 6).
- an investigation into how benchmarking could be used as a tool to assist the university to position itself in relation to the learning and teaching agenda highlighted in the higher education reform package, particularly focusing on monitoring performance, identifying good practice in teaching and learning, and the dissemination of outcomes
- a specific examination of how the university engaged with its community through its 'learning town' strategy
- admission and complaint processes, and how a set of benchmarks might look for this function for a metropolitan university with a decentralised, semi-autonomous campus administration.

There were three areas of interest for the project at this university:
- student admission and complaint processes from the perspective of a smaller and newer university
- the university's regional development objectives (the university saw its connections in this area as being particularly important for its future)
- positioning the university in the context of the learning and teaching agenda outlined in the higher education reform package.

There were two areas of focus for the project at this university:
- approaches to more efficiently handle complaints and grievances raised by domestic and international students
- approaches that will enhance consistency and timeliness in policy, procedures and practices relating to student examination and evaluation.
1.5 Report outline
Chapter 2 discusses issues identified in the literature associated with the design and
application of benchmarking in the university environment in Australia and
internationally. A number of examples of the application of benchmarking in
Australian universities are provided.
Chapter 3 details the methodology used in the project. It describes the survey of
universities that was undertaken and its findings. It also describes the workshop and
discussion processes that were undertaken in the six case studies in the project and
discusses some of the generic issues that came from them.
Chapter 4 presents additional templates, following the McKinnon et al. format, for
student admission processes and student complaint processes as required by the
original project brief. The templates draw on the information from the six case study
universities.
Chapter 5 discusses the McKinnon et al. benchmarking manual and the
reasons it has not been extensively adopted in the Australian university
scene. The section draws on the survey of universities and the workshops
held in the six university case studies.
Chapter 6 presents a five-phase approach to benchmarking for improvement
as an alternative to that provided by McKinnon et al. It discusses this in the
context of the issues that arose in examining the six functional areas of
university activity through the workshop process.
Chapter 7 suggests an implementation initiative to encourage greater uptake
of the proposed approach to university benchmarking across the Australian
system of higher education.
Chapter 8 reports on the feedback received on the workshop and discussion
process that was put in place for the project in the six universities.
Chapter 9 provides the conclusions and recommendations of the project.
2. LITERATURE REVIEW
This chapter reviews the literature on the environment in which universities are
attempting to apply evaluation management tools like benchmarking and the way they
are using benchmarking.
2.1 Background to university benchmarking
Benchmarking as a tool for learning about best or good practice has been building
momentum in Australian business over the last decade and a half. There are some
useful web sites that aim to assist business incorporate these concepts into their daily
practice (e.g. the Benchmarking in Australia site at http://www.ozemail.com.au/benchmark/ and the Australian Quality Council site at http://benchnet.com/aqc/).
Concerted benchmarking initiatives have been taken up less enthusiastically by Australian higher education institutions, and our literature review reveals little evidence of any university benchmarking being reported before the mid-1990s. Mostly, benchmarking has been of interest to larger universities wanting to
assess and compare their administrative functions as part of their membership of
international benchmarking consortia. Of particular focus have been functional areas
in universities such as libraries, facilities management (buildings, grounds etc.) and,
more recently but to a lesser extent, technical services (e.g. laboratories, studios)
where access to physical objects rather than human capital development has been the
main focus. In the last two years there has also been a growing interest in applying
benchmarks to the relationship a university has with its regional community.
The practice of benchmarking may have two objectives. First, it can serve as a means for assessing the quality and cost performance of an organisation's practices and processes in the context of industry-wide or function-specific best practice comparisons. This use has generally formed part of an organisation's accountability responsibility to an accrediting, funding or regulatory authority.
Second, and more fundamentally, benchmarking can be used as an ongoing diagnostic
management tool focused on learning, collaboration and leadership to achieve
continuous improvement in the organisation over time. This second objective is one of
the primary concerns discussed in this report.
Adopted from the world of business, benchmarking, as a tool for management
improvement in universities, has struggled to gain standing. Where it has been taken
up it has rarely gone beyond quantitative assessment of where an organisation sits
with respect to its competitors based around agreed indicators. From our review of the
literature it appears that few universities have chosen to adopt benchmarking as a
longer-term management tool with a core role for continuous improvement and
human resource development.
A number of factors may be behind this relatively slow adoption of benchmarking for
improvement by universities. Some of these issues have been highlighted in the
literature, and in the university survey, the case study workshops and the discussions
we undertook for this project.
The first of these factors is the complex nature of change impacting on universities,
creating an environment that makes it difficult to have a point of reference from
which to embark on a concerted program of reform. At one level is the ongoing push
and pull effect between the government, market and academia (Clark 1983), while at
another level there is a range of economic, social, demographic, cultural,
technological and spatial pressures.
As Clark (1998) has stated:
The universities of the world have entered a time of disquieting turmoil that has
no end in sight. As the difficulties of universities mounted across the globe
during the last quarter of the twentieth century, higher education lost whatever
steady state it may have once possessed. (p. xiii)
The benchmarking workshops in the six case study universities identified a number of
international and national trends and changes impacting on the higher education
sector that influence their operational environment and the directions that
benchmarking might take. For the most part, these issues and trends are consistent
with those identified in detail in other research reviewing higher education (see Clark
1995; Trow 2000; Coaldrake and Stedman 1998; Cunningham et al. 2000; Anderson
et al. 2002) and are not elaborated on in this project report. To give a context for the
remainder of the report they are simply summarised in this section as follows:
- Increasing competition from other learning program providers from the public and private sectors has tended to build a resistance to sharing experiences, practices and data within the sector.
- There has been increased use of information and communication technology for such activities as student admission, library borrowing and courseware access.
The report by Martin (2003), reflecting on the outcomes of the Australian Universities
Quality Agency auditing process report from 2002, notes the destabilising effect of
continuing, fundamental change in higher education and an inability of many institutions to 'close the loop' between strategic plans, their implementation, review and improvement.
The second factor restricting a full appreciation of the role benchmarking might play
in the university environment is that the current auditing and quality assurance agenda
prepared by the institutions with a focus on the whole institution rather than on
particular disciplines or faculties. The government was then advised on how to
allocate the Quality Assurance Program funds. Institutions were ranked in a number
of hierarchical bands, and allocations associated with these bands were on a sliding
scale so that all universities received some of the additional funding.
After three years the Committee for Quality Assurance in Higher Education was
disbanded (Anderson et al. 2000), and under subsequent arrangements universities
made their own annual submissions to the DEST based on their quality assurance and
improvement plans for the forthcoming triennium through an annual profiles process.
The plans outlined the institutions' goals and strategies, and the indicators they used
to monitor progress in achieving those quality goals. After a few years the plans were
discontinued because each institution approached its quality responsibilities
differently and because of national and international demands from customers for a
more rigorous and transparent quality assurance system.
Good reviews of the early period of quality assurance for universities are provided in
Anderson et al. (2000) and Harman and Meek (2000).
To address issues associated with university quality, and a process of independent
review, the Australian Universities Quality Agency (AUQA) was established in 2000.
AUQA is owned by the federal, state and territory ministers for higher education and
operates independently of governments and the higher education sector under a board
of directors.
The mission of AUQA is to undertake periodic audits of higher education institutions
and to report on the relative standards of the Australian higher education system and
its quality assurance processes, including their international standing, as a result of
information obtained during the audit process. AUQA has now undertaken a series of
university quality audits (refer: http://www.auqa.edu.au) and maintains a good
practice database. By participating in this audit process, as indicated in the university
survey for this project, some universities are seeing an increasing role for
benchmarking to assist them with their quality improvement preparation.
In reviewing the eight AUQA audit reports undertaken in 2002, Martin (2003) says
that universities' councils were urged by audit panels to establish appropriate
performance indicators and benchmarks and to ensure systematic monitoring of
performance. According to AUQA, a lack of appropriate 'hard data' (quantitative measures) against which to judge an institution's performance has been a limiting factor in increasing benchmarking activity (Martin 2003, pp. 13–14). From the audit
reports, Martin states:
Outside the research portfolio and some administrative and support areas,
benchmarking was found to be uniformly weak, and reference to external
comparators inconsistent. Five institutions were urged to identify appropriate
national and international benchmarking partners and others were cautioned
about the need for hard data to substantiate claims about national pre-eminence
or world-class performance. (p. 14)
Martin concludes that the real usefulness of the AUQA audit process and report is
the extent to which it is used to focus intra-institutional conversation about the
range of issues it covers and about quality improvement more generally (p. 31).
While we agree with the views expressed in this conclusion by Martin about the
AUQA process and reports, and the need for all stakeholders to see improved
outcomes, we are not convinced that the audit process, by itself, fully leads the
institution to this outcome. We are also not convinced that the uptake of
benchmarking in universities is only held back by 'hard data' problems. The problem is more significant than this: an 'audit for accountability' culture does not automatically lead on to a 'collaboration and learning for improvement' culture. It's this distinction that needs more attention.
2.2 Australian university benchmarking

2.2.1 Early university benchmarking exercises
Through the CHEMS process, university participants choose the area of their
operation that they want to have benchmarked.
CHEMS has two core functions. The first of these is to identify and publish good
practice information about management in member universities, and the second is to
help members undertake benchmarking activities in their own institutions (Fielden
and Carr 2000).
A similar club to that of CHEMS exists for English universities. Called the English
Universities Benchmarking Club, with funding from the Higher Education Funding
Council for England, it aims to support the ongoing benchmarking activities of its
members until they are self-sufficient in pursuing this activity (refer:
http://www.eubc.bham.ac.uk/action.htm).
In 1996 Ernst and Young completed a student administration benchmarking study in
Australia (Massaro 1998) to identify best practice approaches across seven
universities in relation to examinations, enrolments and graduation procedures. The
study, while providing some useful guidance for university administration on the
matter of standards, seems not to have resulted in the introduction of continuous
measurement and improvement in this area.
Universitas 21 comprises a small membership network of similar universities around
the world. This network exchanges experiences and practices in the delivery of
teaching to undergraduate and graduate students. Established in 1997, the network has
recently begun benchmarking practices in management, research, and teaching and
learning to encourage the uptake of better practices among its members. The
organisation recognised the difficulty of making performance comparisons between
institutions across national systems and for this reason focuses only on exchanging the
learning from the respective experiences of members rather than the results
(Universitas 21 1999).
2.2.2 University-specific benchmarking

2.2.5 Libraries benchmarking
Australian university libraries have led the way in the application of higher education
benchmarking in Australia (see Robertson and Trahn 1997). This interest has
stemmed mainly from the need to achieve better outcomes in university libraries in
the face of reduced resources, increased use of information technology, and increased
demand from an expanding university sector.
The Northern Territory University (now Charles Darwin University), to achieve
continuous improvement in the delivery of its research information services
(http://www.ntu.edu.au/library/bench1.htm), compared its library acquisitions,
cataloguing and information services with eight other Australian libraries, as well as
with library practice in the United States (Massaro 1998).
In 1995, the library of the Queensland University of Technology carried out a
benchmarking project with comparisons with the library at the University of New
South Wales (Robertson and Trahn 1997). The project was part of a university-wide
benchmarking exercise carried out by each faculty and division. The objective was to
improve processes within the university and to pilot and also increase awareness of
benchmarking techniques. The project led to changes in library practice in relation to
throughput time, document delivery and general research support. More importantly,
it led to a culture among staff of reflection on the various functions in the library with
a view to improvement in operations. Robertson and Trahn (1997) note in particular
the need for benchmarking exercises to be part of broader goals of quality improvement.

2.2.6 Technical services benchmarking
The absence of any benchmarks for university technical support services (such as
laboratories and studios) in the McKinnon et al. manual prompted the Office of
Technical Services (OTS) at Griffith University to design their own (Urquhart, Ellis,
and Woods 2002). The basic template presented by McKinnon was accepted as the
platform for the project and new benchmarks were defined. The benchmarks were
determined through in-house staff brainstorming. While there was some peer review, the OTS advocates greater third-party input from the upstream and downstream suppliers and users of its service. Benchmarks in learning and teaching, research, equipment, workplace health and safety, and the work environment were defined and rated against best practice identified using staff input, existing university student and employer surveys, and regular user surveys.
The Griffith OTS approach to benchmarking technical services is fundamentally bottom-up and based on consensus, with work groups thereby gaining ownership.
Through this process, changes in attitudes and behaviour among staff are envisaged to
occur along with a commitment to achieving improvement. Importantly, staff anxiety
about organisational change is reduced through their close involvement and a constant
feedback process. Benchmarks are also ratified and revised by clients of the OTS, and
there has been peer input from other universities.
In a recent publication, A report on the implementation of technical services benchmarks project (Office of Technical Services 2003), the OTS team at Griffith University reported on the results of implementing their benchmarking program.
Importantly, the OTS report identifies that implementation ratings were highest in
those areas where OTS staff had greatest control over the benchmarking process.
2.2.7 Sector-wide benchmarking
A 1998 report for the National Board of Employment, Education and Training published a set of sector-wide indicators to measure quality improvement that could be used for international comparison. While not strictly a benchmarking analysis, the project went through many of the required steps of ascertaining indicators relevant to arriving at a quality assessment on a sector-wide basis. The analysis, however, focused heavily on the availability of data in the selection of appropriate indicators. An initial list of around 80 potential sector-wide indicators was identified. Because of measurement difficulties, a failure to reflect real performance, and other limitations such as a lack of stakeholder consensus, the list was narrowed first to 17 and then to 13 core indicators for which data was readily available.
2.2.8 Community context
However, the framework is ostensibly the same as McKinnon et al.'s and, as a result, suffers from the same inadequacies (explored in Chapter 5), particularly the segregation of functions into categories and subcategories rather than joining them up to generate stronger collaboration and learning across functions.
Comments
Although barely a decade old, the literature suggests that the development of benchmarking as a tool for university improvement could be better advanced in Australia. Much of its history has been concerned with hard data measures and with application in administrative support areas; only some university functions have been benchmarked, and there is considerable scope for more work in this area.
While there has been more university benchmarking activity in recent years,
particularly in administrative support areas like facilities management, technical
services and libraries, much of it has been concentrated in the larger universities as
part of their membership of international benchmarking associations. The larger
institutions have also employed management consultants to help them find their way
through the jargon and the process of benchmarking for performance assessment and comparison.
Benchmarking appears to have gone along two routes. The first emphasises
quantitative assessment using performance indicators and auditing with the use of
templates. It gives emphasis to performance assessment rather than improvement, and
it segments rather than joins up functional areas, thereby limiting the learning process.
3 Methodology

3.1 Introduction
• add specific elements to the McKinnon et al. manual dealing with university complaints and admissions procedures for students
• explore prospects for improving the usefulness of the manual, and benchmarking generally, as a tool for university improvement.
Two methods were used to obtain the necessary information for the project. These
were a survey of universities, and in-depth facilitated learning and discussion in six
university case studies. This chapter explains how these two instruments were
implemented and what was revealed in relation to the application of benchmarking
generally in universities.
Chapter 4 uses the information from the investigation to help construct benchmarking
templates for student admissions and student complaints, following the McKinnon
format, as required by the original project brief. Chapter 5 presents the findings from
the survey and case study investigation as they relate to the usefulness of the
McKinnon et al. manual as a tool for university benchmarking. Chapter 6 draws on
the workshop discussions, in particular, to design an alternative approach to university
benchmarking for improvement based around principles of learning and collaboration.
3.2 University survey

3.2.1 Background
[Table: numbers of survey questionnaires distributed and responses received; the column headings were lost in extraction]
Source: S. Stevenson, M. Maclachlan and T. Karmel (1999); university benchmarking survey, 2003.
All correspondence about the survey was sent to the Vice-Chancellor in each
institution and a deadline to respond was set for 25 October to ensure enough time for a considered response. In all cases, questionnaires were completed by senior
management (Vice-Chancellor, Deputy Vice-Chancellor or Pro Vice-Chancellor).
3.2.2 Use of benchmarking
All but three responding organisations stated they had used benchmarking in some
way in their operations, although the extent and nature of the use varied considerably.
For many universities that stated they had experience in benchmarking, usage was
mainly for ad-hoc assessment purposes, generally for developing performance
indicators and for performance reporting or in reviewing specific areas (e.g. financial
planning). Some universities claimed they used benchmarking for strategic planning,
while others used it in their quality improvement strategy for specific functional areas
(particularly libraries, facilities management, research training and external
examination assessment). Other uses of benchmarking were staff evaluation,
professional accreditation and comparison with other institutions.
Some of the larger universities stated they used benchmarking as part of their
membership of wider benchmarking programs such as the Association of
Commonwealth Universities Management Benchmarking Program.
Table 3.2 shows the proportion of responses to the survey for each of 19 identified
purposes for which benchmarking could be applied.
Table 3.2: Purposes for which benchmarking was applied, and proportion of total responses (%)
[The category labels were lost in extraction; the surviving response proportions for the 19 purposes range from 89 per cent down to 11 per cent.]
Interestingly, benchmarking has mostly been seen as a tool for general management
improvement, strategic planning and performance assessment in certain targeted
functional areas of university activity, such as research, teaching and learning (more
than 50 per cent of responses). Building commitment to the organisation and
partnership building were viewed as being only moderately important objectives for
undertaking benchmarking (more than 37 per cent of responses). Areas such as staff
development, collaboration, service support effectiveness, and resource allocation
were given low ratings as uses for benchmarking by universities (up to 30 per cent of responses).
3.3 University case studies

3.3.1 University involvement
Each of the six universities identified areas of their activity they believed needed
some improvement and where they wanted to apply benchmarking processes. Table 1
shows the areas of focus for each university in the project. In summary:
• Two universities wanted to explore their teaching and learning activities in the light of the added focus given to this area in the government's reform package for higher education (Swinburne and Sunshine Coast).
3.3.2
Each of the six university case studies took part in a four-stage discussion process designed to help participants learn how a benchmarking regime that would lead to improved processes and outcomes might be put in place in their university.
Within this program they also reviewed whether the framework put forward by
McKinnon et al. would assist this aim.
This method was adopted as an innovative approach considered likely to elicit a richer picture of the circumstances of benchmarking practice. An integral element of
the process design was to create a learning situation where participants had an
opportunity to collect and consider information, share their understandings, and arrive
at a consensus view about future benchmarking goals and directions. Participants in the process represented a broad spectrum of stakeholder interests in the function being reviewed, drawn from all levels of authority in the university.
The Australian Universities Quality Forum workshop in June 2003 was very helpful in refining ideas and in obtaining data to use in the process. The Project Steering Committee managed by DEST also facilitated a discussion of aspects of the eventual process.
The four-stage process referred to above is discussed below in more detail.
3.3.2.1 First workshop
The first workshop lasted between three and four hours and explored the internal and
external context in which the university was operating. It facilitated an agreed
understanding within the group about language, definitions, and concepts associated
with benchmarking. Participants at the first workshop also discussed the directions
benchmarking might go in the university and the role that the McKinnon et al. manual
might play in this.
At each university between 15 and 25 participants took part in the first workshop, which generally followed the program at Appendix A.2.
Shortly after the first workshop in each of the six universities, a written report of the
workshop proceedings, along with a discussion kit that drew on the emerging issues
that were identified in the first workshop and in the literature, was produced and
provided to the group.
3.3.2.2 Ongoing discussion
The discussion kit was tailored to cover the issues of each university that were
identified at the first workshop. The kit comprised a number of modules, including an
introduction and generic subject matter about benchmarking, with tasks designed to
stimulate further more specific small group discussion and learning within each
university without the direct assistance of an external facilitator (Appendix A.4).
The internal and ongoing group discussions using the kit took place over a four- to six-week period following the first workshop. Our assessment of this process, however, is
that considerably more time needs to be made available for this phase of discussion
because of the practical difficulties in getting all stakeholders together into working
groups to fully formulate agreed directions. To be most effective, the process does
require a significant commitment of time from people who are already stretched
because of their ongoing work. Working group sizes for this phase varied from a low
of five participants up to ten.
3.3.2.3 Second workshop
Following the kit-based discussion period, a second workshop, again for three to four
hours, was held to discuss the findings from the working group discussions and to
agree on a benchmarking program that could be taken forward within the university,
including a very practical look at the impediments and opportunities that needed to be
addressed in the organisation to give it effect (Appendix A.3). A second objective of
this workshop was to assess how effective the McKinnon et al. manual might be in the
university's benchmarking formulation exercise and to work towards drafting a
benchmark assessment template for the areas each had identified as a priority.
Between 10 and 20 participants took part in the second workshop series.
3.3.2.4 Combined discussion
The final element in the case study process of discussion about university
benchmarking was to share experiences across the six case studies. Difficulties with
participant availability at the time of the year (November) meant that this had to be
done through a teleconference to meet the project timetable.
The purpose of this element of project methodology was twofold:
1. to identify common issues and views across the sector in relation to the
application of benchmarking generally in the university and in relation to the
McKinnon manual in particular
2. to see whether there was a common view about the conclusions being formed
from the project, and how an alternative approach to university benchmarking
might be implemented in the sector.
3.3.3

3.3.3.1 University of the Sunshine Coast
The University of the Sunshine Coast asked to be involved in the project for several
reasons: it wanted to learn more about benchmarking as its experience to date was
limited; and it wanted to further develop its strategic agenda in relation to its regional
development objective, its teaching and learning efforts, its student complaint and
grievance processes, and its student admission processes.
Fifteen staff from senior management, academic staff across the three faculties, and
general staff at various levels from several areas of administration dealing with
student admission and complaint processes took part in the facilitated workshops.
Participants found the workshops and discussion process helped them better
understand the difference between benchmarking for assessment and for
organisational improvement, and clarified much of the mythology and 'corporate speak' that traditionally accompanies benchmarking. They also acknowledged there
were benefits for themselves from the process in regard to undertaking their particular
jobs and their own actions.
The university committed itself to implementing the outcomes of the workshops and
discussion. Reports from the workshops and discussion were provided to the
University's Teaching and Learning Committee, Research Committee and Audit
Committee, and the profile and understanding of benchmarking in the university have
been raised accordingly.
3.3.3.2 Swinburne University of Technology (Lilydale)
The benchmarking workshops were centred on the Lilydale campus of the university
and involved 15 academic and administrative staff including senior management, as
well as desk officers. The campus of the university wanted to develop a better system
for handling student admissions and complaints, as well as to further develop its
teaching and learning strategy and regional engagement practice with the local
community.
The Academic Unit review being undertaken within the campus is committed to
developing a benchmarking agenda for 2005 and embedding it in the campus's various programs. As a result, the campus is now interested in exploring
benchmarking other areas of university activity such as student examinations and
assessments and learning from other universities about data choices.
3.3.3.3 Monash University

3.3.3.4 Curtin University
More than 25 participants took part in the benchmarking workshop process at Curtin
University where the focus was on the student complaint process and in getting
consistency in student examinations and assessment procedures and processes across
the university. The workshops and discussions included senior and junior
administrative staff, senior academic staff from a number of faculties, students,
relevant external agencies and the student union body.
A report based on the benchmarking project workshops is currently being put to
senior university management. The Teaching and Learning Committee of the university is already taking up the student complaints benchmarking issue.
3.3.3.5 Griffith University

3.3.4
During the first workshop in each of the six case studies, groups were asked what they
thought were issues that had arisen with their university's efforts at applying
benchmarking processes to date. What follows is a summary of the issues identified
and recorded by participants. These summaries are a compendium of the results taken
from the issues raised in all workshops.
3.3.4.1
For some universities, in its crudest form benchmarking is simply about identifying
some key performance indicators and downloading a range of DEST sector-wide data
to support the measures. In such processes, many felt the university senior management team simply wanted results that showed the organisation in the best light. Where universities held this view, any benchmarking exercise was seen as an additional task and an additional cost, rather than as a natural part of the organisation's ongoing activity and an investment in improvement. There was
therefore some confusion in concepts and terminology between the objectives of
benchmarking, quality auditing and performance indicators.
Workshop responses suggested that the current quality assurance auditing program
and the accountability environment had caused universities to see benchmarking as
quantitatively assessing and reporting on performance to outside agencies, at the
expense of focusing on the university's own structures, processes, and behaviours that
determine the performance outcome.
In summary, what we found was a widely held view that benchmarking was about
assessing performance against a set target. However, we also found a view that
benchmarking should be about the underlying practice behind the performance
identified as needing improvement.
3.3.4.2
There was a view among workshop participants that there was so much difference
between universities, and the functional units within them, that it was meaningless to
make serious inter-organisation performance assessment comparisons for
benchmarking purposes, even where universities broadly saw themselves as part of a group or network of like universities. This view about differences became more
pronounced as the workshop process evolved.
3.3.4.3 Use of data
When data was used for benchmarking, the experience of universities was that the most readily obtained data was often made to fit the evaluation circumstances, and that non-quantitative information might convey a better description of the situation.
Qualitative information and stories tend not to be included because they take time to
collect, are more expensive to collect than easily downloaded statistics, and are not
easily used for comparative purposes. This attitude stems from the view that
benchmarking is an external control tool and an extra task to add to those already
being undertaken.
3.3.4.4
While university staff recognise that benchmarking is a useful improvement tool and a
necessary part of the accountability environment in universities, they are deeply
sceptical about current applications of benchmarking. Benchmarking is seen by some staff as an excuse to reduce possible options for improvement to a lowest common denominator. It brings to a head the conflict between quality and quantity in
education.
There was a belief that some senior managers pay only lip-service to benchmarking processes aimed at improvement and that they are interested in performance indicators only from a numbers perspective, to prove how well they are doing. There appears to be no commitment to implement the reforms highlighted by benchmarking
outcomes. The view of the workshop participants was that benchmarking tends to be
driven by individuals in the university with a keen interest in the subject and so there
are different approaches by different parts of the university. While benchmarking may
be discussed at the highest levels of the university, the nature of the discussion seems
to focus on the productivity statistics rather than the quality improvement aspects of
the process.
A change of culture is needed in relation to benchmarking and the question is how to
bring this about. Strong leadership from senior management is needed for across-the-board improvement as a learning organisation.
The workshop process highlighted the need to have the leadership of the university
committed to making the necessary changes for the organisation to improve. The
involvement of university leadership in the workshop and discussion process varied
across case studies. Where senior staff were directly involved in the workshops, there
was an immediate commitment to implement new approaches. In other situations
participants were able to identify pathways to bring their recommendations to senior
management for attention.
3.3.4.5
Such an inclusive approach enabled people to feel they played an important role within the organisation and could make a real contribution to its improvement.
Workshops included stakeholders across functional areas (administration and
academic) at all levels within the university and, in some cases, students, the student
union, and external stakeholders from the local and regional community, state
government and industry. This participation provided a strong base from which to emphasise the importance of inclusiveness.
3.3.4.7
4.1 Background
One of the requirements of this project was to add specific elements to the McKinnon
et al. manual dealing with university complaint and admission procedures for
students, as these were not considered in the original publication.
Each of the six universities participating in the project was asked to identify
functional areas with which they felt they needed assistance to develop a
benchmarking for improvement program. Four universities included student
complaint processes and three universities included student admission processes
among the functions they wanted to explore from a benchmarking perspective.
All universities were asked to consider the appropriateness of the current template
approach provided in the McKinnon et al. manual. Comments about the manual's
approach to university benchmarking are detailed in the following chapter. This
chapter presents McKinnon et al. type templates for these two important university
functions based on the workshop process.
4.2

4.2.1 Universities involved
The following four universities were involved in the benchmarking workshop process
to explore improvements in their student complaint processes:
Monash University
4.2.2
There were many issues raised during workshops in these four universities about the
way they deal with student complaints and grievances. The following four issues were
raised consistently in the workshops:
What to measure
The main issue here is to be sure to measure what is actually happening in
terms of the complaint process, and not just rely on an end-result measure
because there is ready data available. New measures that highlight the
underlying processes associated with a student complaint may be necessary. It
is important to be aware of all the various channels or gateways through which
a student complaint or grievance may be made to enable data gathering to
occur. Because different universities use different complaint reporting and recording systems, comparisons between institutions in this area are difficult.

Data
It is important to know what data is currently being collected to assess
complaint processes and why, how records of complaint details are being kept
and what is done with them to ensure improvement can occur. The focus of the data should not be the simple recording of statistics at a particular point but, importantly, how a solution to the complaint can be reached. It is not worth
collecting the data unless there is a process that can use the data to enable
improvement to occur. Some universities may hesitate to collect complaint data assiduously as, in a superficial accountability framework, complaints may reflect on the integrity of the university.
Good practice
Good practice was seen where there is a culture for learning and the
opportunity for improvement, where staff have a chance to be heard, are
valued, and are supported at senior level and through training and mentoring.
Good practice was seen where the complainant has no fear of retribution. It was also seen to occur where staff are not threatened by receiving feedback, where senior management are involved, where there are regular meetings with students and staff, and where complaints are seen not as necessarily a bad thing but as an opportunity to improve things.
4.2.3 Benchmarking template
In accordance with the aim to add student complaint procedures to the McKinnon et
al. pro-forma framework, the following proposals emerged from the workshop and
discussion process:
Benchmarking rationale
Sources of data
This might include student and staff feedback through questionnaires, routine
service feedback forms, focus groups, and analysis of the records of
complaints lodged and responses to them across all service areas. Issues to be
canvassed should include complaint numbers logged, area of complaint,
numbers resolved, appeals processes undertaken, complaint turnaround time,
clarity of process, communication and advice for both students and staff, and
mutually agreed satisfaction with outcome.
Good practice
Good practice complaints policy and procedure should be clear for both students and staff. Statements about the practice should be readily
accessible and informative. The policy should offer both formal and informal avenues so that students, whatever their home country culture, feel confident in making a complaint and in pursuing any subsequent appeal.
The process of complaint handling should be efficiently resourced and staff
should receive appropriate training and support in carrying out this role. There
must be effective desk procedures for receiving complaints, as well as for their
recording, processing and reporting. There must be processes for discussing
and following up trends in complaints, for measuring the effectiveness of, and
student satisfaction with, policy and procedures, and for translating this
feedback into policy and procedure improvement.
Table 4.1 contains the benchmarking template created in the workshops and
discussions. The template is based on McKinnon et al.'s performance rating scheme,
which ranges from a low level (1) through to a high level (5).
Table 4.1: Student complaints benchmarking template [only the Level 5 descriptors survived extraction]
Level 5:
• Complaints policy and procedures are in place that are easily accessible and have an associated framework that enables improvements over time based on information feedback mechanisms.
• The majority of staff and students are aware of the complaints policy and procedures within the university.
• Feedback from students indicates they are confident their complaint in any area of the university is being handled effectively.
• A process exists for ensuring complaints are acted on appropriately and translated into improved policy and practice in the designated area.

4.3
4.3.1 Universities involved
Monash University
4.3.2
There were many issues raised during the workshops and discussions relating to the
processes employed by universities in dealing with student admissions. The following
issues were most consistently raised:
Different universities saw the student admission process beginning and ending
at different points. For some, admission processes included initial marketing to
potential students, potential student inquiries, registering for admission, fees,
scholarships, graduation, and post-graduation services. Others saw the process
restricted to the formal process of admitting students to the university only.
These differences between institutions make university comparisons in this
area difficult.
Good practice
In a benchmarking for improvement program good practice was seen as
having:
• effective staff training programs in place
• effective student admission policies and procedures in place (desk, phone and web)
• good channels of communication with all areas of the university
• accurate and relevant information
• quick turnaround times
• regularly reviewed admission policies and procedures
• test audits of the turnaround times of responses to inquiries of all kinds
• surveys of student and staff opinion of admission staff services.
4.3.3 Benchmarking template
In accordance with the aim to add student admission procedures to the McKinnon et
al. pro-forma framework, the following proposals emerged from the workshop and
discussion process:
Benchmarking rationale
Every university needs efficient core student administrative services covering
enquiries, admission, progression, fees and dues, graduation and scholarships.
Sources of data
Data can come from test audits of the speed, accuracy and comprehensiveness
of responses to personal, telephone and written enquiries and applications, and
from data needed for student administration and reporting purposes. There
may be surveys of student opinion about the responsiveness of the university
to their enquiries and applications.
Good practice
[Admissions benchmarking template: only the Level 5 descriptors survived extraction]
Level 5:
• Fast response times (same or next day), with comprehensive information provided to admission enquiries.
• Application response time is a priority and fast.
• Established service agreements are in place across the university that enable a proactive approach to anticipated student admission requirements.
• Feedback from students and staff on the admission service provided by the university indicates it is of a high standard.
• All necessary and anticipated information is provided and clear steps, including dates, are defined for the applicant to follow.
4.4 Comments
5
5.1
Feedback from the university survey, the six university case study workshops and the
associated discussion kit process highlighted the following points about the usefulness
of the McKinnon et al. benchmarking manual for university improvement.
On the positive side, the manual was seen as:
However, it was recognised that the environment in which universities now operate
has changed and will change further, particularly following the reforms outlined in
Our universities: Backing Australia's future (Nelson 2003). These changes mean that
a different approach to university benchmarking is required. Such approaches need to
recognise the diversity among universities, the need to move beyond simple periodic
reporting of results, an increasing societal requirement to not simply be told about
audit results but to be involved in their determination, and the need to deliver better
and sustainable outcomes to a wide range of relevant and interested stakeholders.
While a large number of universities indicated in the survey they knew about
McKinnon et al.'s benchmarking manual and had used it superficially, very few
believed their university performance had actually improved as a result of using it.
Many found the manual difficult to comprehend, complex in detail and difficult to
apply to their specific set of circumstances. If such a document and approach were to
be promoted or required, most universities believed there was a need to make it
simple and much more accessible.
There was a view that the McKinnon et al. manual had its prime focus on accounting
for performance, rather than encouraging better practice and change through organic
internal reflection, collaboration, learning and action over the long term. This view
was held despite one of the stated purposes outlined on page 1 of the manual, viz:
It provides senior staff with tools to ascertain performance trends in the
university and to initiate continuous self-improvement activities. (p. 1)
[emphasis added].
There was a general view that the manual significantly fails in the second part of this
objective, namely to initiate continuous self-improvement. There was also a view
that the first part of the objective, in limiting involvement to senior management,
ignored the equal involvement of a wide range of relevant and interested stakeholders
within and outside the university.
According to one comment in the workshops, the manual:
leaves you hanging at the performance assessment stage with no guidance to
help you improve what you do.
Other comments about the McKinnon manual from the university survey included:
that [the manual] was overly prescriptive. The manual makes too many
assumptions about how things should/must happen, that there is a need for an
allowance to be made for the range of ways that the manual will be used, and that
the overall presentation could be more dynamic.
The major criticism from [name of university]'s perspective is that the manual
was geared to the traditional university experience and did not serve to capture
the context of a regional/newer/flexible learning/non-traditional student
base/non-research intense institution in many areas.
The key issue is clearly on what institutions want out of benchmarking. How is it
going to help us improve? Thus good practice should be an essential component.
What we don't want is simply numbers and figure tables.
The university believes that the essential character of benchmarking should be
internally focused. It should aim to facilitate and enhance the management and
effective utilisation of resources and the quality of outcomes, and not be regarded
as a mechanism through which sector-wide assessments of performance or
rankings of institutions are contemplated. This view would therefore inform our
position regarding any further development of the [McKinnon et al.] manual.
There was a view that the manual was a 'one size fits all' tool that did not allow for
diversity either between universities or across functions within the one university.
Where the manual was used, for example in benchmarking library, facilities
management and technical support areas, it had to be rebuilt from the ground up.
Two-thirds of higher education respondents to the survey claimed they had used the
McKinnon et al. manual in some way in their institutions. The remaining third had not
used the manual at all. Some of the universities that had not used the manual believed
it was more suited to the needs of the larger universities than the smaller
non-metropolitan institutions.
Universities saw the manual as useful in helping them prepare for the Australian
Universities Quality Agency quality audit process by enhancing awareness among
staff of the issues involved in benchmarking. However, there is limited evidence of
the use of the manual in the portfolios presented to audit panels, according to the
Executive Director of the Australian Universities Quality Agency (Martin 2003). The
manual has been little used for other purposes, such as programs of organisational
improvement and partnership building.
Most universities saw the McKinnon manual as a peripheral rather than a core tool for
improvement purposes, saying it did not fit their particular circumstances.
Nevertheless, respondents did state that they felt the manual provided a stimulus to
implementing a logical measurement regime. There has been no wholehearted or
consistent adoption of the manual by any of the universities responding to the survey.
Table 5.1 shows the ways in which universities that claimed they used the manual
believed it could be improved to enhance its usefulness to the higher education sector.
Table 5.1: Improving the effectiveness of the McKinnon et al. manual

Category                                                 Proportion (%)
Training program                                         83
Referral service                                         78
User guide                                               72
More functionally specific detail                        56
Specifically tailored to the needs of the institution    44
Most users responding to the survey felt there was a need for some form of additional
assistance in implementing a benchmarking regime in the form of a training program
or access to a central referral service or guide, rather than making design changes to
the manual itself. Universities believed their individual circumstances were so
different from each other that a prescriptive tool like the manual offered only generic
and superficial guidance rather than practical help.
In the workshops, which facilitated an in-depth discussion of the manual pro forma,
real concerns were expressed about the approach to benchmarking as outlined by
McKinnon, and these concerns are elaborated below.
5.2
Specific perceptions
This section addresses specific concerns regarding use of the McKinnon et al. manual.
5.2.1
The most significant complaint about the McKinnon et al. manual is that it focuses, in
its pro-forma approach and language, on providing an off-the-shelf tool for readily
assessing relative performance, rather than focusing on engendering a behaviour of
collaborative improvement within the organisation. This view is supported by the
workshops and the results of the university survey.
From the university benchmarking usage survey it is clear that where the manual has
been used, it was for a limited range of management functions. These included use as
a reference to gain ideas and criteria for developing key performance indicators for
performance reporting, identifying areas for course evaluation, and for financial
planning. Most universities saw the manual as peripheral for organisational
improvement.
The manual did not encourage deeper thinking about what really is involved in
benchmarking and where improvement can best occur to get the best return on outlay.
In the workshops, comments included the importance of sharing knowledge and
experience in areas of practice, not being part of a prescriptive and rigid process with
implications tied to funding, and fostering an experience based on flexibility and
commitment, connectivity and reflection towards doing a professional job. There was
a need to change the culture of the organisation through collaboration, knowledge
exchange, learning, and recognition and reward. While the manual was seen as a tool
that could assist senior management to respond to outside accountability demands, there
was a concern that such use would lead to staff cynicism about what accountability
organisations wanted to use the information for.
5.2.2
Reductionism

5.2.3
Language
The language used in the manual was seen as turgid, jargon-based, and off-putting to
the wide spectrum of university stakeholders (i.e. those outside senior university
management), as well as to relevant external stakeholders who would have a role in
an organisation's improvement process. Culture, behaviour and participation in an
organisation are seen to be as important as (if not more important than) any structures
that are put in place. That is, many found the manual difficult to follow and therefore
tended to dismiss its usefulness on that basis.
5.2.4
The distinction between what was a leading, lagging or learning benchmark was seen
by all workshops as unclear, confusing and generally unnecessary. The whole process
of organisation improvement has to do with learning, and this should not be
compartmentalised into just a few sub-elements of what should be a holistic process
of review. The general feeling was that this differentiation should be dropped.
5.2.5
Good practice

McKinnon et al. define 'good practice' on pages 7 and 8 of the manual and use the
concept as a proxy for the so-called 'best practice', believing that term to be too
controversial. However, in the manual there is no indication of how the descriptions
of good practice contained in each benchmark are actually arrived at and what
justifies them being regarded as good practice. There is also no reference to relevant
practice that might exist outside the university system; nor is there any recognition
that good practice might change over time, given new systems, new technology and
new policy directions.
Our preference here is to use the concept 'better practice' because it is immediately
relevant to the circumstances at hand and is about improvement from whatever base
level is relevant to the university in its development phase. The question then arises:
what should happen in an organisation if the benchmarking process suggests that
'good practice' has been attained? Use of the term 'better practice' connotes a
never-ending improvement process.
A related question is who determines what the description of good practice should
be. It might be that this assessment process should include input from a 'critical
friend' (not unlike what has been occurring in trial audits which prepare universities
for the Australian Universities Quality Agency audit process), and from relevant
institutions and customers from outside the university system. In the McKinnon-type
templates presented for student admission and student complaint processes in Chapter
4, the good practice definition comprised the agreed views of around 100
stakeholder participants in the workshop process for this project with a range of
perspectives and roles associated with the university.
5.2.6
Assessment levels
The question was raised during the workshop process of what attaining a level 5 in the
McKinnon manual actually meant for the process of organisation improvement. Is it a
time to celebrate and do no more? Is it particularly relevant for all stakeholders or
simply to make senior management feel good? Is the path between level 1 and level 5
a straight line course of action, and how can this occur when the rate of learning will
vary from area to area and from time to time, depending upon a whole range of
external and internal circumstances?
The view coming out of the workshop process is that a level 5 can only be achieved
when learning is exhausted, which of course should never happen in an improvement
process. A level 5 therefore puts an artificial ceiling on improvement that ought not be
there. There should be continuous review and continuous learning about ways to
make the organisation function better. As a result, the term 'better practice' was
adopted to measure whether improvement had occurred rather than whether some
(artificial) level had been attained. Again, what counts as good practice, as measured
by a rating of 5, might be fixed at a specific point in time, but changed circumstances
will alter this considerably; the concept of 'good practice', or a rating of 5, is
therefore a static notion in a dynamic environment.
5.2.7
Pro-forma approach
The pro-forma approach of the manual was seen in the workshops as too systematised
and rigid: in reality the approach needs to be flexible and cognisant of individual
circumstances. The manual was seen as a one size fits all, topdown approach, rather
than one that enables a flexibility to respond to the many and diverse circumstances
found in the operations of a university.
The idea of a pro-forma also implies that comparisons might be made using these
same pro-forma. As stated earlier, the project has found that a comparison among
universities in any functional area is very challenging, as they are so different.
5.2.8
Participants in the final combined teleconference workshop felt there was a need to
make a distinction between a 'how to' process and a 'what to' process in a
benchmarking manual. What was needed was not a 'good practice' definition but a
process for achieving good practice or, more appropriately, 'better practice'. They
also believed it was necessary to avoid a divisive and threatening environment in
which to carry out benchmarking.
6.1
Background
The workshop and discussion process carried out in the six case study universities
revealed that the McKinnon et al. benchmarking manual was not user-friendly and did
not address all stakeholder interests in the university to bring about improvement on
an ongoing basis. In response to this, through the exchange facilitated in the second
round of benchmarking workshops, the following principles were identified as
underpinning an approach to organisation improvement through evaluation:
- leadership, including at the most senior levels, and resources made available
for both the review process and the resulting improvement initiatives
Based around these principles, a good program for quality improvement within the
university was seen as comprising the following characteristics:
- goals, policies and procedures that are accessible to and understood by all relevant
staff, students and other stakeholders participating in the process of improvement
- measures of performance for the function, with mechanisms for both internal and
external data support, including from non-university comparisons, that are
consistent with agreed improvement goals and the changing environment in which
the function has to perform
- leadership and commitment from senior management to provide the drive and the
resources to assist with an improvement program
commitment. It is an intrinsic and ongoing part of the operating environment and not
a one-off statistical exercise based only on the collection of comparative performance
indicators.
6.2
Five phases were articulated as making up one generic cycle of working towards
better practice. A number of sub-elements to each phase might be suggested;
however, this would vary for each functional circumstance.
Figure 6.1 shows these five phases, the last of which is a recognition that learning
from the previous phases can lead to further improved approaches in an ongoing
process.
[Figure 6.1: A cycle of five phases (Phase 1: Current situation; Phase 2: Strategic
planning; Phase 3: Implementing action; Phase 4: Reviewing progress; Phase 5:
Learning), linked by collaboration and leadership.]
47
6.2.1
Current situation
As Figure 6.2 shows, the first phase provides a view of the operating environment
(internal and external) impacting on the university with respect to the particular area
in which improvement is being sought. Those in the workshop identified that much
data may already be available to portray this environment. In many cases, however,
this data is in neither a simple nor an accessible form, nor in a form that enables an
overview of existing practices.
The purpose of this first phase is, therefore, to gather and make accessible to
stakeholders all available information that facilitates identifying the external and
internal factors at work (drivers and impediments), and the way they shape and
influence the present operating environment for the university and targeted functional
area. This material may include: policy and procedure documents; staff, stakeholder
and student surveys and views; staff recruitment programs; budget implications; and
wider factors and influences. An analysis of this data may highlight gaps to be filled.
There may also need to be additional input from stakeholders, including from external
groups such as professional and accrediting agencies, government, business, and local
and regional communities at this point.
Participants in the improvement program will need to reach agreement about the
nature of the operating environment impacting on the function being reviewed. At this
point there will need to be a clear statement by senior management of the importance
of the activity and a clear indication of the practical support provided for the
evaluation.
Figure 6.2: Breaking it down: 1. Current situation

Objective:
- An agreed statement by all relevant stakeholders about:
  - the effectiveness of existing policy & practice as it relates to the targeted
    function
  - the existing internal & external environment (strategic, regulatory, policy,
    societal, etc.) in which the targeted function sits.

Inputs:
- Information collection concerning the targeted function (e.g. surveys, focus
  groups, policies, procedures, workshops, etc.).
- An inclusive discussion so as to gain general agreement on the current situation
  and to identify impediments, opportunities, drivers, etc. that impact on
  improvement.
48
6.2.2
Strategic planning
There is a need to develop a strategic plan of action, along the lines indicated in
Figure 6.3, so that all stakeholders know in which direction they will collectively head
with respect to their actions. Before this can occur, however, there needs to be
agreement among the participating stakeholders about what practice is a better
practice to aim for. This may be defined by the participants, rather than being some
definition borrowed from an outside agency or group with little knowledge of the
particular circumstances being reviewed.
This phase is envisaged as an inclusive process involving all relevant stakeholders
(including those who are external to the organisation) and is initially about sharing
understandings and being comfortable about the future vision (particular goals,
language, concepts, culture, constraints, impediments and opportunities) as it relates
to their perspective on matters to do with the targeted area for improvement.
The actual process may vary according to the particular situation: it may be a
facilitated workshop process, a web-based communication process, a questionnaire
approach (less favoured) or some other approach that facilitates an exchange of
perspectives and understanding among stakeholders. A written and agreed document
that spells out and justifies areas for action, responsibility, indicators of success and a
timetable for implementation will be an objective of this phase.
Figure 6.3: Breaking it down: 2. Strategic planning

Objective:
- An agreed written strategy to improve existing policies & processes for the
  targeted function that spells out areas for action, responsibility, timetable, key
  performance indicators, better practice targets, etc.

Inputs:
- An inclusive process of all relevant stakeholders that can lead to a common view
  about goals, definitions, language, culture, concepts, impediments,
  opportunities, etc.
- Review of Phase 1 information.
- Commitment from senior management to provide resources & to implement.

6.2.3
Implementing action
The third phase is to make sure the energies and ideas that have been previously
developed and agreed to in the first two phases are put into practice. Support from
senior management will be critical at this stage.
49
Figure 6.4: Breaking it down: 3. Implementing action

Objective:
- The details of the agreed strategy are put into practice in a meaningful way, e.g.
  staff advised, training & information programs in place, review & recording
  mechanisms strengthened, performance measures in place, management agreement
  to ensure the process is carried out.

Inputs:
- Appropriate instructions from the Phase 2 strategy are conveyed to all stakeholder
  interests concerned with bringing about improvement in the targeted function.
- Stakeholders understand the task.
- Time, guidance, information, training, etc. given as necessary.
6.2.4
Reviewing progress
The process implemented in Phase 3 needs to show, after an agreed time lapse, that
improvements are actually being generated and that they are consistent with the wider
changing environment. Therefore a regular review and reporting (Figure 6.5) of the
changing operational impact of the function, as well as of the surrounding
environment identified in Phase 1, is required to see whether there are new
imperatives that need to be factored into the improvement process.
This new information needs to be reviewed through the forum of stakeholders to
ensure that change is actually taking place in the direction required.
Figure 6.5: Breaking it down: 4. Review progress

Objective:
- Collect evidence to demonstrate improvements are occurring successfully from
  the original baseline situation identified in Phase 1.

Inputs:
- Baseline data from Phase 1, the strategy plan from Phase 2 and the actions
  implemented in Phase 3.
- New data collection on the operation of the function, student and staff surveys,
  feedback from external stakeholder agencies, etc.
- A forum/process for reviewing the newly collected data.

6.2.5
Learning
A key element of the total process is that there is a point where the experiences gained
in reviewing and reforming existing practices generate new or better understandings
that need to feed back to ensure improvements continue to occur. Therefore, this
phase, as shown in Figure 6.6, should involve a formal feedback process, which
reflects upon the entire process (phases 1 to 4) and highlights the lessons and
experiences gained which can then be fed back into the process (particularly phases 2
and 3) to ensure effectiveness is reinforced.
Figure 6.6: Breaking it down: 5. Learning for continuous improvement
6.3
Comments
7 SECTOR IMPLEMENTATION
7.1
Survey evidence
From the survey results, most universities believed the McKinnon benchmarking
manual could only be a limited tool in the organisation improvement process. There
was a view it should not be used as a process for ensuring standards across the sector.
Several suggestions were proposed in the survey to see if improvements could be
made to the existing manual to increase the depth and breadth of its uptake among
universities in their institutional improvement agendas. These included making the
manual more function-specific in its detail, tailoring it to the specific improvement
needs of the individual institution, providing an accompanying user guide,
establishing a centralised referral service (including web-based) that would coordinate
and disseminate benchmarking experiences to institutions, and putting in place
training programs on how to work with the manual.
Rather than make any design changes or additions to the existing manual to make it
more accessible, most respondents to the survey believed there was a need for some
form of closer contact and advice to better appreciate, understand and implement the
full benefits of a benchmarking program for organisational improvement. Training
programs and an independent central referral service or guide were suggested as the
main mechanisms that might provide this tailored advice and encourage greater
uptake from the current low levels, as Figure 7.1 below shows.
Figure 7.1: Ways to improve effectiveness of the McKinnon et al. benchmarking manual
[Bar chart: number of survey respondents (0 to 16) favouring each improvement
method: training, more functionally specific detail, tailoring to the needs of the
institution, referral service, user guide.]
Source: University benchmarking survey, 2003
7.2
Workshop evidence
The six university workshops and the associated discussion processes also reinforced
the need for closer hands-on guidance to stimulate the uptake of benchmarking. Issues
raised during the workshops that impact on the uptake of benchmarking for
improvement, apart from senior management commitment mentioned earlier, include:
- a lack of available time to design their own processes for benchmarking that meet
their own circumstances
The results from the six university workshops suggest a benchmarking guide by itself
would be insufficient to build a momentum for organisational change through
stakeholder collaboration and learning.
The final teleconference involving all six university case studies agreed that the best
means of implementing an enhanced benchmarking regime to generate sector-wide
improvement was a web-based central referral centre. A web page, either
attached to the DEST web page or through some other university sector-wide
organisation or an independent group, could provide case studies of benchmarking
applications, background to the history and use of benchmarking in higher education,
links to other relevant benchmarking sites, and similar. It could also provide feedback
to questions from those seeking to implement their own benchmarking for
improvement program. It was also suggested that to ensure such a mechanism has a
high uptake there may be a need for some incentive from the government to
encourage greater benchmarking activity among universities.
8 PROCESS EVALUATION
As a final element in this study, each of the six case study universities was asked how
they thought the process of discussion used in the project, as outlined in earlier
sections of the report, worked as a means of better understanding how benchmarking
might occur for their university. This question is important for it provides an insight
into the means of stimulating dialogue and learning across stakeholder interests,
whatever they might be. Most importantly, the process stakeholders were asked to
comment upon goes well beyond the simple 'information dump' workshop that, we
judged from our investigations, is more commonplace in the university environment.
Improvement requires involvement, understanding and commitment across a
spectrum of interests working together toward a common goal.
The advantages of the process implemented in this project were seen to be:
- It was clear that the subject warranted attention, especially when it was initiated or
  supported by key stakeholders.
- The initial workshop enabled those with a concern and even doubt about the
  subject of benchmarking to find out what others think and gain a better
  appreciation of how it might work in their situation.
- It enabled time for reflection and for group members to meet informally between
  workshop sessions to discuss issues of relevance to the project.
It is suggested that these processes need to come from within both the university
system at large and the individual institution, so they have the commitment of senior
management; otherwise they run the risk of all the good work not being acted on. It is
also difficult to fit such time-intensive processes into an already busy university
schedule, where staff are given little time for reflection and improvement planning.
The discussion kit was seen as something that potentially could be quite helpful. We
learnt that the kit needed to be well designed and to include both generic and specific
subject area examples. The generic modules were important in giving many people an
opportunity to gain a wider understanding of the subject, but ultimately staff need
something to address that is directly relevant to their daily work
tasks. It was suggested that a web version would enable an interactive approach to
stimulating conversation around the issues across a wider group of the university,
rather than being limited to the participants in the workshop.
Another potential aid was the use of an outside facilitator or 'critical friend'. Such a
resource enables some of the difficult and culture-related issues to be brought out into
the open for discussion.
Finally, the project allowed only around four months' participation in the workshop
and discussion process by each case study university and its stakeholders. This was
not enough time to fully judge the outcomes that can come from a process like this,
although the early indications of commitment to change were very positive.
Conclusions
When this project began it had two objectives. The first was to add specific elements
to the McKinnon et al. benchmarking manual dealing with student admission and
student complaint processes. These two functions of university responsibility were not
addressed in the original publication. The second objective was to review the use of
benchmarking generally among universities and to suggest how it might be a more
effective tool in the light of the external pressures and changes now impacting on the
sector, and the reforms contained in the government's package on higher education.
The project found that the McKinnon manual was not seen very positively as a
benchmarking tool that could assist universities with their improvement agendas. Its
uptake among universities has been low. Where it has been used, it was mostly for
peripheral purposes. Some areas of university activity, such as facilities management,
libraries and technical services, appear to have gained some benefit from the manual
as a framework for their benchmarking work. There is little evidence the manual has
stimulated university-wide organisational improvement, particularly in core areas of
university activity.
Generally, the manual has been seen as too prescriptive, its language confusing and
not attuned to the diversity between and within university operations. The manual has
a 'one size fits all' approach that might have some relevance to the large university
but is generally of minor use to others. It was seen as too top-down and not inclusive
of those people, in the university or even among relevant stakeholders outside the
institution, responsible for 'making things happen' on a daily basis. By not
building these people connections from the start, the improvement process suffers
from a diminished knowledge exchange and learning environment. Facilitating these
environments in institutions is important for stimulating the creative outcomes that
take the institution to a new level of improvement, competitiveness and viability. The
McKinnon et al. approach was seen as anathema to what a learning organisation
should be, yet learning is what university core business is supposed to be. It offered
no instruction to assist universities to reflect on their functions with stakeholders and
customers.
The manual in a number of respects has added confusion to the subject of
benchmarking. First, universities found the leading, lagging, and learning benchmarks
difficult to distinguish in a practical sense. In the study it was argued that all
improvement ought to be about learning so there should be no need to make this
three-way distinction. Second, segmenting university activity into 67 sub-elements,
while omitting many others that universities felt should be included or expanded,
was a divisive approach to what should be a collaborative effort across university
silos and with other stakeholders. Third, a self-assessment rating scale of 1 to 5
against a definition of good practice was seen as not taking the organisation forward
on an improvement agenda. The manual does not answer the question of what a score
of 5 means; the reality is that learning about improvement should never be
exhausted. Not unrelated to this is the way the manual is laid out: it is difficult to
see a causally linked improvement process in it. Fourth, the language of the manual
was seen as turgid and off-putting to the wide spectrum of stakeholders who ought to
be involved in processes of organisational improvement.
Chapter 4 of this report provides template pro formas for student admissions and
student complaints, as required by the original project brief. However, all six case
study universities in the project had great difficulty constructing a benchmarking
pro forma along McKinnon lines that would be helpful for their chosen priority area.
More important than any of these concerns about the manual, however, is that for
many participants in this study the manual contributed to uncertainty about reviewing
one's own performance with a view to improvement in the university environment.
There is a concern about revealing performance in case it is penalised, rather than
seeing benchmarking as the process of learning to improve in a collaborative way. In
such an environment, needing to improve is somehow seen as a weakness, rather than
judged as demonstrating the leadership character of wanting to be better.
Benchmarking is seen by many in universities as an additional task with resource
costs, rather than a natural part of the way one operates and, as such, is seen in a
superficial or peripheral way. Benchmarking, based on the McKinnon et al. way of
doing it, is seen as a tool for senior management or outside funding and regulatory
agencies to 'do over' the organisation, rather than one that can build teamwork.
Nevertheless, despite all of this, the McKinnon manual has at least put the subject of
benchmarking more to the fore amongst the many things that universities are required
to deal with in establishing themselves as viable and responsive organisations. The
task is to take benchmarking from being perceived as a short-term, hard, data-driven
assessment exercise for regulatory purposes to one concerned with fundamental
improvement predicated on collaboration and learning. An increasing requirement by
society for involvement in the evaluation and review of institutions like universities
will be a result of an evolving global knowledge world.
For all of the reasons stated above, we do not support simply adding elements for
student admissions and student complaints using the McKinnon manual framework.
Nor do we support any other means for enhancing the basic McKinnon et al.
framework to assist universities with their improvement programs. Organisational
improvement is more personal than the template approach advocated by McKinnon et
al. It needs to incorporate leadership, commitment, learning and collaboration. To get
universities to fundamentally improve what they do requires a different approach to
benchmarking than we have so far seen. It needs to be simple and it needs to be
inclusive.
In this project we have adopted an approach to inquiring into benchmarking that
stressed dialogue (learning to think together) among stakeholders, as it was clear this
was a significant missing element from the model put forward by McKinnon et al.
Over a four-month period, two facilitated workshops with a range of university
stakeholders, a discussion kit and a teleconference were used with each of six case
study universities that wanted to improve what they were doing in certain areas.
Ideally a process of open and mutual inquiry and opinion forming like this should
occur in-house, but it might also gain from some carefully measured outside
assistance, and ideally it should extend over a much longer period of time as opposed
to the four months that the project allowed.
Nevertheless, the results achieved in this project and the feedback received from the
participants over this relatively short period suggest we were on the right track with
the approach that was put in place. Already a number of participants in the project are
progressing their benchmarking through the learning for improvement approach.
Therefore, we have proposed an approach to organisational improvement that
comprises five phases of activity. In summary these phases involve: reviewing the
current environment impacting on the area where improvement is being sought;
agreeing to a strategy plan to implement initiatives and agreeing upon a performance
assessment regime; being committed to implementation; reviewing progress; and
learning for continuous improvement. We have provided a generic template for
improvement along these lines. Institutions and their stakeholders, with some outside
assistance, should design their own specific program for improvement around the
generic approach and principles suggested in this report.
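The five phases summarised above are presented in this report as a generic template, not as software. Purely as an illustration of how an institution might track its own program against that template, the phases can be sketched as an ordered checklist; the structure and function names below are our own invention and are not part of the report's template.

```python
# Illustrative sketch only: the five generic phases of the proposed
# improvement approach, modelled as an ordered checklist that an
# institution could adapt with its own stakeholders and measures.
PHASES = [
    "Review the current environment impacting on the area for improvement",
    "Agree a strategy plan and a performance assessment regime",
    "Commit to implementation",
    "Review progress",
    "Learn for continuous improvement",
]

def next_phase(completed):
    """Return the first phase not yet marked complete, or None when done."""
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None
```

The ordering matters: each phase feeds the next, and the final learning phase loops back into a fresh review of the environment.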
Our feeling is that a simple approach to organisational improvement along these lines,
built on principles of dialogue, collaboration, reflection, leadership commitment and
learning, has a better chance of encouraging a whole-of-organisation approach to
improvement. It would also go some of the way to overcoming much of the negative
attitudes associated with the approach to university benchmarking and the auditing
agenda that currently exist.
For these reasons we also suggest that access to hands-on learning and activity about
benchmarking for improvement needs to be enhanced to give universities the
confidence to take it on with greater enthusiasm. Web page access, via a central
facility, is suggested. Such a web page might include background and literature about
benchmarking in the university environment and case studies of successful
benchmarking in higher education, as well as provide guidance to those undertaking
benchmarking for improvement.
Increasingly, the Australian Universities Quality Agency quality audit process is
asking universities to explore benchmarking for improvement. We feel the process
outlined in this report could assist this agenda.
9.2 Recommendations
(a) That the McKinnon et al. manual not be revised or enhanced as a tool that can
assist universities with their benchmarking for improvement objectives.
Benchmarking for improvement in universities requires an approach that is more
personal and tailored to individual institution and function circumstances, involves
a wide cross-section of relevant stakeholders, and is based on learning.
(b) That consideration be given to providing direct assistance to those universities that
want to put in place their own programs of organisational improvement based on
principles of learning, collaboration and leadership, following the approach
outlined in Chapter 6. This assistance could be in the form of workshop and
discussion facilitation and a central web-based advisory service.
(c) That the discussion kit compiled for this project as an initial effort be developed
into a more comprehensive tool for either web or hard copy use by interested
universities.
(d) That the approach to university improvement outlined in this report be trialled and
evaluated over a 12-month period in a university situation, as the four-month
period allowed in this project was judged too short to be fully effective.
(e) That the approach suggested in this report be presented as a workshop at a
forthcoming Australian Universities Quality Forum, perhaps including progress
reports from universities that participated in this project, to elucidate responses
across a wider array of institutions and functions.
REFERENCES
Anderson, D., Johnson, R. and Milligan, B. 2000, Quality assurance and
accreditation in Australian higher education: An assessment of Australian and
international practice, EIP Report No. 00/1, Department of Education, Science
and Training, Canberra.
Anderson, D., Johnson, R. and Saha, L. 2002, Implications for universities of the
changing age distribution and work roles of academic staff, Department of
Education, Science and Training, Canberra.
Butcher, J., Howard, P., McMeniman, M. and Thom, G. 2002, Engaging community
service or learning?: Benchmarking community service in teacher education,
Department of Education, Science and Training, Canberra.
Charles, D. and Benneworth, P. 2001, The regional contribution of higher education:
A benchmarking approach to the evaluation of the regional impact of HEI, Centre
for Urban and Regional Development Studies, University of Newcastle upon
Tyne.
Clark, B. 1983, The higher education system: Academic organisation in cross-national perspective, Pergamon, New York.
Clark, B. 1995, Places of inquiry: Research and advanced education in modern
universities, University of California Press, Berkeley.
Clark, B. 1998, Creating entrepreneurial universities: Organisational pathways of
transformation, Pergamon, Oxford.
Coaldrake, P. and Stedman, L. 1998, Australia's universities confronting their futures,
University of Queensland Press, Brisbane.
Commonwealth Higher Education Management Service (CHEMS) 1998,
Benchmarking in higher education: An international review, CHEMS.
Cunningham, S., Ryan, Y., Stedman, L., Tapsall, S., Bagdon, K., Flew, T. and
Coaldrake, P. 2000, The business of borderless education, Evaluations and
Investigations Program, Report No. 00/3, Department of Education, Training and
Youth Affairs, Canberra.
Delfgaauw, T. 2000, 'The Shell story: A journey towards sustainable development', in
the presentation series From sustainability to Dow Jones, organised by Edmonds
Management, Melbourne, Australia.
Fielden, J. 1997, Benchmarking university performance, CHEMS monograph No. 19.
Fielden, J. and Carr, M. 2000, CHEMS International Benchmarking Club,
Chapter 15 in N. Jackson and H. Lund (eds), Benchmarking for higher education,
The Society for Research into Higher Education, Buckingham.
APPENDICES
A.1 University benchmarking survey (to all Vice-Chancellors)
Professor Paul Thomas
Vice-Chancellor
University of the Sunshine Coast
Maroochydore DC Qld 4558
The attached survey asks some particular questions about university benchmarking
experience that would be helpful for the project. The AVCC have supported the
gathering of this information.
The initiatives flowing out of the Government's higher education policy framework,
Our Universities: Backing Australia's Future, as well as observations being made in a
number of completed audits undertaken by AUQA highlight a need to move quickly
in exploring options for supporting universities to develop their benchmarking
capability. The present project will provide input into this.
The benchmarking review project is due for completion towards the end of
November. We are therefore keen to learn about your experiences by Monday 20
September this year.
Should you require further information, either about the project or about this
information request, please contact either me or Dr Claire Atkinson, Quality Unit,
DEST, ph: 02 6240 5397, email: claire.atkinson@dest.gov.au
Yours sincerely
Dr Steve Garlick
Director
20 August 2003
4. From your perspective, what are the main purposes for benchmarking in the
university environment? (mark relevant boxes)
a) University-wide improvement
b) Reflection and strategy planning
c) Building organisation-wide commitment to goals
d) Substantiating achievements
e) An aid in transparent planning
f) Particular functional area improvement
g) Opportunity identification
h) Partnership strengthening
i) Dissolving boundaries within and between universities
j) Strengthening service support links
k) Building performance assessment into learning and teaching
l) Research performance
m) Research training performance
n) Keeping ahead of competition
o) Enabling internal performance-based resource allocation
2.
Program
Question 2
What issues have arisen for Swinburne as a result of its
benchmarking efforts?
15.00
Question Group A
Question Group B
A.3
1.
2.
Program
Plenary session
10.10
Presentation by group on discussions held since first workshop and
proposals that have been identified
The group has been furthering its understanding of the application
of benchmarking in relation to student admissions and complaints,
teaching and learning and regional development at USC with the
assistance of the Discussion Kit. The group has been identifying
ways benchmarking might be made a more explicit part of
University improvement in these areas and what the issues are that
need resolving in making this effective.
10.25
10.45
11.30
11.50
12.30
12.50
Review of project
Include feedback from participants on the use of the two rounds of
workshops and discussion kit process in the University of the
Sunshine Coast.
1.00
Swinburne University of Technology
Stage One
Introduction
Module 1
Module 2
Module 3
This module addresses University admissions procedures for both national and
international students.
Module 4
This module addresses University complaints procedures for both national and
international students.
Module 5
Module 6
Module 7
Attachment
Introduction
Purpose of the Discussion Circle Kit
This Discussion Kit comprises a number of modules with tasks designed to
promote small group discussion and participation around issues relating to the
University Benchmarking Project that are relevant to the circumstances of the
individual university. It is a task-oriented guide to facilitate dialogue among
stakeholders and the collection of additional information.
The initial modules contain the broad issues which have been identified by DEST
as important ones to think about. Subsequent modules deal with issues
considered as being especially relevant to each university.
Preparation
As a facilitator, or the person first getting the group together, it is important to be
organised and familiar enough with the issues to help discussion flow. Going
through the material beforehand and thinking a little about it will help. Remember
that some of the points below will apply more to larger groups than to a small
group of colleagues.
Your job as facilitator also includes coordinating and stimulating (but not
necessarily doing) the practical organisation - making sure the group has what it
needs for the session (e.g. photocopies of the relevant section of the kit, pens
and butcher's paper if you plan to use it for taking notes) and encouraging
dialogue.
As a facilitator, you play a vital role in helping the group work well together - for
example, setting a positive tone and letting others have their say before
expressing your own opinions. The group may decide to share the role of the
facilitator, so everyone has a chance to develop their skills in this area. If so,
make sure people read this guide before they start to facilitate.
There are three key points to determine at your first meeting:
the group's priorities for discussion;
how much time people are prepared to spend on the group in the first
instance; and
who will record what is said and how and when it will be fed back to the
project team.
Module One
Background
Genesis
This project is part of a review being undertaken by the Department of Education,
Science and Training (DEST) of the McKinnon, Walker and Davis (1999)
publication Benchmarking: A manual for Australian universities
www.dest.gov.au/archive/highered/otherpub/bench.pdf
The manual is an important tool for Australian universities to measure and
improve performance. Since its production, there have been changes in Australia
and internationally that make it necessary to consider the currency of the
document. To ensure its ongoing relevance in the rapidly changing global
educational environment, a revision of the manual and of the implementation of
benchmarking initiatives is now an important priority.
A quick environmental scan suggests that a range of national and international
activities have significance for updating the current manual. While some of these
may simply signal the need for some unpacking of a broad benchmark described in
the manual, others challenge the way particular university functions are
portrayed within the benchmarking framework.
Among the issues that need consideration in updating the manual are:
the critique of the current benchmarks
recent relevant research and activities related to the university
benchmarking agenda
the experience of those who have been using the benchmarks
the challenge to identify stronger leading indicators
links to the Government's objectives and priorities for higher education
While uptake nationally of the McKinnon Benchmarking Manual generally appears
patchy, there is now greater awareness by universities of benchmarking as a
management tool. Some areas within universities have developed aspects of the
Manual to a greater degree to suit their specific needs. There have also been
requests for enhancements to the Manual in the specific areas of complaints and
admissions procedures for domestic and international students that were earlier
omitted (see Chapters 7 and 10 of McKinnon et al.), and in the area of community
engagement.
This project is a review which aims to: (a) add specific elements to the Manual
dealing with university complaints and admissions procedures for domestic and
international students; and (b) explore prospects for generally enhancing the
usefulness of the Manual in the light of the changed environment for universities,
university diversity, and university benchmarking experiences. The report
produced from responses under aim (b) will assist DEST decide whether to
undertake a more extensive stage two update of the Manual.
Project design
This project is being implemented at two levels. At one level, general
information is being collected about university experiences with
benchmarking in Australia and internationally and in using the McKinnon
manual. At another level, a process of self-exploration is being
undertaken in collaboration with six universities through a series of
workshops and discussions. This kit aims to assist this process.
Each of the six case study universities in the project will participate in a
four-stage process of knowledge exchange. The four stages include:
An initial workshop of three to four hours that explores and gains
agreement amongst the stakeholders to the design characteristics
of an effective benchmarking framework for designated university
functions.
A period of information collection and resolution formation to
assess university performance and to identify impediments. This
period will last around four to six weeks. The Discussion Kit is
particularly designed to assist with this phase of the project.
A second workshop to agree on the performance assessment and
to identify and agree where improvement can be made.
A third workshop comprising representatives from the six
participant universities that will identify common issues across the
sector relating to university benchmarking performance.
The project includes an opportunity for specific university objectives to be
addressed. These will vary according to the particular issues of concern to the
university. Thus this kit is also to be used by groups to cover the prime interests
of the university rather than the more general issues of McKinnon et al., although
responses are being sought from each participating university on these issues as
well.
Project implementation
The table on the next page identifies the six universities participating in
the project.
Table 1
University/functional area
Curtin University of
Technology
Area of focus
Griffith University
The focus here is to identify benchmarking processes, 'good practice' definitions and current
performance assessment in the provision of research office support to faculties and senior
executive in relation to research policy, grants, statistics, publications, senior officer advice, etc.
A particular focus here is on student admissions and complaints processes, given the size of the
student population, the high proportion of international students, and the multi-campus nature of
the University.
The focus here is on relationships the Community & Regional Partnerships Group is fostering
between the University and the local and regional communities in which the RMIT has a presence.
The Office is already in the process of developing performance-based indicators of these
relationships and so this project could assist in further enhancing these indicators. The approach
could either be specific to a particular location (e.g. Hamilton/Grampians, North
Melbourne, Inner Melbourne or East Gippsland) or it could be more generic, drawing on these areas to give
specific focus to what is developed in keeping with the work of the Community Indicators
Working Group.
Monash University
RMIT
There were two areas of focus for the project at this university.
Approaches to more efficiently handle complaints and grievances raised by domestic and
international students.
Approaches that will enhance consistency and timeliness in policy, procedures and practices
relating to student examination and evaluation.
Swinburne University of
Technology
Expected outcomes
Project-specific outcomes
In keeping with the general objectives of the project there will be two sets of
outcomes:
This Kit is a tool to use to assist in collecting the data that will address these two
objectives.
Each participating university may have its own areas where it wishes to consider
benchmarking practice. This kit allows for these circumstances, and this section will
be used to spell out the desired outcomes required by each participating university.
For Swinburne University, the outcome will be:
NOTE: It will be useful for all participants in these Circles to have access to
copies of the McKinnon et al report. Refer:
www.dest.gov.au/archive/highered/otherpub/bench.pdf
Module Two
Broad benchmarking issues
This module addresses broader benchmarking issues and acts as a vehicle for
subsequently considering specific issues for the University. The general issues are:
benchmarking experiences and initiatives undertaken by Australian universities,
challenges associated with designing and implementing effective benchmarking
regimes,
Background
In Australia the adoption of benchmarking has become more prominent on the back
of the emergence of knowledge as a significant driver of global competitiveness.
Government has also required increased quality in teaching and learning, greater
applicability in research undertakings, greater efficiency in institutional operation,
and greater prudential responsibility for the public funds that are provided.
Consumers are also expecting more from their universities: students investing in
their own human capital and, more recently, communities looking for a contribution
through universities' presence as community organisations. Stakeholders are
less interested in how 'their' institution rates against others, and more interested in
the increasing returns it can generate over time.
The report Universities at the Crossroads (2002) has stressed the importance of
monitoring and continuous improvement in the way universities deliver
improvements in quality, efficiency and probity in a way that involves external
scrutiny from stakeholders and regular reporting of outcomes to the community. A
benchmarking regime can provide a framework for this to occur.
Benchmarking has two objectives. First, it is used as a means for evaluating the
quality and cost of organisational activities, practice, and process in the context of
industry-wide 'good' or 'best practice'. Second, it can be used as a diagnostic
management tool to achieve continuous improvement in the organisation over time.
While a number of universities have pursued the former objective in various ways,
considerably fewer have chosen to adopt benchmarking as a longer run management
tool for continuous improvement.
Organisations such as the National Association of College and University Business Officers (NACUBO),
the Commonwealth Higher Education Management Services (CHEMS) club, the
HEFCE Good Management Practice programme, the EMSU, the Association of
Commonwealth Universities (ACU) and others have all facilitated higher education
benchmarking in one way or another. Several Australian universities have been part
of some of these initiatives.
TASKS
Task 1
At the AUQF workshop mentioned above, other points about benchmarking were
made, as follows:
A key issue is whether the McKinnon et al. document is supposed to be
prescriptive or advisory; there is uncertainty about this! There are built-in
assumptions within the document.
It is important that this benchmarking is not just to set up a process for DEST to use.
There is a need to foster a culture of discourse and debate about benchmarking
within the university.
Task 3
Task 4
What does your group believe are the challenges associated with
designing and implementing effective benchmarking regimes in your
University?
Task 6
The task is to review this pro-forma and its structure for its usefulness
when applied to the situation of your University.
Task 7
McKinnon et al. have identified nine major categories and 67 benchmarking
sub-categories for university performance assessment. The nine categories
cover governance, planning and management; external relationships;
financial and physical infrastructure; learning and teaching; student
support; research; library and information services;
internationalisation; and staffing.
The task is to review these categories and sub-categories as to their
relevance for your University and identify additional or replacement
elements.
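For groups working through this task, a minimal tally of decisions across the nine categories can be sketched as follows. The category list comes from the manual; the decision labels and the function are hypothetical conveniences for illustration, not part of the McKinnon framework.

```python
# Illustrative sketch only: tallying a group's review of the nine
# McKinnon categories, marking each as 'keep', 'replace' or 'drop'.
CATEGORIES = [
    "Governance, planning and management",
    "External relationships",
    "Financial and physical infrastructure",
    "Learning and teaching",
    "Student support",
    "Research",
    "Library and information services",
    "Internationalisation",
    "Staffing",
]

def tally(decisions):
    """Count review decisions across the categories; undecided means 'keep'."""
    counts = {"keep": 0, "replace": 0, "drop": 0}
    for category in CATEGORIES:
        counts[decisions.get(category, "keep")] += 1
    return counts
```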
Task 8
Task 9
In Task 8 above, the words 'good practice' were used. How would
your group determine what is good practice?
Student services
Student administrative services (See also Benchmark 3.6)
Lagging
Benchmark rationale:
Every university needs efficient core student
administrative services covering enquiries, admission, progression, fees and other
dues, graduation, and scholarships which are oriented to student service and which
are efficient and economical.
Sources of data:
The data will come from test audits of the speed, accuracy and
comprehensiveness of responses to personal, telephone and written enquiries, and of
the reporting of data needed for student administration and reporting purposes.
Surveys of student opinions of student administrative service attitudes and
responsiveness.
Good practice:
Administrative staff service attitudes and competencies, characterised by speedy,
accurate and complete answers to enquiries, both external and internal, and further
follow-up which ensures complete enquirer satisfaction, whether from potential
students or relatives, current students, past students, or staff members. Sufficiently
efficient processes at enrolment, graduation and other peak times to avoid long
queues and delays. Student services linked efficiently to financial services for prompt
acknowledgement of payment of fees, billing etc. Prompt provision within the
university of class lists, room allocations, examination requirements and other
student data ensuring efficient student administration. Sufficiently modern hardware
and software to provide immediate on-demand responses about individual students
and groups of students to those who need to know (that is, for students themselves,
course coordinators, heads of departments and university executives).
Good practice will also provide automatic computerised student data reports and
flagging of problems regarded by the university as important (e.g., overdue
payments, particular student success records, students not meeting progression
requirements, classes not meeting minimum enrolment requirements, mismatches of
room allocation and class sizes, etc.).
Levels:
Level 1: Unfavourable student opinion of services. Limited, unresponsive services. Limited-function hardware. Slow, non-comprehensive services. No benchmarks. Spasmodic monitoring without comparisons. Occasional adjustments but not systematic.
Level 3: Equivocal student opinion of services. Waiting times. No systematic monitoring or follow-up of service levels. Limited hardware functionality. Limited benchmarks. Regular monitoring and regular adjustments.
Level 5: High student opinion of service. Low waiting times. Systematic tracking and follow-up of enquirers. Full-service computers with modem availability. Comprehensive benchmarks of the quality of services. Use of objective standards for monitoring all key services.
Self assessment:
Check assessment:
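The 1/3/5 level descriptors above anchor the self assessment and check assessment. As an illustrative sketch only (the data structure and function below are our own assumptions, not part of the McKinnon pro forma), an assessed level can be mapped back to its nearest anchor descriptor, with levels 2 and 4 falling between the described anchors.

```python
# Illustrative sketch only: abbreviated anchors for the 1/3/5 level
# descriptors used in a McKinnon-style pro forma.
LEVEL_ANCHORS = {
    1: "Unfavourable student opinion; limited, unresponsive services; no benchmarks",
    3: "Equivocal student opinion; limited benchmarks; regular monitoring",
    5: "High student opinion; comprehensive benchmarks; objective standards",
}

def describe_level(level):
    """Return the anchor descriptor at or below the assessed level (1-5)."""
    if not 1 <= level <= 5:
        raise ValueError("levels run from 1 to 5")
    anchor = max(a for a in LEVEL_ANCHORS if a <= level)
    return LEVEL_ANCHORS[anchor]
```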
Module Three
Admissions procedures for
national and international students
This module addresses issues associated with University admissions procedures for
both national and international students.
Background
The Commonwealth Department of Education, Science and Training (DEST) identified
gaps in the McKinnon Benchmark manual with respect to the management of
admission and grievance procedures. The Department has commissioned this
Benchmarking Project to explore the development of benchmarks on university
complaints management processes and university admissions procedures, for both
national and international students. Issues such as admission practices are emerging
currently as concerns for several western governments. For example, the UK
Government recently asked a team to identify good practice in university admissions
with a view to subsequently issuing a statement of principles about admissions, for
adoption by universities.
At the recent AUQF workshop held in Melbourne in June 2003, the subject of
benchmarking was discussed. In responding to a question regarding benchmarking
of university student admissions, the discussion included the following:
Why should we benchmark? The following reasons were provided (with some
editing to make the points clearer):
In support of a university strategy
A gap in management has been identified
To differentiate the university
To learn about the situation at the university
To improve the existing processes in the university
To ensure integration into existing systems within the university
To support key performance indicators in place
Incorporate into strategic plans of the institution
As a feedback loop
Tasks are listed on the following page.
TASKS
Task 1
In one of our workshops the comment was made that, in terms of the student
admission process, 'commencement starts very early'.
Task 2
In another of our workshops the following question was asked when considering
student admission benchmarking processes: What principles do we use in setting
and applying benchmarks?
Task 3
Task 4
What are the data sources (formal and informal) that might help in
making an assessment of current performance against the benchmark?
Task 6
Using the answers to the two previous questions, can you develop a
pro-forma for Swinburne University along the lines of that supplied by
McKinnon?
Task 7
Module Four
University complaints procedures
This module addresses issues associated with University complaints procedures for
both national and international students.
Background
At the recent AUQF workshop held in Melbourne in June 2003, the subject of
benchmarking university complaints processes was discussed. One report considering
this issue included the following list of points. [Note: some editing has taken place to
make the points clearer]:
TASKS
Task 1
The group could review these statements and come to their own
conclusions
Other comments from the same AUQF workshop about benchmarking a complaints
system were as follows
How do we set up good systems? What is a good system characterised by?
It is not just a list of factors but a systemic approach, e.g. capture the complaint,
record it, correct the problem.
Qualities of a good system include: complaints are heard; the issue surfaces and is
acted upon; the system is transparent, accurate and accessible; it contains natural
justice; it has the separation of roles built in; it is right, consistent, efficient and
dynamic; and it is linked in as part of an overall effective system.
Task 2
Task 3
What are the data sources (formal and informal) that might help in
making an assessment of current performance against the benchmark?
Task 4
Using the answers to the two previous questions, can you develop a
Swinburne pro-forma along the lines of that supplied by McKinnon
Task 5
Module Five
Specific Swinburne issues
Admissions and complaints
This module follows up the issues of benchmarking university admissions and
complaints functions and processes (for national and international students), and
proposes how a set of benchmarks might look for Swinburne University in this area.
Note that Modules three and four above provide some core materials on
these subjects and were partially covered in the August Benchmarking
workshop.
TASKS
Task 1
We suggest that the group review Modules three and four to ensure
that they have covered the general issues of benchmarking raised in
them.
What follows are specific issues drawn from the August workshop to enable a more
detailed drilling down process to take place.
Among responses to the initial workshop questions regarding future trends and
changes over the next 10 years and issues which have arisen for Swinburne as a
result of its benchmarking efforts, a number of related points were raised about the
context in which Swinburne will operate in attracting students. These included the
following:
(The general) University stranglehold on pre-university education will change
(Swinburne is) looking at the transition from secondary education to first year
students
Word-of-mouth (among prospective students and their parents about the
University) is significant
Recognition of prior and concurrent learning will grow; this requires links to
others, including the private sector
Collaboration with other organizations and sectors will increase
Task 2
The group might flesh these thoughts out and develop a coherent
statement as to the environment in which Swinburne believes it will
operate over the next 10 years in attracting students.
In considering future trends and reflecting upon past experiences in the plenary
session of the August workshop, the comment was made that 'administration
system challenges will increase'.
Task 3
The group could elaborate upon what these future challenges will be
for Swinburne, bearing in mind such later comments as 'Understand
the limitations of technology; recognise systems might not always do
what is expected; things outside the norm do exist'.
In the early plenary session of the August workshop, a number of comments were
made regarding benchmarking. Some of these are:
• The definition of benchmarking is complex
• (What is) comparative reflection compared to benchmarking?
• What are we going to benchmark against?
Task 4
What responses would the group provide to the issues raised in these
three comments when applied to student admissions and complaints
for Swinburne?
In practical terms, how do the operations of the Student Centre fit into
the framework outlined above?
Module Six
Specific Swinburne issues
Teaching and Learning
This module addresses benchmarking Swinburne University's teaching and learning
processes, and suggests how a set of benchmarks might look for Swinburne
University in this area.
TASKS
In one of the plenary sessions of the August workshop, regarding future directions,
the comment was made that there will be 'Increased appreciation of quality
of learning, and this will include other dimensions of learning and outcomes outside
traditional forms. Can look around the world for examples of this happening!'
Task 1
This issue should be teased out further and its implications for
Swinburne identified.
The issue of staff at Universities in the future was also raised in the plenary session
regarding future directions. The following three points are examples:
• Credentialism among staff is rising
• To be an academic will require higher qualifications; there is also a shift in
the way these qualifications are measured, eg PhDs assessed in ways other
than long written dissertations
• There will be an increased emphasis on teaching; where does it fit in
all of these changes?
Task 2
In terms of teaching and learning, the group might discuss and identify
the expectations that will be placed upon staff over the next decade.
Before moving into small group discussions, a number of points were made about
what might be components of benchmarking. The following are two of these:
• Academic review is a good example of performance against own experience,
strategy plan or university expectation
• need to be honest, confirmatory, (and) provide useful feedback
Task 3
During the discussions at the August workshop, many comments were made
about benchmarking. Some of these are set out below:
• need adequate data & records for evaluations
• data that exists is used because it exists
• there is a trap of measuring what you can, so then the data is fitted to suit the
purpose
• what tools are required for evaluation - but probably (at Swinburne we) don't
have these at the moment
• students are representatives in decision-making processes
• imitation (by others) is a good indicator of success
• benchmarking implies resources; some expectations are different compared to
what can be provided and what we want to provide
• (there ought to be) external validation
• need to be honest, confirmatory, (and) provide useful feedback
Additionally, the small group looking at a benchmarking regime for teaching and
learning produced the following:
• A hybrid of our own benchmark initiatives and those against other campuses,
eg Hawthorne, including some second guessing of national perspectives
• Strategic initiatives to be documented
• Staff development
• Quality of teaching: student satisfaction surveys of one sort or another, focus
groups
• Got to compare like with like, such as a similar University, eg UWS; perhaps
best if discipline based: moderation, comparison
• Issue is entry level, ie entry scores or other measures
• Retention rates: what are we measuring here? Interpretation needs to be in
context.
• Measure of employability: graduate attributes
• Collaboration and partnerships, eg other universities and industry
• All through the process must have consistent measures
• Appropriate policies and procedures
• Fitness of courses: constant review, second guessing what is happening in
industry and government
• What do you put in place: courses, technology etc
• What is the experience?
• Course advisory committees: more industry etc feedback re our yardsticks,
especially in disciplines
• Evaluating CEQ types of experiences: parallel data, make use of open-ended
questions; software assistance available soon?
• Subject surveys on line?
• Areas of external accreditation to assist with benchmarking, eg psych,
accounting, computing, engineering
• Flexibility
o how to measure this
o Approach to T&L plans
o Objectives
o Systems and resources
• Integration: equal input from all groups, eg library, ITS, Admin, academics etc
• Reward: teaching awards, service awards, student rewards, prizes etc
Task 4
TASKS
[In undertaking these tasks the group may wish to consult further with a number of
the University's external community stakeholders.]
Regarding the relationship of Swinburne to its engagement with the regional
community, it appeared that this issue was a significant one that was not clearly
resolved. In fact, the small group looking at this issue stated: 'There is the issue of
recruitment of local students or becoming a regional university to our regional
students.'
• Provision of education to outer eastern region
• Learning and teaching and appropriateness to our region
• Correct practice: community numbers on CACS
Task 1
Task 2
During the discussions at the August workshop, many comments were made
about benchmarking. These covered issues dealing with data, stakeholders,
consultation processes, connectivity between community objectives and University
programs, aspirational culture, engagement processes, etc. Some of the specific
comments are set out below:
• need adequate data & records for evaluations
• data that exists is used because it exists
• there is a trap of measuring what you can, so then the data is fitted to suit the
purpose
• what tools are required for evaluation - but probably (at Swinburne we) don't
have these at the moment
• students are representatives in decision-making processes
• imitation (by others) is a good indicator of success
• benchmarking implies resources; some expectations are different compared to
what can be provided and what we want to provide
• (there ought to be) external validation
• need to be honest, confirmatory, (and) provide useful feedback