Benchmarking the university: Learning about improvement

A report for the

Department of Education, Science and Training

by

Steve Garlick
Geoff Pryor

Regional Knowledge Works

2004

© Commonwealth of Australia 2004


ISBN: 0 642 77469 2
This work is copyright. Apart from any use as permitted under the Copyright Act
1968, no part may be reproduced by any process without permission from the
Commonwealth, available from the Department of Communications, Information
Technology and the Arts. Requests and inquiries concerning reproduction and rights
should be addressed to Commonwealth Copyright Administration, GPO Box 2154,
Canberra ACT 2601 or e-mail: commonwealth.copyright@dcita.gov.au.
The views expressed in this publication do not necessarily reflect the views of the
Department of Education, Science and Training.

Contents
Page
Acknowledgments........................................................................................................vi
Executive summary....................................................................................................vii
1 Introduction........................................................................................................... 1
1.1 Background to project.................................................................................. 1
1.2 Definitions of benchmarking........................................................................ 1
1.3 Evaluation and trust..................................................................................... 2
1.4 Approach........................................................................................................4
1.5 Report outline................................................................................................ 8

2 Literature review................................................................................................... 9
2.1 Background to university benchmarking................................................... 9
2.1.1 University quality auditing agenda................................................... 11
2.2 Australian university benchmarking.........................................................13
2.2.1 Early university benchmarking exercises..........................................13
2.2.2 University-specific benchmarking.................................................... 14
2.2.3 McKinnon benchmarking manual.....................................................14
2.2.4 Facilities management benchmarking...............................................15
2.2.5 Libraries benchmarking.....................................................................15
2.2.6 Technical services benchmarking......................................................16
2.2.7 Sector-wide benchmarking................................................................17
2.2.8 External relations benchmarking.......................................................17
2.3 Comments.....................................................................................................18

3 Methodology......................................................................................................... 21
3.1 Introduction................................................................................................. 21
3.2 University survey........................................................................................ 21
3.2.1 Background....................................................................................... 21
3.2.2 Use of benchmarking........................................................................ 22
3.3 University case studies................................................................................ 23
3.3.1 University involvement.................................................................... 23
3.3.2 The workshop process.......................................................................24
3.3.2.1 First workshop.......................................................................24
3.3.2.2 Ongoing discussion............................................................... 25
3.3.2.3 Second workshop..................................................25
3.3.2.4 Combined discussion.............................................................25
3.3.3 Case study universities...................................................................... 26
3.3.3.1 University of the Sunshine Coast.......................................... 26
3.3.3.2 Swinburne University of Technology................................... 26
3.3.3.3 Monash University................................................................ 26
3.3.3.4 Curtin University of Technology.......................................... 27
3.3.3.5 Royal Melbourne Institute of Technology............................ 27
3.3.3.6 Griffith University.................................................................27
3.3.4 Evidence from the case studies......................................................... 27

iv

3.3.4.1 What does benchmarking mean?...........................................27
3.3.4.2 What to benchmark against?.................................................28
3.3.4.3 Use of data.............................................................................28
3.3.4.4 Commitment by senior management.....................................28
3.3.4.5 Resources for benchmarking.................................................29
3.3.4.6 Being 'in' or 'of' the organisation.........................................29
3.3.4.7 The long haul.........................................................................30

4 Benchmarking templates for student admission and student complaint
processes................................................................................................................31
4.1 Background..................................................................................................31
4.2 Student complaint processes.......................................................................31
4.2.1 Universities involved.........................................................................31
4.2.2 Issues in student complaint benchmarking........................................31
4.2.3 Benchmarking template.....................................................................32
4.3 Student admission processes.......................................................................34
4.3.1 Universities involved.........................................................................34
4.3.2 Issues in student admission benchmarking....................................... 34
4.3.3 Benchmarking template.................................................................... 35
4.4 Comments.................................................................................................... 37

5 Comments about the McKinnon manual............................................................ 39
5.1 General perceptions.................................................................................... 39
5.2 Specific perceptions.....................................................................................41
5.2.1 Performance versus improvement.....................................................41
5.2.2 Reductionism.....................................................................................42
5.2.3 Language........................................................................................... 42
5.2.4 Leading, learning and lagging benchmarks...................................... 42
5.2.5 Good practice versus better practice................................................. 43
5.2.6 Assessment levels..............................................................................43
5.2.7 Pro-forma approach...........................................................................44
5.2.8 Combined workshop comments........................................................ 44

6 Revising the approach to benchmarking universities: Learning for
improvement.......................................................................................................... 45
6.1 Background..................................................................................................45
6.2 Phases in benchmarking for improvement............................................... 46
6.2.1 Phase 1: Reviewing the current situation and environment.............. 47
6.2.2 Phase 2: Strategic planning............................................................... 48
6.2.3 Phase 3: Implementing actions..........................................................48
6.2.4 Phase 4: Review of progress............................................................. 49
6.2.5 Phase 5: Learning for continuous improvement................................50
6.3 Comments.....................................................................................................51

7 Sector implementation......................................................................................... 53
7.1 Survey evidence........................................................................................... 53
7.2 Workshop evidence..................................................................................... 54

8 Process evaluation................................................................................................ 55

9 Conclusions and recommendations.................................................................... 57
9.1 Conclusions.................................................................................................. 57
9.2 Recommendations....................................................................................... 59

References................................................................................................................... 61
Appendices.................................................................................................................. 65
A.1 University benchmarking survey.....................................................................65
A.2 First workshop program...................................................................................71
A.3 Second workshop program...............................................................................73
A.4 Discussion kit.....................................................................................................75
Tables
Table 1.1   Universities participating in the benchmarking project.........................7
Table 3.1   Characteristics of surveyed universities.................................................21
Table 3.2   University benchmarking usage..............................................................22
Table 4.1   Template for benchmarking student complaints.................................. 34
Table 4.2   Template for benchmarking student admissions.................................. 36
Table 5.1   Improving the effectiveness of the McKinnon et al. manual................41

Figures
Figure 1.1 The emerging 'show me' and 'involve me' world...................................3
Figure 6.1 Phases for an improvement cycle in university functions....................46
Figure 6.2 Reviewing the current situation.............................................................. 47
Figure 6.3 Strategic planning.....................................................................................48
Figure 6.4 Implementing actions...............................................................................49
Figure 6.5 Review of progress...................................................................................50
Figure 6.6 Learning for continuous improvement..................................................50
Figure 7.1 Ways to improve effectiveness of the McKinnon et al.
benchmarking manual.............................................................................53

ACKNOWLEDGMENTS
Considerable thanks are due to the six universities that gave their time to participate in
the project over a period of around four months. In particular we are grateful to the
following people who were a stimulus for the project in each of the six institutions:
Don Maconachie (University of the Sunshine Coast), Anne Langworthy (Swinburne
University of Technology), Adolph de Sousa (Curtin University of Technology), Jill
Dixon (Monash University), Anne Badenhorst (Royal Melbourne Institute of
Technology) and Vicky Pattemore (Griffith University).
We also want to thank the 28 universities that took the time to complete the
benchmarking questionnaire.
The Project Steering Committee provided helpful comments as the project evolved.
The committee comprised: Ms Sian Lewis, Director, Quality Unit, Higher Education
Group, Department of Education, Science and Training (DEST); Mr Ian Hawke,
Director, Office of Higher Education, Queensland Government, representing the Joint
Committee for Higher Education; Professor John Dearn, Pro Vice-Chancellor
(Academic), University of Canberra, representing the Higher Education Research and
Development Society of Australasia; Professor Adrian Lee, Pro Vice-Chancellor
(Education and Quality Improvement), University of New South Wales; Mr Conor
King, Director Policy, Australian Vice-Chancellors' Committee; and Ms Rhonda
Henry, Branch Manager, Educational Standards Branch, Australian Education
International (AEI) Group, DEST.
Particular thanks are due to Dr Claire Atkinson, Assistant Director, Quality Unit,
DEST, who managed the project for the department and ensured everything ran
smoothly.

Dr Steve Garlick
2004

EXECUTIVE SUMMARY
This project began with two objectives:
1. to add new elements, student admission processes and student complaint
processes, to Benchmarking: A manual for Australian universities by McKinnon,
Walker and Davis (1999)
2. to review the use of benchmarking generally among universities and suggest how
benchmarking might be made a more effective tool in the light of the pressures
and changes impacting on the sector.

Two approaches were used to examine and report on the two objectives. First, a
survey of the use of the McKinnon et al. manual and of benchmarking generally in
Australian universities, and second, an in-depth workshop and discussion program in
six case study universities undertaken over a four-month period.
Seventy percent of all universities responded to the benchmarking survey;
respondents included metropolitan and non-metropolitan, small and large, and new
and old institutions across all states. The six universities taking part in the workshops also
included representation from metropolitan and non-metropolitan, old and new, and
large and small institutions across three states.
While benchmarking has been increasingly used in Australian universities over the
last decade, a review of literature in Chapter 2 suggests this use has, in the main, been
superficial and peripheral. It has generally not been used for organisational
improvement in core business areas. Its development has not kept pace with changes
in recent years in performance reporting and quality auditing, with which it sometimes
gets confused. Most of the benchmarking use in universities has been for functions
such as libraries, facilities management and technical support. Apart from its use in
these areas, the McKinnon manual has been used mainly as a resource for ideas for
management.
Six universities were asked to be part of the project's case studies because they had
functional areas they were keen to improve. Apart from student admission and student
complaint processes, these functional areas included their relations with their local
and regional communities, teaching and learning strategies, research services, and
examinations and assessments.
Each of the six universities separately took part in a four-stage process of facilitated
workshops and discussion over a four-month period. The first series of workshops,
attended by an average of 20 to 25 staff and other stakeholders, including from
outside the university, was designed to share understandings about the environment of
change in universities, as well as definitions and concepts of benchmarking. It also
highlighted issues, drivers and impediments and directions for improvement in the
targeted areas. A discussion kit, based on the outcomes of the first workshop and
tailored to each institution, was used by smaller groups formed from participants
attending the first workshop. Over a four to six week period these institution-specific
working groups gathered information, addressed the issues raised in the initial
workshop, and made specific recommendations for benchmarking in the targeted
functional area. Each of the working groups presented their finding in a second series

viii

of workshops in each university, attended by around 15 to 20 participants, with the


objective of agreeing to specific directions for improvement in the function. The last
stage was a combined final discussion with representatives of the six case studies to
identify common themes and issues across the sector in relation to benchmarking for
improvement.
A practical examination of the usefulness of the McKinnon et al. manual was part of
the workshops, as was identifying the elements that would make up a program of
benchmarking for improvement that would be effective in the university situation
across functional areas.
The project found that the McKinnon et al. manual was not seen positively as a
benchmarking tool that could assist universities with their improvement agenda. It
was seen as having confusing language and concepts, as being a 'one size fits all',
top-down approach, and as anathema to 'learning for improvement' organisations. It
was also seen to contribute further to the existing uncertainty and suspicion with
which evaluation and auditing processes generally are regarded in universities.
It is argued that organisational improvement is a more personal process for staff and
stakeholders than that fostered by a template-based manual of the type put forward by
McKinnon et al. It needs to incorporate collaboration and connectivity across
stakeholder interests, learning and knowledge exchange, and leadership commitment.
And it needs to be simple. We therefore do not recommend updating the approach
to university benchmarking advocated by McKinnon et al. in the current manual. It
does not offer a solution to benchmarking in an increasingly 'involve me' evaluation
world.
To fulfil the requirements of the project, however, McKinnon-like benchmarking
templates for student admissions and student complaints are presented in the report.
These templates were framed on the basis of input received during the case study
workshops and discussions.
As an alternative, this report also proposes a five-phase approach to organisational
improvement for universities. These phases involve:
1. reviewing the current environment impacting on the area where improvement is
being sought
2. agreeing on a strategic plan to implement initiatives and on a performance
assessment regime
3. being committed to implementation
4. reviewing progress
5. learning for continuous improvement.
The report outlines inputs and objectives for each of these phases. While we provide a
generic approach to university organisation improvement, we suggest that institutions
and their stakeholders, with some outside assistance, design their own program for
improvement around the approach.

Our feeling is that a simple approach to organisation improvement along these lines,
built on principles of dialogue, wide collaboration, reflection, leadership commitment
and learning, has a better chance of encouraging an effective approach to
improvement benchmarking. Such approaches could also help reinforce the objectives
for quality improvement currently being sought by the Australian Universities Quality
Agency in their cyclical process of auditing universities.
We recommend that these principles, the report's recommendations, and the
facilitative discussion approaches and tools used in this project be further discussed
with relevant stakeholders and reviewed over a longer period. The period for this
project was too short to fully appreciate their effectiveness as a means for assisting
universities to progress their improvement objectives. Early evidence, however,
suggests that with further development they may make a useful contribution in this
area.

1. INTRODUCTION

1.1 Background to project

This project is part of a review being undertaken by the Department of Education,
Science and Training (DEST) into the effectiveness of the McKinnon, Walker and
Davis (1999) publication, Benchmarking: A manual for Australian universities, as a
tool for university improvement.
The recent higher education reforms, announced by Minister Nelson (2003) in the
package, Our universities: Backing Australia's future, emphasised the need for
universities to generally improve their operations, function with a more diversified
funding base, build their uniqueness and increase their responsiveness to national
equity issues and educational quality.
In the context of this federal government agenda for improvement, the project had two
aims:
1. to add specific elements to the McKinnon et al. manual dealing with university
complaints and admissions procedures for students, which were not addressed in
the original publication
2. to explore prospects for enhancing the usefulness of the manual, and
benchmarking generally, as a tool for university improvement.
In initiating the project, DEST was concerned that universities have access to a
method that would not only help them assess their own performance but, importantly,
improve their practice in the light of the changing education environment, diversity
among universities, and university benchmarking experiences. This report suggests an
approach to meet this second objective.
The proposals produced in response to the second aim should assist DEST decide
whether to undertake a more extensive stage two update of the manual, or approach
the issue of improvement in university practice in a different way.
A copy of the McKinnon et al. manual can be seen at:
www.dest.gov.au/archive/highered/otherpub/bench.pdf.
Since its publication, the manual has been a stimulus for Australian universities
exploring benchmarking in their own institutions in various ways. However, also
since its publication, there have been changes in Australia and internationally that
make it necessary to consider the currency of the document and the application of
benchmarking generally as a tool to assist operational improvement in the sector. A
revision of the manual and the implementation of benchmarking initiatives were
therefore seen by DEST as important.
1.2 Definitions of benchmarking

Being clear about what is meant by benchmarking is an important first step in the
university evaluation process. As Massaro (1998) has stated, it has become a loose
term surrounded by considerable jargon, covering a multitude of sins:

qualitative comparisons, statistical comparisons with some qualitative
assessment of what the statistics mean, and the simple generation of statistical
data from a variety of sources which are then published as tables with no attempt
at interpretation. (p. 33)

It is this rather superficial 'tick-a-box' template approach based around performance
indicators that seems to be what many in the university sector perceive benchmarking
to be.
As Fielden (1997) notes, higher education benchmarking tends to be confused with
'collecting statistics or performance indicators and complaining about the poor cost
benefit of data collection exercises' (p. 1).
For this project we were attracted to the following definition of benchmarking
provided by Jackson and Lund (2000):
Benchmarking is, first and foremost, a learning process structured so as to enable
those engaging in the process to compare their services/activities/products in
order to identify their comparative strengths and weaknesses as a basis for
self-improvement and/or self-regulation. (p. 6)
and
Some benchmarking practitioners would argue that one of the defining
characteristics of benchmarking is that it is a collaborative process [involving]
the active participation of two or more organisations or organisational units in a
formal structured process that facilitates comparison of agreed practices,
processes and performance. (p. 7)

Our positive response to the above definition stems from a perspective that
universities are institutions of learning and thus ought also to be learning institutions.
This benchmarking definition by Jackson and Lund is fundamentally about learning
and about how improvement can be achieved through collaboration and active on-the-ground participation. Indeed, it is the mutually reinforcing force created by learning
and collaboration (more usefully defined as building connectivity) that underpins our
approach to improvement in this report. This definition also attracts us because we
view it as contrasting with the template approach of the McKinnon et al. manual,
which we believe restricts opportunities for improvement as it does not explicitly
foster collaboration and learning.
1.3 Evaluation and trust

The complexity of globalisation, shifts in power relationships, a sense among many
people of lost control over their lives, and the growing incidence of significant
environmental, cultural, business, community, social and other disasters
have reduced trust in institutions and increased the call for greater accountability,
independent auditing, and transparency in decision making. The result has been, over
time, a shift from a 'trust me' attitude by institutions about their operations, to firstly
a 'tell me' attitude by society, and then a 'show me' world (Delfgaauw 2000), as
Figure 1.1 shows.
The flow-on from this situation has been an increasing demand for evaluative
mechanisms that demonstrate societal standards are being maintained and improved
through the practices of institutions, business and governments. This circumstance is
leading to a so-called 'evaluative state'. At their best these emerging mechanisms
reflect new business models and new governance models that embed connections with
a wide range of stakeholders.
A range of auditing benchmarks and indicators is now being widely used, particularly
in the business sector, and increasingly being adopted in the university sector to
account for performance and to promote improvement (Jackson and Lund 2000, p. 3).
However, the risk now being recognised is the extent to which these measures simply
cover up performance by responding to the letter, and not the intent, of reporting
requirements through the use of top-down driven indicators rather than through
involvement, reflective dialogue and learning. Recent high-profile private sector
performance disasters with huge public implications (e.g. in energy, finance,
insurance, and health industries) are a real-life portrayal of this approach to
evaluation.
More important than simply reporting against external requirements, however, is the
extent to which there are real processes of improvement as an intrinsic part of the
evaluation process and the extent to which involvement is encouraged across a wide
spectrum of stakeholders with the knowledge, experience and skills to add to creative
and enterprising outcomes. The latest call from the wider community suggests an
'involve me' world (Figure 1.1) where all stakeholders with an interest in the outcome
are directly involved in the process of evaluation, not simply to know, but to
improve.
Figure 1.1: The emerging 'show me' and 'involve me' world (adapted from Delfgaauw 2000)

[Figure 1.1 plots trust (vertical axis, high to low) against transparency (horizontal
axis, low to high), locating the 'trust me', 'tell me', 'show me' and 'involve me'
attitudes. As trust diminishes, the demand for transparency in the form of assurance
mechanisms increases.]

Receiving significant sums of public funding and having a key role in the
development of a nation's future human capital resource, as well as regional
community, individual and business outcomes, universities have a considerable
societal obligation to perform, domestically and internationally, in ways that deliver
value for money. Universities are therefore not isolated from an 'involve me' world.
As the global and knowledge worlds have become intertwined, the university has the
potential to be a new unifying space that can contribute more to the community, the
professions, society and culture. Demands on the university from these areas for more
involvement are increasing.
The last eight to ten years have seen a growth in performance benchmarking in the
higher education environment as governments have sought increased quality in
teaching and learning, greater industry applicability in research, greater efficiency in
institutional operation, and greater prudential responsibility for the public funds
provided. Consumers (domestic and international students, employers seeking
to improve their outputs and, more recently, local and regional communities) are also
expecting more from their universities. Many stakeholders are becoming less
interested in how their institution rates against others than in the increasing returns
benchmarking can generate for them over time and how they can participate in the
process to ensure this takes place.
The higher education package, reflecting these views, stated:
Institutions need to be given maximum opportunity, consistent with public
accountability and social responsibility, to develop innovative responses to
rapidly changing environments in teaching and learning, in the direction and
commercialisation of research, and engagement with industry, research
institutions and other education providers. (Nelson 2003, p. 10)

In addition to these expectations from outside, universities, faced with increasing
competition and reduced public sector funding, are increasingly seeing the need to lift
their performance through management improvement and are seeing benchmarking as
an additional management tool to assist them in this.
This project and report are about generating better outcomes through an approach to
evaluation that is predicated on involvement and not just reporting.
1.4 Approach
The approach to this project has two parts.
The first part was a general survey of 39 universities in the Australian higher
education system. The survey was targeted at senior management and explored the
nature and extent of each university's use of the McKinnon et al. manual and its
experience with benchmarking generally. Twenty-eight responses (70%) were
received from the 39 universities asked to complete the survey.
In keeping with an increasing 'involve me' approach to evaluation and improvement,
the second part of the approach adopted in this project sought to get closer to the
factors that result in improvement in the university in a range of different functional
areas through processes of collaboration, dialogue, learning and knowledge exchange.


An in-depth examination was undertaken of how benchmarking might best assist six
different university situations across a number of specific areas of self-identified
operational concern, including student admission processes and student complaint
processes.
Each of the six case study universities, and their stakeholders both inside and outside
the institution, participated in a four-stage process of discussion and knowledge
exchange. The four stages were:
1. an initial facilitated workshop over three to four hours, designed to:
   - share understandings about the current pressures on the university operating
     environment, and the definitions, issues and directions for university
     benchmarking generally in the light of this
   - review the McKinnon manual approach for its effectiveness as a tool for the
     university in fostering improvement
   - make specific recommendations relating to benchmarking student admission
     and student complaint functions, as well as other areas of priority for the
     university, and how a set of benchmarks might look in these areas.
2. a four- to six-week period of self-initiated discussion and information collection
within each university to further explore issues associated with benchmarking and
organisation improvement and to identify any potential impediments to its
effective application
A 'discussion kit' was specifically designed to assist with this phase of the
project. The discussion kit was framed around the university benchmarking
issues identified in the first workshop, as well as issues evident in international
benchmarking literature and raised at the benchmarking workshop run by the
authors at the Australian Universities Quality Forum in Melbourne in 2003
(refer: http://www.auqa.edu.au/auqf/2003/program/2c_workshop.htm).
3. a second workshop to agree on an almost completed benchmarking program in
each of the targeted functional areas that could be taken forward as an
improvement program within the university
This included identifying the opportunities to be pursued and the impediments to be
addressed to give effect to a program of implementation.
4. a final workshop, held in the form of a teleconference, comprising representatives
from the six participant universities and designed to identify common issues
across the sector relating to university benchmarking issues and implementation
processes
The workshop also gave the opportunity to test some of the conclusions drawn
from the project.
The results of this process were also discussed with the Project Steering Committee
whose role was to provide comments on the project as it evolved.
The six universities participating in the project were:

- Curtin University of Technology
- Griffith University
- Monash University
- Royal Melbourne Institute of Technology
- Swinburne University of Technology
- University of the Sunshine Coast.

The functional areas identified by the six universities in examining benchmarking in
the project included:
- student admission processes
- student complaint processes
- community engagement and regional development
- teaching and learning
- student assessments and examinations
- research services.

Table 1.1 provides detail on the areas explored by each university in the
benchmarking project and Chapter 3 provides details on the methodology used.
The six universities volunteered their involvement following a general invitation
provided to all universities to take part. The six universities provided coverage across
three states and included a large university, small university, non-metropolitan
university, technical university, and a semi-autonomous campus in a distributed
campus network.
Benchmarking templates for student admission and student complaint processes
following the McKinnon et al. framework, as required by the project brief, are
described in Chapter 4. Because the McKinnon et al. approach was found wanting as
a benchmarking tool that can assist the university with its improvement goals
(Chapter 5), we have gone much further and proposed an alternative approach based
on principles of collaboration, dialogue and learning to ensure consistency with an
'involve me' evaluation framework (Chapter 6).

Table 1.1: Universities participating in the benchmarking project

Griffith University
The focus for the project at this university was to identify benchmarking processes,
good practice definitions and current performance assessment in the provision of
research office support to faculties and senior executive in relation to research
policy, grants, statistics, publications and senior officer advice.

Monash University
At this university the particular focus for the benchmarking project was on student
admission and student complaint processes, given the size of the student population,
the high proportion of international students, and the multi-campus nature of the
university.

Royal Melbourne Institute of Technology
Particular attention at this university was on the relationships the university is
fostering with its various regional and local communities in which it has a presence
throughout Victoria, through the Centre for Community & Regional Partnerships.
The centre is in the process of developing performance-based indicators for these
relationships, and so the present project was seen as having the potential to further
enhance the development of these indicators.

Swinburne University of Technology
There were three areas of focus for the project at this university:
- an investigation into how benchmarking could be used as a tool to assist the
  university position itself in relation to the learning and teaching agenda
  highlighted in the higher education reform package, particularly focusing on
  monitoring performance, identifying good practice in teaching and learning, and
  the dissemination of outcomes
- a specific examination of how the university engaged with its community through
  its 'learning town' strategy
- admission and complaint processes, and how a set of benchmarks might look for
  this function for a metropolitan university with a decentralised semi-autonomous
  campus administration.

University of the Sunshine Coast
There were three areas of interest for the project at this university:
- student admission and complaint processes from the perspective of a smaller and
  newer university
- the university's regional development objectives (the university saw its
  connections in this area as being particularly important for its future)
- positioning the university in the context of the learning and teaching agenda
  outlined in the higher education reform package.

Curtin University of Technology
There were two areas of focus for the project at this university:
- approaches to more efficiently handle complaints and grievances raised by
  domestic and international students
- approaches that will enhance consistency and timeliness in policy, procedures and
  practices relating to student examination and evaluation.

1.5 Report outline

Chapter 2 discusses issues identified in the literature associated with the design and
application of benchmarking in the university environment in Australia and
internationally. A number of examples of the application of benchmarking in
Australian universities are provided.
Chapter 3 details the methodology used in the project. It describes the survey of
universities that was undertaken and its findings. It also describes the workshop and
discussion processes that were undertaken in the six case studies in the project and
discusses some of the generic issues that came from them.
Chapter 4 presents additional templates, following the McKinnon et al. format, for
student admission processes and student complaint processes as required by the
original project brief. The templates draw on the information from the six case study
universities.
Chapter 5 discusses the McKinnon et al. benchmarking manual and the
reasons it has not been extensively adopted in the Australian university
scene. The section draws on the survey of universities and the workshops
held in the six university case studies.
Chapter 6 presents a five-phase approach to benchmarking for improvement
as an alternative to that provided by McKinnon et al. It discusses this in the
context of the issues that arose in examining the six functional areas of
university activity through the workshop process.
Chapter 7 suggests an implementation initiative to encourage greater uptake
of the proposed approach to university benchmarking across the Australian
system of higher education.
Chapter 8 reports on the feedback received on the workshop and discussion
process that was put in place for the project in the six universities.
Chapter 9 provides the conclusions and recommendations of the project.

2. LITERATURE REVIEW

This chapter reviews the literature on the environment in which universities are
attempting to apply evaluation management tools like benchmarking, and on the way
they are using it.
2.1 Background to university benchmarking

Benchmarking as a tool for learning about best or good practice has been building
momentum in Australian business over the last decade and a half. There are some
useful web sites that aim to assist business incorporate these concepts into their daily
practice (e.g. the Benchmarking in Australia site at http://www.ozemail.com.au/benchmark/ and the Australian Quality Council site at http://benchnet.com/aqc/).
Concerted benchmarking initiatives have been taken up less enthusiastically by
Australian higher education institutions, and our literature review reveals little
evidence of any university benchmarking being reported before the mid-1990s.
Mostly, benchmarking has been of interest to larger universities wanting to
assess and compare their administrative functions as part of their membership of
international benchmarking consortia. Of particular focus have been functional areas
in universities such as libraries, facilities management (buildings, grounds etc.) and,
more recently but to a lesser extent, technical services (e.g. laboratories, studios)
where access to physical objects rather than human capital development has been the
main focus. In the last two years there has also been a growing interest in applying
benchmarks to the relationship a university has with its regional community.
The practice of benchmarking may have two objectives: first, as a means for
assessing the quality and cost performance of an organisation's practices and
processes in the context of industry-wide or function-specific best practice
comparisons. This has generally been used as part of an organisation's accountability
responsibility to an accrediting, funding or regulatory authority.
Second, and more fundamentally, benchmarking can be used as an ongoing diagnostic
management tool focused on learning, collaboration and leadership to achieve
continuous improvement in the organisation over time. This second objective is one of
the primary concerns discussed in this report.
Adopted from the world of business, benchmarking, as a tool for management
improvement in universities, has struggled to gain standing. Where it has been taken
up it has rarely gone beyond quantitative assessment of where an organisation sits
with respect to its competitors based around agreed indicators. From our review of the
literature it appears that few universities have chosen to adopt benchmarking as a
longer-term management tool with a core role for continuous improvement and
human resource development.
A number of factors may be behind this relatively slow adoption of benchmarking for
improvement by universities. Some of these issues have been highlighted in the
literature, and in the university survey, the case study workshops and the discussions
we undertook for this project.

The first of these factors is the complex nature of change impacting on universities,
creating an environment that makes it difficult to have a point of reference from
which to embark on a concerted program of reform. At one level is the ongoing push
and pull effect between the government, market and academia (Clark 1983), while at
another level there is a range of economic, social, demographic, cultural,
technological and spatial pressures.
As Clark (1998) has stated:
The universities of the world have entered a time of disquieting turmoil that has
no end in sight. As the difficulties of universities mounted across the globe
during the last quarter of the twentieth century, higher education lost whatever
steady state it may have once possessed. (p. xiii)

The benchmarking workshops in the six case study universities identified a number of
international and national trends and changes impacting on the higher education
sector that influence their operational environment and the directions that
benchmarking might take. For the most part, these issues and trends are consistent
with those identified in detail in other research reviewing higher education (see Clark
1995; Trow 2000; Coaldrake and Stedman 1998; Cunningham et al. 2000; Anderson
et al. 2002) and are not elaborated on in this project report. To give a context for the
remainder of the report they are simply summarised in this section as follows:

- Demographic shifts and changes, labour market casualisation, and changes in
  student expectations are causing universities to be more responsive in course
  design and delivery, and external study arrangements. There are changes in
  location, gender, age cohort, cultural mix, the domestic and international student
  mix, and full-time and part-time balance to which university course management
  is responding.
- Increasing competition from other learning program providers from the public and
  private sectors has tended to build a resistance to sharing experiences, practices
  and data within the sector.
- Increased collaboration with outside groups in response to resource shortages has
  led to partnerships with business, other education providers and, to a lesser extent,
  with local communities.
- University management has been corporatised through marketing, organisation
  design, income generation strategies, and governance.
- There has been increased use of information and communication technology for
  such activities as student admission, library borrowing and courseware access.
- Teaching has been subject to increased credentialism.

The report by Martin (2003), reflecting on the outcomes of the Australian Universities
Quality Agency auditing process in 2002, notes the destabilising effect of
continuing, fundamental change in higher education and an inability of many
institutions to 'close the loop' between strategic plans, their implementation, review
and improvement.
The second factor restricting a full appreciation of the role benchmarking might play
in the university environment is that the current auditing and quality assurance agenda
is being seen by some universities not as an opportunity to improve what they do
through reflection and learning but as an accountability stick that prompts them to put
the best gloss on measured results for fear of any subsequent impact on reputation. As
a result, in this environment, benchmarking tends to be imposed as a short-term
quantitative exercise from the top down for a quick one-off assessment of
circumstances.
Third, there appears to be no consistency of approach or clarity of purpose in method
and process in the various applications of benchmarking among universities to meet
identified objectives, thereby making comparison in an already diverse sector more
difficult. To some extent the McKinnon et al. benchmarking manual was
designed to address this specific concern of consistency. However, in so doing the
manual has provided a 'one size fits all' approach rather than an approach that
recognises diversity in life cycle, location, governance, size and other variables in and
between universities. This issue is discussed in more detail in Chapter 5.
Fourth, there has been little institutional leadership commitment at a senior level in
individual universities to ensure that the good practice learning that flows out of any
effective benchmarking process becomes embedded in the way the organisation does
its business (CHEMS 1998). While benchmarking has been borrowed from the
corporate sector, universities do not have the same degree of command and control to
ensure implementation of the traditional top-down approach to benchmarking of the
corporate sector. Universities need to construct their own approach to the design and
implementation of a management improvement regime like benchmarking.
Fifth, just as there is considerable diversity between universities, there is also
considerable diversity within university functions. While some activities
(e.g. libraries, facilities management, technical support and administration) have been
more amenable to benchmarking assessment because of a focus on physical objects,
the core business of teaching and research functions of universities and their external
relations have been less responsive to date to the current benchmarking agenda.
2.1.1 University quality auditing agenda

The requirements for universities to assure their customers, their accrediting,
regulating and funding agencies, and other stakeholders that they are designing and
delivering good-quality programs and are operating efficiently and equitably have
increased over recent years.
In a 1991 policy statement, Higher education: Quality and diversity in the 1990s, the
Government announced a comprehensive set of measures to enhance the quality of
higher education teaching and research. A major initiative was the provision of
additional funding to those universities that could demonstrate a high level of quality
assurance.
In 1992 the Committee for Quality Assurance in Higher Education was established as
a non-statutory ministerial advisory body by the Australian Government to assist in
the implementation of the Quality Assurance Program for the period 1993 to 1995,
including allocating the program funding. Quality review teams visited universities
that had agreed to participate in the programs. Assessment was based on portfolios
prepared by the institutions with a focus on the whole institution rather than on
particular disciplines or faculties. The government was then advised on how to
allocate the Quality Assurance Program funds. Institutions were ranked in a number
of hierarchical bands, and allocations associated with these bands were on a sliding
scale so that all universities received some of the additional funding.
After three years the Committee for Quality Assurance in Higher Education was
disbanded (Anderson et al. 2000), and under subsequent arrangements universities
made their own annual submissions to the DEST based on their quality assurance and
improvement plans for the forthcoming triennium through an annual profiles process.
The plans outlined the institutions' goals and strategies, and the indicators they used
to monitor progress in achieving those quality goals. After a few years the plans were
discontinued because each institution approached its quality responsibilities
differently and because of national and international demands from customers for a
more rigorous and transparent quality assurance system.
Good reviews of the early period of quality assurance for universities are provided in
Anderson et al. (2000) and Harman and Meek (2000).
To address issues associated with university quality, and a process of independent
review, the Australian Universities Quality Agency (AUQA) was established in 2000.
AUQA is owned by the federal, state and territory ministers for higher education and
operates independently of governments and the higher education sector under a board
of directors.
The mission of AUQA is to undertake periodic audits of higher education institutions
and to report on the relative standards of the Australian higher education system and
its quality assurance processes, including their international standing, as a result of
information obtained during the audit process. AUQA has now undertaken a series of
university quality audits (refer: http://www.auqa.edu.au) and maintains a good
practice database. By participating in this audit process, as indicated in the university
survey for this project, some universities are seeing an increasing role for
benchmarking to assist them with their quality improvement preparation.
In reviewing the eight AUQA audit reports undertaken in 2002, Martin (2003) says
that universities' councils were urged by audit panels to establish appropriate
performance indicators and benchmarks and to ensure systematic monitoring of
performance. According to AUQA, a lack of appropriate 'hard data' quantitative
measures against which to judge the institution's performance has been a limiting
factor in increasing benchmarking activity (Martin 2003, pp. 13-14). From the audit
reports, Martin states:
Outside the research portfolio and some administrative and support areas,
benchmarking was found to be uniformly weak, and reference to external
comparators inconsistent. Five institutions were urged to identify appropriate
national and international benchmarking partners and others were cautioned
about the need for hard data to substantiate claims about national pre-eminence
or world-class performance. (p. 14)

Martin concludes that the real usefulness of the AUQA audit process and report is
'the extent to which it is used to focus intra-institutional conversation about the
range of issues it covers and about quality improvement more generally' (p. 31).
While we agree with the views expressed in this conclusion by Martin about the
AUQA process and reports, and the need for all stakeholders to see improved
outcomes, we are not convinced that the audit process, by itself, fully leads the
institution to this outcome. We are also not convinced that the uptake of
benchmarking in universities is held back only by 'hard data' problems. The problem
is more significant than this, and an 'audit for accountability' culture does not
automatically lead on to a 'collaboration and learning for improvement' culture. It is
this distinction that needs more attention.
2.2 Australian university benchmarking

2.2.1 Early university benchmarking exercises

An early benchmarking exercise in Australian universities is that reported by Weeks
(2000). The project was undertaken by the Teaching and Learning Development Unit
at Queensland University of Technology in 1995 to compare the ways in which
university teachers undertaking the Graduate Certificate in Education are prepared for
teaching. Comparisons were made with other universities in Australia, the United
Kingdom and the United States. Weeks concludes that when used as a process for
generating ongoing improvement, as opposed to being an indicator of
competitiveness, benchmarking can be effective in influencing culture and practice
within the university through internal learning, and in building networks with
professional colleagues in related fields.
Several Australian universities in the late 1990s began to take part in an international
benchmarking consortium under the auspices of the National Association of College
and University Business Officers (NACUBO). NACUBO benchmarking (see
http://www.nacubo.org/website/benchmarking) began in 1992 in the United States
and focuses primarily on benchmarking comparisons on the administrative, statistical
and financial functions of universities. A review of this process shows that issues of
concern associated with this approach to benchmarking have been the lack of
appreciation of university diversity, the lack of team building, and its relatively
narrow focus on United States processes. Comparing costs across countries was
particularly problematic in the approach taken in the NACUBO project. Few
non-United States members now remain with the project (Fielden 1997). Nevertheless,
there are still about eight Australian universities with some part in the NACUBO
project, although the number of universities involved fluctuates from year to year.
A number of Australian universities have also become part of the Commonwealth
Higher Education Management Service (CHEMS) Club. CHEMS is the management
consultancy service of the Association of Commonwealth Universities. CHEMS
launched its international University Management Benchmarking Club for
universities in 1996. As distinct from the NACUBO arrangement, the CHEMS club
enables participating universities to compare their management practices and
processes (e.g. strategy, policy, human resources, student support, external relations,
and research management) against a range of comparable institutions (CHEMS 1998).

Through the CHEMS process, university participants choose the area of their
operation that they want to have benchmarked.
CHEMS has two core functions. The first of these is to identify and publish good
practice information about management in member universities, and the second is to
help members undertake benchmarking activities in their own institutions (Fielden
and Carr 2000).
A similar club to that of CHEMS exists for English universities. Called the English
Universities Benchmarking Club, with funding from the Higher Education Funding
Council for England, it aims to support the ongoing benchmarking activities of its
members until they are self-sufficient in pursuing this activity (refer:
http://www.eubc.bham.ac.uk/action.htm).
In 1996 Ernst and Young completed a student administration benchmarking study in
Australia (Massaro 1998) to identify best practice approaches across seven
universities in relation to examinations, enrolments and graduation procedures. The
study, while providing some useful guidance for university administration on the
matter of standards, seems not to have resulted in the introduction of continuous
measurement and improvement in this area.
Universitas 21 is a small membership network of broadly similar universities around the world that exchanges experiences and practices in the delivery of teaching to undergraduate and graduate students. Established in 1997, the network has recently begun benchmarking practices in management, research, and teaching and learning to encourage the uptake of better practices among its members. The organisation recognised the difficulty of making performance comparisons between institutions across national systems and for this reason focuses on exchanging the learning from members' respective experiences rather than the results themselves (Universitas 21 1999).
2.2.2 University-specific benchmarking

A number of Australian universities have had experience with benchmarking by employing an outside management audit consultant to review particular aspects of their operations in the context of making organisational change. In some cases comparisons are made with overseas examples for which data is available. A search of university web pages, and the university benchmarking survey undertaken for this project, suggest that several universities have employed management consultants to assist them with this.
2.2.3 The McKinnon benchmarking manual

In Australia, Benchmarking: A manual for Australian universities by McKinnon, Walker and Davis (1999) was designed 'to identify the most important aspects of contemporary university life in changing times and to find ways of benchmarking them' (p. 1). The manual identifies 67 benchmarks in the following nine areas of university activity: governance, planning and management; external relationships; financial and physical infrastructure; learning and teaching; student support; research; library and information services; internationalisation; and staffing. The manual identifies good practice performance descriptions and sets out an approach to assessing achievement in outcomes (lagging), process (drivers) and rates of change (learning) as a balanced scorecard approach to measuring university activity.
The McKinnon et al. manual was an attempt to address what was seen as the piecemeal nature of benchmarking in the Australian university sector through a more consistent approach. The manual, along with the Australian Government's increased policy commitment to quality auditing, has stimulated increased interest among universities in pursuing benchmarking, either in a whole-of-organisation sense or in a function-specific sense. Its attempt at bringing consistency, however, resulted in a tick-a-box template approach that views university benchmarking purely as an assessment exercise from the outside in, rather than an approach based on fundamental improvement from the inside out. Chapter 5 deals with these issues in detail.
2.2.4 Facilities management benchmarking

The Australasian Association of Higher Education Facilities Officers has undertaken benchmarking surveys of university facilities and services for a number of years, covering such areas as floor space, asset replacement value, maintenance costs, energy consumption and costs, water consumption and costs, cleaning costs, security, parking and telephones (http://www.aappa.com). However, the data collected is more along the lines of performance indicators than benchmarking that can influence changes in human resource performance (Massaro 1998).
2.2.5 Libraries benchmarking

Australian university libraries have led the way in the application of higher education benchmarking in Australia (see Robertson and Trahn 1997). This interest has stemmed mainly from the need to achieve better outcomes in university libraries in the face of reduced resources, increased use of information technology, and increased demand from an expanding university sector.
To achieve continuous improvement in the delivery of its research information services, the Northern Territory University (now Charles Darwin University) compared its library acquisitions, cataloguing and information services with those of eight other Australian libraries, as well as with library practice in the United States (Massaro 1998; see http://www.ntu.edu.au/library/bench1.htm).
In 1995, the library of the Queensland University of Technology carried out a benchmarking project in comparison with the library at the University of New South Wales (Robertson and Trahn 1997). The project was part of a university-wide benchmarking exercise carried out by each faculty and division. The objective was to improve processes within the university and to pilot, and increase awareness of, benchmarking techniques. The project led to changes in library practice in relation to throughput time, document delivery and general research support. More importantly, it led to a culture among staff of reflecting on the various functions of the library with a view to improving operations. Robertson and Trahn (1997) note in particular the need for benchmarking exercises to be part of broader goals of quality management and organisational improvement and, importantly, to encourage broad-based participation.


The Council of Australian University Librarians has developed a range of key
performance indicators, kits, manuals and software for its members covering such
areas as general library use, collection quality, catalogue quality, collection
availability, reference service and user satisfaction. The council is currently seeking
data from its members to continue the comparative analyses it has been conducting
for a number of years (see: http://www.caul.edu.au/best-practice/).
Wilson, Pitman and Trahn (2000) in Guidelines for the application of best practice in Australian university libraries: Intranational and international benchmarks, and Wilson and Pitman (2000) in Best practice handbook for Australian university libraries, identify a number of exemplars of best practice in library performance in Australia and overseas, and ways of enhancing current library performance indicators. Wilson et al. also evaluate currently available methods for library benchmarking, performance indicators and quality improvement approaches in Australian and New Zealand university libraries. The authors argue the need for 'a manual which Australian academic libraries can use to assist them in implementing best practice initiatives' (Wilson et al. 2000, p. 123). The manual by Wilson and Pitman provides many useful references and strategies for university libraries pursuing benchmarking for both assessment and improvement purposes.
2.2.6 Technical services benchmarking

The absence of any benchmarks for university technical support services (such as laboratories and studios) in the McKinnon et al. manual prompted the Office of Technical Services (OTS) at Griffith University to design its own (Urquhart, Ellis and Woods 2002). The basic template presented by McKinnon was accepted as the platform for the project, and new benchmarks were defined through in-house staff brainstorming. While there was some peer review, the OTS advocates greater third-party input from upstream and downstream suppliers and users of its service. Benchmarks in learning and teaching, research, equipment, workplace health and safety, and the work environment are defined and rated against a defined best practice identified using staff input, existing university student and employer surveys, and regular user surveys.
The Griffith OTS approach to benchmarking technical services is fundamentally bottom-up and based on consensus, with work groups thereby gaining ownership. Through this process, changes in attitudes and behaviour among staff are envisaged to occur, along with a commitment to achieving improvement. Importantly, staff anxiety about organisational change is reduced through their close involvement and a constant feedback process. Benchmarks are also ratified and revised by clients of the OTS, and there has been peer input from other universities.
In a recent publication, A report on the implementation of technical services benchmarks project (Office of Technical Services 2003), the OTS team at Griffith University reported on the results of the implementation of their benchmarking program:

The implementation of the technical services benchmarks was achieved by conducting a whole of element benchmarking exercise that incorporated a team-based approach to self-assessment. Work teams rated themselves against sixteen benchmarks and associated benchmark statements across a comprehensive range of technical support activities. (p. 3)

and

A key outcome of the project was in the area of team building and improved working relationships. (p. 3)

Importantly, the OTS report identifies that implementation ratings were highest in those areas where OTS staff had greatest control over the benchmarking process.
2.2.7 Sector-wide benchmarking

A 1998 report for the National Board of Employment, Education and Training proposed a set of sector-wide indicators to measure quality improvement that could be used for international comparison. While not strictly a benchmarking analysis, the project goes through many of the required steps of ascertaining indicators relevant to arriving at a quality assessment for a sector-wide approach. The analysis, however, focuses heavily on the availability of data in the selection of appropriate indicators. An initial list of around 80 potential sector-wide indicators was identified. Because of measurement difficulties, a failure to reflect real performance, and other limitations such as a lack of stakeholder consensus, the list was narrowed to 17 and then to 13 core indicators for which data was readily available.
2.2.8 External relations benchmarking

Butcher, Howard, McMeniman and Thom (2002) focused on community service as a core activity of universities, and their report addresses practice and benchmarking issues with particular reference to teacher education programs. The report suggests the category 'External Impacts' be removed from the McKinnon et al. manual and a separate category for benchmarking in community service be created. This approach would give prominence to community service as the third major role that universities play. The report suggests the following elements be included in the category:

- community context
- community service plan
- community engagement processes
- community service learning
- community service outcomes.

However, the framework is ostensibly the same as McKinnon et al.'s and, as a result, suffers from the same inadequacies (explored in Chapter 5), particularly the segregation of functions into categories and subcategories rather than joining them up to generate stronger collaboration and learning across functions.


Following this theme of benchmarking the university's contribution to its regional community, Charles and Benneworth (2001), using the McKinnon et al. manual framework, developed a benchmarking approach for evaluating the regional connection of the higher education institution. The tool was developed as part of the Regional Mission series of projects for the Higher Education Funding Council for England in the nine designated English regions (Higher Education Funding Council for England 2001). While not an Australian study, its use of the McKinnon manual as its basis, and the expanding interest in university–regional engagement work in Australia and elsewhere, make it an approach to evaluation studies that warrants comment.
Whereas McKinnon et al. virtually ignored the external relations between the university and its regional community, Charles and Benneworth (2001) identified seven broad categories and 34 subcategories of regional development outcomes from a university's partnership performance with its regional community. The categories were regional governance, human capital development, economic competitiveness, social development, sustainable development, cultural development and equity issues. The Charles and Benneworth approach also goes further than McKinnon et al.'s on the issue of process by suggesting dialogue and consensus-based assessment through workshops and information dissemination. However, as with McKinnon and other benchmarking authors who use template-based approaches (including Butcher et al. 2002), it does not address learning for improvement, which involves all university, community and other relevant stakeholders working together in a collaborative way. By relying on the McKinnon et al. framework, the Charles and Benneworth (2001) approach continues the practice of separating, rather than joining up, functions where creative connections could be made to achieve improved outcomes.
2.3 Comments

While the literature is only a decade old, overall it suggests that the development of benchmarking as a tool for university improvement could be better advanced in Australia. Much of its history has been concerned with hard data measures and application in administrative support areas. Benchmarking of other university functions has been limited, and there is considerable scope to do more work in this area.
While there has been more university benchmarking activity in recent years,
particularly in administrative support areas like facilities management, technical
services and libraries, much of it has been concentrated in the larger universities as
part of their membership of international benchmarking associations. The larger
institutions have also employed management consultants to help them find their way
through the jargon and the process of benchmarking for performance assessment
comparison.
Benchmarking appears to have gone along two routes. The first emphasises quantitative assessment using performance indicators and auditing with the use of templates. It gives emphasis to performance assessment rather than improvement, and it segments rather than joins up functional areas, thereby limiting the learning process about where improvement can occur and limiting longer-term commitment to implementation.
In our view, the McKinnon et al. manual has contributed to a momentum among higher education institutions for viewing benchmarking in this way. From our research in this project we have concluded that such approaches do not capture the commitment of the involved stakeholders needed to build a momentum for improvement beyond the scorecard.
The second way of viewing benchmarking in the university situation uses terms like collaboration, organisational learning, inclusiveness, reflection, review, leadership and improvement. It is about connecting relevant stakeholders both within and outside the institution in a way that leads to knowledge exchange about why, what, where and how improvement might occur. This argument is developed further in Chapter 6.


3 METHODOLOGY

3.1 Introduction

As outlined in Chapter 1, the objectives of this project were to:

- add specific elements to the McKinnon et al. manual dealing with university complaints and admissions procedures for students
- explore prospects for improving the usefulness of the manual, and benchmarking generally, as a tool for university improvement.

Two methods were used to obtain the necessary information for the project: a survey of universities, and in-depth facilitated learning and discussion in six university case studies. This chapter explains how these two instruments were implemented and what they revealed about the application of benchmarking generally in universities.
Chapter 4 uses the information from the investigation to construct benchmarking templates for student admissions and student complaints, following the McKinnon format, as required by the original project brief. Chapter 5 presents the findings from the survey and case study investigation as they relate to the usefulness of the McKinnon et al. manual as a tool for university benchmarking. Chapter 6 draws on the workshop discussions in particular to design an alternative approach to university benchmarking for improvement based around principles of learning and collaboration.
3.2 University survey

3.2.1 Background

On 25 August 2003, 39 universities were sent a questionnaire (Appendix A.1) asking about their current use of benchmarking generally and their use of the McKinnon et al. manual in particular. Twenty-eight responses were received (a response rate of about 72%), including representation from rural and metropolitan, new and old, and large and small institutions across all states, as Table 3.1 shows.
Table 3.1: Characteristics of surveyed universities

Characteristic                 All universities    Survey response
State distribution
  New South Wales              10                  5
  Queensland                   8                   7
  Victoria                     8                   7
  South Australia              3                   3
  Western Australia            5                   4
  Tasmania                     1                   1
  Others                       4                   1
Metropolitan                   21                  16
Non-metropolitan               18                  12
More than 20 000 students      6                   4
Less than 20 000 students      33                  24

Source: S. Stevenson, M. Maclachlan and T. Karmel (1999); university benchmarking survey, 2003.


All correspondence about the survey was sent to the Vice-Chancellor of each institution, and the deadline for responses was set at 25 October to allow enough time for a considered response. In all cases, questionnaires were completed by senior management (Vice-Chancellor, Deputy Vice-Chancellor or Pro Vice-Chancellor).
3.2.2 Use of benchmarking

All but three responding organisations stated they had used benchmarking in some way in their operations, although the extent and nature of that use varied considerably. For many universities with benchmarking experience, usage was mainly for ad hoc assessment purposes, generally for developing performance indicators, for performance reporting, or for reviewing specific areas (e.g. financial planning). Some universities claimed they used benchmarking for strategic planning, while others used it in their quality improvement strategy for specific functional areas (particularly libraries, facilities management, research training and external examination assessment). Other uses of benchmarking were staff evaluation, professional accreditation and comparison with other institutions.
Some of the larger universities stated they used benchmarking as part of their membership of wider benchmarking programs such as the Association of Commonwealth Universities' Management Benchmarking Program.
Table 3.2 shows the proportion of responses to the survey for each of 19 identified
purposes for which benchmarking could be applied.
Table 3.2: University benchmarking usage

Category                                               Proportion of total responses (%)
General management improvement                         89
Strategic planning                                     70
Research performance                                   70
Substantiating achievements                            67
Functional area improvement                            63
Teaching & learning performance assessment             56
Research training performance                          48
Copying in best practice                               44
Transparent planning                                   41
Opportunity identification                             41
Building organisation-wide commitment                  41
Partnership strengthening                              37
Staff development                                      30
Internal resource allocation priority                  26
Dissolving boundaries within & between universities    22
Strengthening service support links                    19
Keeping ahead of competition                           19
League ladder assessment                               15
Obtaining external performance-based funding           11

Source: University benchmarking survey, 2003

Interestingly, benchmarking has mostly been seen as a tool for general management improvement, strategic planning and performance assessment in certain targeted functional areas of university activity, such as research and teaching and learning (each more than 50 per cent of responses). Building organisation-wide commitment and partnership strengthening were viewed as only moderately important objectives for undertaking benchmarking (37 to 41 per cent of responses). Areas such as staff development, collaboration, service support effectiveness and resource allocation were rated low as uses for benchmarking by universities (30 per cent of responses or fewer).
3.3 University case studies

3.3.1 University involvement

Six universities took part in a program of workshops and discussion to research the usefulness of benchmarking in the university situation, including the McKinnon et al. manual approach. The process was designed to enable the participating universities to work towards forming their own benchmarking framework in areas of concern they selected. This section describes the process put in place and some of the results in relation to benchmarking generally.
The six case studies were chosen to explore how well benchmarking could be undertaken across a number of different university circumstances and functions, while ensuring good coverage of student admission and student complaint processes, these being a specific requirement of the project.
Universities were invited to participate in the project following presentations by the Department of Education, Science and Training (DEST) to an Australian Vice-Chancellors' Committee meeting of Deputy Vice-Chancellors (Research) and Pro Vice-Chancellors (Research), and a workshop presentation on the project by the authors at the 2003 Australian Universities Quality Forum (refer: www.auqa.gov.au). After a period of negotiation, the following six universities were included as case studies in the project:

- Curtin University of Technology (WA)
- Griffith University (Qld)
- Monash University (Vic.)
- Royal Melbourne Institute of Technology (Vic.)
- Swinburne University of Technology (Vic.)
- University of the Sunshine Coast (Qld).

Each of the six universities identified areas of their activity they believed needed
some improvement and where they wanted to apply benchmarking processes. Table 1
shows the areas of focus for each university in the project. In summary:

- Four universities chose to examine their student complaint and grievance processes (Curtin, Swinburne, Monash and Sunshine Coast).
- Three universities chose to examine their student admission processes (Monash, Swinburne and Sunshine Coast).
- Three universities wanted to examine their engagement relationships with their local and regional community (Royal Melbourne Institute of Technology, Swinburne and Sunshine Coast).
- Two universities wanted to explore their teaching and learning activities in the light of the added focus given to this in the government's reform package for higher education (Swinburne and Sunshine Coast).
- One university chose to examine student examinations and assessment processes and policies (Curtin).
- One university examined its research office support functions (Griffith).

3.3.2 The workshop process

Each of the six university case studies took part in a four-stage discussion process designed to facilitate learning about how a benchmarking regime that would lead to improved processes and outcomes might be put in place in the university. Within this program they also reviewed whether the framework put forward by McKinnon et al. would assist this aim.
This method was adopted as an innovative approach considered likely to elicit a richer picture of the circumstances of benchmarking practice. An integral element of the process design was to create a learning situation where participants had an opportunity to collect and consider information, share their understandings, and arrive at a consensus view about future benchmarking goals and directions. Participants in the process included a broad spectrum of stakeholder interests in the function being reviewed, at all levels of authority in the university.
The Australian Universities Quality Forum workshop in June 2003 was very helpful in refining ideas and in obtaining data to use in the process. The Project Steering Committee managed by DEST also facilitated discussion of aspects of the eventual process.
The four-stage process referred to above is discussed below in more detail.
3.3.2.1 First workshop

The first workshop lasted between three and four hours and explored the internal and external context in which the university was operating. It facilitated an agreed understanding within the group about language, definitions and concepts associated with benchmarking. Participants at the first workshop also discussed the directions benchmarking might take in the university and the role that the McKinnon et al. manual might play in this.
At each university, between 15 and 25 participants took part in the first workshop, which generally followed the program at Appendix A.2.
Shortly after the first workshop in each of the six universities, a written report of the workshop proceedings, along with a discussion kit drawing on the emerging issues identified in the first workshop and in the literature, was produced and provided to the group.


3.3.2.2 Ongoing discussion

The discussion kit was tailored to cover the issues of each university identified at the first workshop. The kit comprised a number of modules, including an introduction and generic subject matter about benchmarking, with tasks designed to stimulate further, more specific small-group discussion and learning within each university without the direct assistance of an external facilitator (Appendix A.4).
The internal group discussions using the kit took place over a four- to six-week period following the first workshop. Our assessment of this process, however, is that considerably more time needs to be made available for this phase of discussion because of the practical difficulties of getting all stakeholders together into working groups to fully formulate agreed directions. To be most effective, the process requires a significant commitment of time from people who are already stretched by their ongoing work. Working group sizes for this phase varied from five participants up to ten.
3.3.2.3 Second workshop

Following the kit-based discussion period, a second workshop, again of three to four hours, was held to discuss the findings from the working group discussions and to agree on a benchmarking program that could be taken forward within the university, including a very practical look at the impediments and opportunities that needed to be addressed in the organisation to give it effect (Appendix A.3). A second objective of this workshop was to assess how effective the McKinnon et al. manual might be in the university's benchmarking formulation exercise and to work towards drafting a benchmark assessment template for the areas each had identified as a priority.
Between 10 and 20 participants took part in each workshop in the second series.
3.3.2.4 Combined discussion

The final element in the case study discussion process about university benchmarking was to share experiences across the six case studies. Difficulties with participant availability at that time of the year (November) meant this had to be done through a teleconference to meet the project timetable.
The purpose of this element of the project methodology was twofold:
1. to identify common issues and views across the sector in relation to the
application of benchmarking generally in the university and in relation to the
McKinnon manual in particular
2. to see whether there was a common view about the conclusions being formed
from the project, and how an alternative approach to university benchmarking
might be implemented in the sector.


3.3.3 Case study universities

3.3.3.1 University of the Sunshine Coast

The University of the Sunshine Coast asked to be involved in the project for several
reasons: it wanted to learn more about benchmarking as its experience to date was
limited; and it wanted to further develop its strategic agenda in relation to its regional
development objective, its teaching and learning efforts, its student complaint and
grievance processes, and its student admission processes.
Fifteen staff from senior management, academic staff across the three faculties, and
general staff at various levels from several areas of administration dealing with
student admission and complaint processes took part in the facilitated workshops.
Participants found the workshops and discussion process helped them better understand the difference between benchmarking for assessment and benchmarking for organisational improvement, and clarified much of the mythology and 'corporate speak' that traditionally accompanies benchmarking. They also acknowledged there were benefits for themselves from the process in regard to undertaking their particular jobs and their own actions.
The university committed itself to implementing the outcomes of the workshops and
discussion. Reports from the workshops and discussion were provided to the
University's Teaching and Learning Committee, Research Committee and Audit
Committee, and the profile and understanding of benchmarking in the university have
been raised accordingly.
3.3.3.2 Swinburne University of Technology

The benchmarking workshops were centred on the Lilydale campus of the university and involved 15 academic and administrative staff, including senior management as well as desk officers. The campus wanted to develop a better system for handling student admissions and complaints, as well as to further develop its teaching and learning strategy and its regional engagement practice with the local community.
The Academic Unit review being undertaken within the campus is committed to developing a benchmarking agenda for 2005 and embedding it in the campus's various programs. As a result, the campus is now interested in exploring benchmarking in other areas of university activity, such as student examinations and assessments, and in learning from other universities about data choices.
3.3.3.3 Monash University

Participants in the benchmarking workshops at Monash University numbered 25 and comprised staff with responsibility for administering student admission and complaint processes. The Centre for Higher Education Quality encouraged the involvement of the university in the project, as many faculty and central support areas were interested in internal benchmarking using agreed performance indicators. The group developed a table of best practices and performance indicators for student admission and student complaint-handling processes, providing the foundation for internal and external benchmarking.
3.3.3.4 Curtin University of Technology

More than 25 participants took part in the benchmarking workshop process at Curtin University, where the focus was on the student complaint process and on achieving consistency in student examinations and assessment procedures and processes across the university. The workshops and discussions included senior and junior administrative staff, senior academic staff from a number of faculties, students, relevant external agencies and the student union body.
A report based on the benchmarking project workshops is currently being put to senior university management, and the Teaching and Learning Committee of the university is already taking up the student complaints benchmarking issue.
3.3.3.5 Royal Melbourne Institute of Technology

Around 20 participants took part in the workshop process at Royal Melbourne Institute of Technology. Participants included staff in administration and faculties, staff from the university's Hamilton Centre, and community representatives from the City of Hamilton and North Melbourne communities where the university has a presence. The workshops were organised within the university by the Centre for Regional Partnerships to build on work it was already doing on developing indicators of university and regional community engagement.
3.3.3.6 Griffith University

Twelve participants took part in the benchmarking workshop process at Griffith University. The focus for the project at Griffith was the Office for Research, which was exploring how it could develop meaningful benchmarks and performance indicators to assess its effectiveness in supporting research at Griffith and the university's objective of becoming a top 10 research university.
3.3.4 Evidence from the case studies

During the first workshop in each of the six case studies, groups were asked what they thought were the issues that had arisen from their university's efforts at applying benchmarking processes to date. What follows is a summary of the issues identified and recorded by participants, compiled from the issues raised across all workshops.
3.3.4.1 What does benchmarking mean?

There is considerable uncertainty as to what benchmarking is really all about and what its benefits for the organisation and its staff and students might be. There is confusion between benchmarking, quality auditing and the need for quantitative key performance indicators.


For some universities, benchmarking in its crudest form is simply about identifying some key performance indicators and downloading a range of DEST sector-wide data to support the measures. In such processes, many felt the university senior management team simply wanted results that showed the organisation in the best light. Where universities held this view, any benchmarking exercise was seen as an additional task and an additional cost, rather than a natural part of the organisation's ongoing activity and an investment in improvement. There was therefore some confusion in concepts and terminology between the objectives of benchmarking, quality auditing and performance indicators.
Workshop responses suggested that the current quality assurance auditing program and the accountability environment had caused universities to see benchmarking as quantitatively assessing and reporting on performance to outside agencies, at the expense of focusing on the university's own structures, processes and behaviours that determine the performance outcome.
In summary, what we found was a widely held view that benchmarking was about
assessing performance against a set target. However, we also found a view that
benchmarking should be about the underlying practice behind the performance
identified as needing improvement.
3.3.4.2 What to benchmark against

There was a view among workshop participants that there was so much difference
between universities, and the functional units within them, that it was meaningless to
make serious inter-organisation performance assessment comparisons for
benchmarking purposes even where universities broadly saw themselves as part of a
group or network of like universities. This view about differences became more
pronounced as the workshop process evolved.
3.3.4.3 Use of data

When data was used for benchmarking, the experience of universities was that the most readily obtained data was often made to fit the evaluation circumstances, and that non-quantitative information might convey a better description of the situation. Qualitative information and stories tend not to be included because they take time to collect, are more expensive to collect than easily downloaded statistics, and are not easily used for comparative purposes. This attitude stems from the view that benchmarking is an external control tool and an extra task to add to those already being undertaken.
3.3.4.4 Commitment by senior management

While university staff recognise that benchmarking is a useful improvement tool and a
necessary part of the accountability environment in universities, they are deeply
sceptical about current applications of benchmarking. Benchmarking is seen by some
staff as an excuse to be able to reduce possible options for improvement to a lowest
common denominator. It brings to a head the conflict between quality and quantity in
education.


There was a belief that some senior managers pay only lip-service to benchmarking processes aimed at improvement and are interested in performance indicators only from a numbers perspective, to prove how well they are going. There appears to be no commitment to implement reforms highlighted by benchmarking outcomes. The view of the workshop participants was that benchmarking tends to be driven by individuals in the university with a keen interest in the subject, so different parts of the university take different approaches. While benchmarking may be discussed at the highest levels of the university, the discussion seems to focus on productivity statistics rather than on the quality improvement aspects of the process.
A change of culture is needed in relation to benchmarking and the question is how to
bring this about. There needs to be strong leadership from senior management for
across-the-board improvement as a learning organisation.
The workshop process highlighted the need to have the leadership of the university
committed to making the necessary changes for the organisation to improve. The
involvement of university leadership in the workshop and discussion process varied
across case studies. Where senior staff were directly involved in the workshops, there
was an immediate commitment to implement new approaches. In other situations
participants were able to identify pathways to bring their recommendations to senior
management for attention.
3.3.4.5 Resources for benchmarking

In the initial workshop, benchmarking was seen by participants as a management tool with a cost that can be minimised and that, as a result, tends to be only superficially applied. For most staff, despite some interesting examples to the contrary, it is seen as just another task to be performed rather than something that can be an integral part of day-to-day operations and, over time, bring fundamental improvement to what is done. Staff contend they do not have the quality time necessary to step back from their daily work to reflect on existing practice and suggest positive reforms. For those staff who might make the effort to benchmark their own work, there is a lack of recognition and reward. It is important that benchmarking as a process is owned by all staff and does not remain a silo activity undertaken by a specialist unit within the university, or the responsibility of senior management alone. All staff have a role to play in improvement.
3.3.4.6 Being in or of the organisation

The workshop process resulted in an understanding that real organisational change and improved outcomes come about from such factors as:

- learning and understanding across functional areas and levels of responsibility
- involving other relevant stakeholders, such as students, industry and government
- using a team approach.

Such an inclusive approach enables people to feel they play an important role within the organisation and can make a real contribution to its improvement. Workshops included stakeholders across functional areas (administrative and academic) at all levels within the university and, in some cases, students, the student union, and external stakeholders from the local and regional community, state government and industry. This participation was a strong base from which to emphasise the importance of inclusiveness.
3.3.4.7 The long haul

An effective benchmarking approach to organisational improvement, aimed at building a sustainable organisation, results from a commitment to continuous review and the implementation of findings. Sustainable organisational improvement does not result from short-term bursts of energy mustered on an ad-hoc basis to satisfy the periodic accountability requirements of an external agency.
The workshop and discussion kit process was put in place over a four-month period. This period allowed participants time for deeper reflection on the issues associated with benchmarking and led them to recognise that organisational improvement is a long-term process that involves changing behaviour and culture, ensuring consistency across the organisation, building confidence in addressing and managing change, supporting team building and committing resources.


4 BENCHMARKING TEMPLATES FOR STUDENT ADMISSION AND STUDENT COMPLAINT PROCESSES

4.1 Background

One of the requirements of this project was to add specific elements to the McKinnon
et al. manual dealing with university complaint and admission procedures for
students, as these were not considered in the original publication.
Each of the six universities participating in the project was asked to identify
functional areas with which they felt they needed assistance to develop a
benchmarking for improvement program. Four universities included student
complaint processes and three universities included student admission processes
among the functions they wanted to explore from a benchmarking perspective.
All universities were asked to consider the appropriateness of the current template approach provided in the McKinnon et al. manual. Comments about the manual's approach to university benchmarking are detailed in the following chapter. This chapter presents McKinnon et al.-style templates for these two important university functions, based on the workshop process.
4.2 Student complaint processes

4.2.1 Universities involved

The following four universities were involved in the benchmarking workshop process to explore improvements in their student complaint processes:

- Swinburne University of Technology
- Monash University
- University of the Sunshine Coast
- Curtin University of Technology.

4.2.2 Issues in student complaint benchmarking

There were many issues raised during workshops in these four universities about the way they deal with student complaints and grievances. The following four issues were raised consistently in the workshops:

- What to measure
  The main issue here is to be sure to measure what is actually happening in terms of the complaint process, and not just rely on an end-result measure because ready data is available. New measures that highlight the underlying processes associated with a student complaint may be necessary. It is important to be aware of all the various channels or gateways through which a student complaint or grievance may be made, to enable data gathering to occur. Because different universities use different complaint reporting and monitoring processes, it is difficult to make inter-university comparisons across the whole of the complaint process.

- Learning about where improvement can occur
  It is important to fully appreciate the underlying circumstances that relate to the particular complaint being made. Collecting statistics about the number of grievances or complaints made is by itself unhelpful for improvement. There needs to be some dialogue to learn why a breakdown between the student and the university is occurring. A breakdown may be the result of a policy, procedure or practice in place in the university, or it may result from a cultural issue or some outside force not directly related to the university – for example, a visa situation for an international student.
  In relation to policies and procedures about complaint processes, questions that can be asked include whether suitable policies and procedures exist for dealing with student complaints, how effective they are, in what way stakeholders were involved in their development, and whether all relevant staff know about them. In relation to the implementation of policies and procedures, it is important to know whether there are induction processes for new staff, or mentoring and training programs for existing staff, to ensure effective implementation.

- Data
  It is important to know what data is currently being collected to assess complaint processes and why, how records of complaint details are being kept, and what is done with them to ensure improvement can occur. The focus should not be on the simple recording of statistics at a particular point but on how a solution to the complaint can be reached. It is not worth collecting the data unless there is a process that can use it to enable improvement to occur. Some universities may hesitate to assiduously collect complaint data because, in a superficial accountability framework, complaints may reflect on the integrity of the university.

- Good practice
  Good practice was seen where there is a culture of learning and the opportunity for improvement, and where staff have a chance to be heard, are valued, and are supported at senior level and through training and mentoring. Good practice was seen where the complainant has no fear of retribution. It was also seen to occur where staff are not threatened by receiving feedback, where senior management are involved, where there are regular meetings with students and staff, and where complaints are seen not as necessarily a bad thing but as an opportunity to improve.

4.2.3 Benchmarking template

In accordance with the aim of adding student complaint procedures to the McKinnon et al. pro-forma framework, the following proposals emerged from the workshop and discussion process:

- Benchmarking rationale
  Every university needs a transparent and effective complaints policy and procedures that cover all aspects of students' involvement with the university. Students must be confident that they have access to an independent mechanism and process that handles any dissatisfaction or grievance they might have with any element of policy, procedure or practice of the university, and that a meaningful resolution can be reached within a reasonable set time period.

- Sources of data
  These might include student and staff feedback through questionnaires, routine service feedback forms, focus groups, and analysis of the records of complaints lodged and responses to them across all service areas. Issues to be canvassed should include complaint numbers logged, area of complaint, numbers resolved, appeals processes undertaken, complaint turnaround time, clarity of process, communication and advice for both students and staff, and mutually agreed satisfaction with the outcome.

- Good practice
  Good practice complaints policy and procedure should be clear for both students and staff. Statements about the practice should be readily accessible and informative. It should offer both formal and informal ways for the student to feel confident in making a complaint, whatever their home country culture, and in any subsequent appeal avenues.
  The process of complaint handling should be efficiently resourced, and staff should receive appropriate training and support in carrying out this role. There must be effective desk procedures for receiving complaints, as well as for their recording, processing and reporting. There must be processes for discussing and following up trends in complaints, for measuring the effectiveness of, and student satisfaction with, policy and procedures, and for translating this feedback into policy and procedure improvement.

Table 4.1 contains the benchmarking template created in the workshops and discussions. The template is based on McKinnon et al.'s performance rating scheme, which ranges from a low level (1) through to a high level (5).


Table 4.1: Template for benchmarking student complaints

Level 1
- Absence of any formal complaints policy and procedures.
- Staff and students are unaware that a complaints policy and procedures exist within the university.
- Feedback from students that their legitimate complaints are not being addressed.
- Feedback from students indicates that complaints are not being translated into improvements in policy, procedure and practice on the ground within the university.
- Complaints-handling staff at all levels of the university do not feel they are adequately trained or supported by management in carrying out their functions in this area.
- Complaints-handling staff do not believe their views about what should be good practice count in any improvement agenda in this area of the university.

Level 3
- Complaints policy and procedures are in place, but no evidence of an associated framework of improvement.
- Half of all staff and students are aware of a complaints policy and procedures within the university.
- Feedback from students indicates all areas of the university have mechanisms in place for receiving and collating complaints.
- Complaints are reported and discussed within an appropriate forum on a regular basis.
- Some complaints-handling staff at the university feel they have been given appropriate training and are supported by management in carrying out their function in this area.
- Some complaints-handling staff believe their views about what should be good practice are valued and included in any improvement agenda in this area of the university.

Level 5
- Complaints policy and procedures are in place that are easily accessible and have an associated framework that enables improvements over time based on information feedback mechanisms.
- The majority of staff and students are aware of the complaints policy and procedures within the university.
- Feedback from students that they are confident their complaint in any area of the university is being handled effectively.
- A process for ensuring complaints are acted on appropriately and translated into improved policy and practice in the designated area.
- All complaints-handling staff believe they have been given appropriate training and management support in carrying out their functions in this area.
- The majority of complaints-handling staff believe their views about what should be good practice are highly valued and are included in any improvement agenda in this area of the university.

(Levels 2 and 4 are intermediate points on the five-point rating scale.)

4.3 Student admission processes

4.3.1 Universities involved

The following three universities were involved in the benchmarking workshop process to explore improvements in their student admission processes:

- Swinburne University of Technology
- Monash University
- University of the Sunshine Coast.

4.3.2 Issues in student admission benchmarking

There were many issues raised during the workshops and discussions relating to the processes employed by universities in dealing with student admissions. The following issues were most consistently raised:

- Where to start and end with admission processes
  Different universities saw the student admission process beginning and ending at different points. For some, admission processes included initial marketing to potential students, potential student inquiries, registering for admission, fees, scholarships, graduation, and post-graduation services. Others saw the process as restricted to the formal process of admitting students to the university. These differences between institutions make university comparisons in this area difficult.

- Centralised and decentralised admissions
  In some universities, the admissions office dealt with the whole student admission process on behalf of the faculties. In others, the faculties played a larger role. In these circumstances, consistency of treatment of admissions across the university becomes important.

- The tension between quantity and quality in outcomes
  On the one hand, the university seeks to market itself to attract a greater number of students; on the other hand, it wants to get the best students so that its completion and retention rates will look better. Benchmarking needs to address the particular returns the university seeks to gain from an improvement program before embarking on it.

- Good practice
  In a benchmarking for improvement program, good practice was seen as having:
  - effective staff training programs in place
  - effective student admission policies and procedures in place (desk, phone and web)
  - good channels of communication with all areas of the university
  - accurate and relevant information
  - quick turnaround times
  - regularly reviewed admission policies and procedures
  - test audits of the turnaround times of responses to inquiries of all kinds
  - surveys of student and staff opinion of admission staff services.

4.3.3 Benchmarking template

In accordance with the aim of adding student admission procedures to the McKinnon et al. pro-forma framework, the following proposals emerged from the workshop and discussion process:

- Benchmarking rationale
  Every university needs efficient core student administrative services covering enquiries, admission, progression, fees and dues, graduation and scholarships.

- Sources of data
  Data can come from test audits of the speed, accuracy and comprehensiveness of responses to personal, telephone and written enquiries and applications, and from data needed for student administration and reporting purposes. There may be surveys of student opinion about the responsiveness of the university to their enquiries and applications.

- Good practice
  There should be speedy, accurate, clear and complete responses by administrative staff to enquiries from students, potential students (and their relatives), past students and staff, and there should be further follow-up to ensure complete enquirer satisfaction. Assessment turnaround should be kept to a minimum by having, across all relevant areas of the university, systems based on well-defined channels of communication that anticipate students' needs and indicate key dates and clear pathways for them to follow in their application. There should be computerised data recording and reporting, and systems in place that automatically highlight important problem areas in the admission process so they can be attended to quickly.
  There should be well-established systems to ensure appropriate and ongoing levels of staff training and knowledge, and regular reviews and updates of university admission-based materials (including via the web).
Table 4.2: Template for benchmarking student admissions

Level 1
- Long response time and poor information provision to admission enquiry.
- Application assessment time is slow.
- Poor or reactive communication within university in responding to admission requirements.
- Feedback from students and staff on admission service provided by the university indicates it is unfavourable.
- Student needs to make several contacts with admission service to obtain necessary information.
- No follow-up service of any kind provided by university, with the applicant expected to do the following up in other areas of the university.
- There is little or no appropriate staff training in relation to handling student admission procedures.
- There are only limited relevant information bases supporting the admission process that can be readily accessed by staff and students.
- There is no review of the admission process carried out to see where improvements can occur.

Level 3
- Response is variable and information supplied to admission enquiry does not fit all client needs.
- Application assessment time is reasonable.
- Variable channels of communication within university in responding to student admission requirements.
- Feedback from students and staff on admission service provided by the university indicates it is equivocal.
- Most information is provided on first enquiry and applicant (or potential applicant) is mostly provided with clear steps to follow.
- Follow-up service is ad hoc and not systematic, and the enquirer may need to go to other areas of the university to satisfy their particular requirements.
- Half of all relevant staff in the admission process are appropriately trained.
- Information bases supporting the admission process are not fully complete.
- Review of the admission process is uncoordinated and irregular.

Level 5
- Fast response times on the same or next day, and information provided to admission enquiry is comprehensive.
- Application response time is a priority and fast.
- Established service agreements in place across university that enable a proactive approach to anticipated student admission requirements.
- Feedback from students and staff on admission service provided by the university indicates it is of a high standard.
- All necessary and anticipated information is provided and clear steps are defined for applicant to follow, including dates.
- There is systematic follow-up on all information requirements as part of the service provided in the admission process.
- There is appropriate and ongoing training and development for all staff involved in the admission process across the university.
- There is easy access to extensive and relevant information bases by staff and students to assist with the admission process.
- There is a regular and ongoing review of the admission process involving all relevant stakeholder interests including staff and students.

(Levels 2 and 4 are intermediate points on the five-point rating scale.)

37
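
A template of this kind is in essence a small data structure: criteria crossed with level descriptions. As a minimal sketch (ours, not McKinnon et al.'s), it could be held as follows for a group self-assessment; the criterion names are hypothetical shorthand for the rows of Table 4.2, and only levels 1, 3 and 5 carry descriptors, as in the manual.

    # Two rows of Table 4.2, abbreviated; the remaining rows follow the same shape.
    ADMISSIONS_TEMPLATE = {
        "response to enquiries": {
            1: "Long response time and poor information provision.",
            3: "Response is variable; information does not fit all client needs.",
            5: "Same- or next-day response with comprehensive information.",
        },
        "assessment turnaround": {
            1: "Application assessment time is slow.",
            3: "Application assessment time is reasonable.",
            5: "Application response time is a priority and fast.",
        },
    }

    def self_assess(ratings: dict) -> float:
        """Average the 1-5 ratings a stakeholder group agrees for each criterion."""
        scores = [ratings[c] for c in ADMISSIONS_TEMPLATE if c in ratings]
        return sum(scores) / len(scores) if scores else 0.0

For example, self_assess({"response to enquiries": 3, "assessment turnaround": 4}) returns 3.5. As the following chapters argue, however, the number matters far less than the dialogue that produces it.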

4.4 Comments

In this chapter we have presented benchmarking templates, following the McKinnon
et al. pro-forma approach, for student complaints and for student admission processes.
As the following chapter will show, we are of the view, following the university
survey and the case study workshop and discussion process, that the McKinnon et al.
approach is insufficient, and that its top-down, template-based framework limits the
improvement that can occur in the university.


5 COMMENTS ABOUT THE MCKINNON MANUAL

5.1 General perceptions

Feedback from the university survey, the six university case study workshops and the
associated discussion kit process highlighted the following points about the usefulness
of the McKinnon et al. benchmarking manual for university improvement.
On the positive side, the manual was seen as:

- an instrument that stimulated more widespread interest and awareness in benchmarking across the university sector than did international benchmarking organisations, which involved only a few of the larger universities
- a reference document that could be referred to from time to time by management to get ideas for performance assessment measures in particular functional areas
- a tool that could assist in the quality audit preparation process
- a manual that could stimulate a more structured and consistent sector-wide approach to benchmarking at a practical level.

However, it was recognised that the environment in which universities now operate
has changed and will change further, particularly following the reforms outlined in
Our universities: Backing Australia's future (Nelson 2003). These changes mean that
a different approach to university benchmarking is required. Such an approach needs
to recognise the diversity among universities, the need to move beyond simple
periodic reporting of results, an increasing societal expectation to be involved in
determining audit results rather than simply being told about them, and the need to
deliver better and sustainable outcomes to a wide range of relevant and interested
stakeholders.
While a large number of universities indicated in the survey that they knew about
McKinnon et al.'s benchmarking manual and had used it superficially, very few
believed their university's performance had actually improved as a result of using it.
Many found the manual difficult to comprehend, complex in detail and difficult to
apply to their specific set of circumstances. If such a document and approach were to
be promoted or required, most universities believed it would need to be made simpler
and much more accessible.
There was a view that the McKinnon et al. manual had its prime focus on accounting
for performance, rather than on encouraging better practice and change through
organic internal reflection, collaboration, learning and action over the long term. This
view was held despite one of the stated purposes outlined on page 1 of the manual, viz:

It provides senior staff with tools to ascertain performance trends in the
university and to initiate continuous self-improvement activities. (p. 1)
[emphasis added]

There was a general view that the manual significantly fails in the second part of this
objective, namely to initiate continuous self-improvement. There was also a view
that the first part of the objective, in limiting involvement to senior management,
ignored the equal involvement of a wide range of relevant and interested stakeholders
within and outside the university.
According to one comment in the workshops, the manual "leaves you hanging at the
performance assessment stage with no guidance to help you improve what you do".

Other comments about the McKinnon manual from the university survey included:
that the manual "was overly prescriptive. The manual makes too many assumptions
about how things should/must happen"; that allowance needs to be made for the range
of ways the manual will be used; and that the overall presentation could be more
dynamic.

"The major criticism from [name of university]'s perspective is that the manual
was geared to the traditional university experience and did not serve to capture
the context of a regional/newer/flexible learning/non-traditional student
base/non-research-intensive institution in many areas."

"The key issue is clearly what institutions want out of benchmarking. How is it
going to help us improve? Thus good practice should be an essential component.
What we don't want is simply numbers and figure tables."

"The university believes that the essential character of benchmarking should be
internally focused. It should aim to facilitate and enhance the management and
effective utilisation of resources and the quality of outcomes, and not be regarded
as a mechanism through which sector-wide assessments of performance or
rankings of institutions are contemplated. This view would therefore inform our
position regarding any further development of the [McKinnon et al.] manual."

There was a view that the manual was a "one size fits all" tool that did not allow for
diversity either between universities or across functions within the one university.
Where the manual was used (for example, in benchmarking library, facilities
management and technical support areas), it had to be rebuilt from the ground up.

Two-thirds of higher education respondents to the survey claimed they had used the
McKinnon et al. manual in some way in their institutions; the remaining third had not
used the manual at all. Some of the universities that had not used the manual believed
it was more suited to the needs of the larger universities than to smaller
non-metropolitan institutions.
Universities saw the manual as useful in helping them prepare for the Australian
Universities Quality Agency quality audit process by enhancing awareness among
staff of the issues involved in benchmarking. However, there is limited evidence of
the use of the manual in the portfolios presented to audit panels, according to the
Executive Director of the Australian Universities Quality Agency (Martin 2003). The
manual has been little used for other purposes, such as programs of organisational
improvement and partnership building.
Most universities saw the McKinnon manual as a peripheral rather than a core tool for
improvement purposes, saying it did not fit their particular circumstances.
Nevertheless, respondents did state that they felt the manual provided a stimulus to
implementing a logical measurement regime. There has been no wholehearted or
consistent adoption of the manual by any of the universities responding to the survey.


Table 5.1 shows the ways in which universities that claimed to have used the manual
believed it could be improved to enhance its usefulness to the higher education sector.
Table 5.1: Improving the effectiveness of the McKinnon et al. manual

Category                                                 Proportion (%)
Training program                                                     83
Referral service                                                     78
User guide                                                           72
More functionally specific detail                                    56
Specifically tailored to the needs of the institution                44

Source: University benchmarking survey, 2003

Most users responding to the survey felt there was a need for some form of additional
assistance in implementing a benchmarking regime, in the form of a training program
or access to a central referral service or guide, rather than for design changes to the
manual itself. Universities believed their individual circumstances were so different
from each other that a prescriptive tool like the manual offered only generic and
superficial value rather than practical and useful guidance.
In the workshops, which facilitated an in-depth discussion of the manual pro forma,
real concerns were expressed about the approach to benchmarking as outlined by
McKinnon, and these concerns are elaborated below.
5.2 Specific perceptions

This section addresses specific concerns regarding use of the McKinnon et al. manual.
5.2.1 Performance versus improvement

The most significant complaint about the McKinnon et al. manual is that it focuses, in
its pro-forma approach and language, on providing an off-the-shelf tool for readily
assessing relative performance, rather than focusing on engendering a behaviour of
collaborative improvement within the organisation. This view is supported by the
workshops and the results of the university survey.
From the university benchmarking usage survey it is clear that where the manual has
been used, it was for a limited range of management functions. These included use as
a reference to gain ideas and criteria for developing key performance indicators for
performance reporting, identifying areas for course evaluation, and for financial
planning. Most universities saw the manual as peripheral for organisational
improvement.
The manual did not encourage deeper thinking about what really is involved in
benchmarking and where improvement can best occur to get the best return on outlay.
In the workshops, comments stressed the importance of sharing knowledge and
experience in areas of practice, of not being part of a prescriptive and rigid process
with implications tied to funding, and of fostering an experience based on flexibility
and commitment, connectivity and reflection towards doing a professional job. There
was a need to change the culture of the organisation through collaboration, knowledge
exchange, learning, and recognition and reward. While the manual was seen as a tool
that could assist senior management to respond to outside accountability demands,
there was a concern that such use would lead to staff cynicism about what
accountability organisations wanted to use the information for.
5.2.2 Reductionism

In the name of consistency, McKinnon et al. reduce a complex and diverse
organisation to a template of 11 functional areas, 67 benchmark sub-elements and
three benchmark types (leading, learning and lagging). This was seen as too much
segmentation of both function and process. It detracts from the connectivity that
exists in reality, which needs to be further fostered across the organisation and with
its outside stakeholders, and from the need to ensure these connections are
incorporated and enhanced within the organisational improvement process. Clients
and customers of universities do not see this segmentation of the university as
important; they see only the outcomes and the processes employed to achieve them.
Importantly, by reducing the capacity for connectivity across stakeholder interests, the
compartmentalised template approach was seen as limiting learning and access to new
knowledge that can assist with functional improvement.

There was also a general feeling that this type of categorisation of university activity
ran the risk of being too prescriptive and insular an approach for a diverse sector in a
changing world. The manual therefore tended to exclude more than it included, and
was not helpful when it came to tackling the process of change.
The end result is that, to be comprehensive, the McKinnon manual would have to be
extremely large to capture all university activity. That is, the logic of the manual
implies an ever-increasing number of (probably smaller) activities to be considered
for assessment, all of which would require their own template. For a complex and
changing organisation like a university, such an approach is simply not feasible. In
reality, on the other hand, the process of change, while difficult, could achieve much
with just a few process steps and by building connections within and across functions
and with other stakeholders.
5.2.3 Language

The language used in the manual was seen as turgid, jargon-based and off-putting to
the wide spectrum of university stakeholders (i.e. those outside senior university
management), as well as to relevant external stakeholders who would have a role in
an organisation's improvement process. Culture, behaviour and participation in an
organisation are seen to be as important as (if not more important than) any structures
that are put in place. Many found the manual difficult to follow and therefore tended
to dismiss its usefulness on that basis.
5.2.4 Leading, learning and lagging benchmarks

The distinction between what was a leading, lagging or learning benchmark was seen
by all workshops as unclear, confusing and generally unnecessary. The whole process
of organisation improvement has to do with learning, and this should not be
compartmentalised into just a few sub-elements of what should be a holistic process
of review. The general feeling was that this differentiation should be dropped.
Benchmarking is more likely to be accepted as a tool for improvement if it is kept
relatively simple.
5.2.5 Good practice versus better practice

McKinnon et al. define "good practice" on pages 7 and 8 of the manual and use the
concept as a proxy for so-called "best practice", believing that term to be too
controversial. However, the manual gives no indication of how the descriptions
of good practice contained in each benchmark are actually arrived at or what
justifies their being regarded as good practice. There is also no reference to relevant
practice that might exist outside the university system; nor is there any recognition
that good practice might change over time, given new systems, new technology and
new policy directions.

Our preference here is to use the concept "better practice", because it is immediately
relevant to the circumstances at hand and is about improvement from whatever base
level is relevant to the university in its development phase. The question arises as to
what should happen in an organisation if the benchmarking process suggests that
good practice has been attained. Use of the term "better practice" connotes a
never-ending improvement process.

A related question is who determines what the description of good practice should be.
It might be that this assessment process should include input from a "critical friend"
(not unlike what has been occurring in trial audits that prepare universities for the
Australian Universities Quality Agency audit process), and from relevant institutions
and customers outside the university system. In the McKinnon-type templates
presented for student admission and student complaint processes in Chapter 4, the
"good practice" definition comprised the agreed views of around 100 stakeholder
participants in the workshop process for this project, with a range of perspectives and
roles associated with the university.
5.2.6 Assessment levels

The question was raised during the workshop process of what attaining a level 5 in the
McKinnon manual actually means for the process of organisation improvement. Is it a
time to celebrate and do no more? Is it particularly relevant for all stakeholders, or
does it simply make senior management feel good? Is the path between level 1 and
level 5 a straight-line course of action, and how can this be when the rate of learning
will vary from area to area and from time to time, depending upon a whole range of
external and internal circumstances?

The view coming out of the workshop process is that a level 5 can only be achieved
when learning is exhausted, which of course should never happen in an improvement
process. A level 5 therefore puts an artificial ceiling on improvement that ought not be
there. There should be continuous review and continuous learning about ways to
make the organisation function better. As a result, the term "better practice" was
adopted to measure whether improvement had occurred, rather than whether some
(artificial) level had been attained. Again, what counts as good practice, as measured
by a rating of 5, might be fixed at a specific point in time, but changed circumstances
will alter this notion considerably; the concept of good practice, or a rating of 5, is a
static notion in a dynamic environment.
5.2.7 Pro-forma approach

The pro-forma approach of the manual was seen in the workshops as too systematised
and rigid: in reality the approach needs to be flexible and cognisant of individual
circumstances. The manual was seen as a "one size fits all", top-down approach,
rather than one that enables the flexibility to respond to the many and diverse
circumstances found in the operations of a university.

The idea of a pro forma also implies that comparisons might be made using the same
pro forma. As stated earlier, the project has found that comparison among universities
in any functional area is very challenging, as they are so different.
5.2.8 Combined workshop comments

Participants in the final combined teleconference workshop felt there was a need to
distinguish between a "how to" process and a "what to" process in a benchmarking
manual. What was needed was not a definition of "good practice" but a process for
achieving good practice or, more appropriately, "better practice". They also believed
it was necessary to avoid a divisive and threatening environment in which to carry
out benchmarking.


6 REVISING THE APPROACH TO BENCHMARKING UNIVERSITIES: LEARNING FOR IMPROVEMENT

6.1 Background

The workshop and discussion process carried out in the six case study universities
revealed that the McKinnon et al. benchmarking manual was not user-friendly and did
not address all stakeholder interests in the university to bring about improvement on
an ongoing basis. In response to this, through the exchange facilitated in the second
round of benchmarking workshops, the following principles were identified as
underpinning an approach to organisation improvement through evaluation:

- leadership, including at the most senior levels, and resources made available for both the review process and the resulting improvement initiatives
- comprehensive dialogue and collaboration across a broad range of relevant stakeholders with an interest in the function needing improvement, both within and outside the organisation (this principle ensures meaningful and purposeful connectivity)
- an attitude and associated process of learning for improvement, based on reflection, the exchange of information and knowledge, and the sharing of understandings and experiences.

Based around these principles, a good program for quality improvement within the
university was seen as comprising the following characteristics:

- a clear understanding of the university's stakeholder expectations in relation to the specific area targeted for improvement and the environment in which it operates
- goals, policies and procedures that are accessible and understood by all relevant staff, students and other stakeholders participating in the process of improvement
- a flexible, holistic process to enable active involvement by relevant stakeholders
- measures of performance for the function, with mechanisms for both internal and external data support (including from non-university comparisons), that are consistent with agreed improvement goals and the changing environment in which the function has to perform
- an agreed recognition by all stakeholders that practice can be improved
- leadership and commitment from senior management for the drive and the resources to assist with an improvement program
- evidence that improvement has resulted
- learning that feeds into continuous improvement on a wide scale.

A generic approach to a comprehensive improvement program, comprising initial
review, strategic planning, reflection, action and evaluation, is presented in this
chapter. These five phases see benchmarking as a holistic and ongoing process
leading to real improvement through learning, connectivity and leadership
commitment. It is an intrinsic and ongoing part of the operating environment, not a
one-off statistical exercise based only on the collection of comparative performance
indicators.
6.2 Phases in benchmarking for improvement

Five phases were articulated as making up one generic cycle of working towards
better practice. A number of sub-elements to each phase might be suggested;
however, this would vary for each functional circumstance.
Figure 6.1 shows these five phases as:

- comprehensively reviewing the current situation and environment as it relates to the targeted function
- undertaking a process of strategic planning targeted at improvement
- a program of implementation with the resource commitment of senior management
- a process of review to establish the degree to which improvement has occurred
- a recognition that learning from the previous phases can lead to further improved approaches in an ongoing process.

The underlying principles of collaboration (or connectivity), leadership and learning
are seen as influencing each of these five phases.
Figure 6.1: Phases for an improvement cycle in university functions

[Diagram: a continuous "learning for improvement" cycle linking Phase 1 (current
situation), Phase 2 (strategic planning), Phase 3 (implementation), Phase 4
(reviewing progress) and Phase 5 (learning for continuous improvement), with the
principles of collaboration, learning and leadership at the centre influencing every
phase.]
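
Read as a process, the cycle in Figure 6.1 is an ordered sequence whose final phase feeds the next pass. The sketch below is our illustration only; the phase names follow the figure, and everything else is hypothetical.

    PHASES = [
        "reviewing the current situation",
        "strategic planning",
        "implementation",
        "reviewing progress",
        "learning for continuous improvement",
    ]

    def improvement_cycle(run_phase, passes: int = 3) -> list:
        """Run successive passes of the five phases, carrying lessons forward.

        In practice the cycle is open-ended; 'passes' only bounds this sketch.
        """
        lessons = []
        for _ in range(passes):
            for phase in PHASES:
                lessons = run_phase(phase, lessons)   # each phase may add lessons
        return lessons

    # Example: a stub that simply records each phase visited.
    trace = improvement_cycle(lambda phase, lessons: lessons + [phase])

The point of writing it as a loop rather than a line is that Phase 5 is not an end state: its output becomes the input to the next pass.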

6.2.1 Phase 1: Reviewing the current situation and environment

As Figure 6.2 shows, the first phase provides a view of the operating environment
(internal and external) impacting on the university with respect to the particular area
in which improvement is being sought. Workshop participants identified that much
data may already be available to portray this environment. In many cases, however,
this data is in neither a simple nor an accessible form, nor in a form that enables an
overview of existing practices.
The purpose of this first phase is, therefore, to gather and make accessible to
stakeholders all available information that facilitates identifying the external and
internal factors at work (drivers and impediments), and the way they shape and
influence the present operating environment for the university and targeted functional
area. This material may include: policy and procedure documents; staff, stakeholder
and student surveys and views; staff recruitment programs; budget implications; and
wider factors and influences. An analysis of this data may highlight gaps to be filled.
There may also be a need at this point for additional input from stakeholders,
including from external groups such as professional and accrediting agencies,
government, business, and local and regional communities.
Participants in the improvement program will need to reach agreement about the
nature of the operating environment impacting on the function being reviewed. At this
point there will need to be a clear statement by senior management of the importance
of the activity and a clear indication of the practical support provided for the
evaluation.
Figure 6.2: Reviewing the current situation (breaking it down)

1. Current situation

Objective:
- An agreed statement by all relevant stakeholders about:
  - the effectiveness of existing policy and practice as it relates to the targeted function
  - the existing internal and external environment (strategic, regulatory, policy, societal, etc.) in which the targeted function sits.

Inputs:
- Information collection concerning the targeted function (e.g. surveys, focus groups, policies, procedures, workshops, etc.).
- An inclusive discussion so as to gain general agreement on the current situation and to identify the impediments, opportunities, drivers, etc. that impact on improvement.

6.2.2 Phase 2: Strategic planning

There is a need to develop a strategic plan of action, along the lines indicated in
Figure 6.3, so that all stakeholders know in which direction they will collectively head
with respect to their actions. Before this can occur, however, there needs to be
agreement among the participating stakeholders about what better practice to aim for.
This may be defined by the participants, rather than being a definition borrowed from
an outside agency or group with little knowledge of the particular circumstances being
reviewed.
This phase is envisaged as an inclusive process involving all relevant stakeholders
(including those who are external to the organisation). It is initially about sharing
understandings and becoming comfortable with the future vision (particular goals,
language, concepts, culture, constraints, impediments and opportunities) as it relates
to each stakeholder's perspective on the targeted area for improvement.

The actual process may vary according to the particular situation: it may be a
facilitated workshop process, a web-based communication process, a questionnaire
approach (less favoured) or some other approach that facilitates an exchange of
perspective and understanding among stakeholders. A written and agreed document
that spells out and justifies areas for action, responsibility, indicators of success and a
timetable for implementation will be an objective of this phase.
Figure 6.3: Strategic planning (breaking it down)

2. Strategic planning

Objective:
- An agreed written strategy to improve existing policies and processes for the targeted function that spells out areas for action, responsibility, timetable, key performance indicators, better practice targets, etc.

Inputs:
- An inclusive process involving all relevant stakeholders that can lead to a common view about goals, definitions, language, culture, concepts, impediments, opportunities, etc.
- Review of Phase 1 information.
- Commitment from senior management to provide resources and to implement.

6.2.3 Phase 3: Implementing actions

The third phase is to make sure the energies and ideas that have been previously
developed and agreed to in the first two phases are put into practice. Support from
senior management will be critical at this stage.

A commitment of resources and leadership support from senior management is
required to reinforce the implementation of the identified and agreed reform areas.
This may involve the allocation of time for otherwise busy staff, training and
development of operating staff, information and awareness programs, and trial
programs. This process should evolve so that the tasks to be undertaken are not extra
tasks but an integral part of identified staff work programs.

Quantitative and qualitative indicators around the reform measures need to be
reviewed and reported on regularly, in a transparent way, to all participant
stakeholders.
Figure 6.4: Implementing actions (breaking it down)

3. Implementing action

Objective:
- The details of the agreed strategy are put into practice in a meaningful way, e.g. staff advised, training and information programs in place, review and recording mechanisms strengthened, performance measures in place, management agreement to ensure the process is carried out.

Inputs:
- Appropriate instructions from the Phase 2 strategy are conveyed to all stakeholder interests concerned with bringing about improvement in the targeted function.
- Stakeholders understand the task.
- Time, guidance, information, training, etc. given as necessary.

6.2.4 Phase 4: Review of progress

The process implemented in Phase 3 needs to show, after an agreed lapse of time, that
improvements are actually being generated and that they are consistent with the wider
changing environment. Therefore, a regular review and reporting (Figure 6.5) of the
changing operational impact of the function, as well as of the surrounding
environment identified in Phase 1, is required to see whether there are new
imperatives that need to be factored into the improvement process.

This new information needs to be reviewed through the forum of stakeholders to
ensure that change is actually taking place in the direction required.


Figure 6.5: Review of progress (breaking it down)

4. Review progress

Objective:
- Collect evidence to demonstrate that improvements are occurring successfully from the original baseline situation identified in Phase 1.

Inputs:
- Baseline data from Phase 1, the strategy plan from Phase 2 and the actions implemented in Phase 3.
- New data collection on the operation of the function, student and staff surveys, feedback from external stakeholder agencies, etc.
- A forum/process for reviewing the newly collected data.

6.2.5 Phase 5: Learning for continuous improvement

A key element of the total process is that there is a point where the experiences gained
in reviewing and reforming existing practices generate new or better understandings
that need to be fed back to ensure improvements continue to occur. Therefore this
phase, as shown in Figure 6.6, should involve a formal feedback process that reflects
upon the entire process (Phases 1 to 4) and highlights the lessons and experiences
gained, which can then be fed back into the process (particularly Phases 2 and 3) to
ensure effectiveness is reinforced.
Figure 6.6: Learning for continuous improvement (breaking it down)

5. Learning for continuous improvement

Objective:
- Identify what has been learnt from implementing Phases 1 to 4 and apply this knowledge as part of a continuous improvement program.

Inputs:
- A feedback process that contains statements about what has been learnt.
- A process to discuss these insights in the light of their cost-effectiveness.
- A process to implement the learning from each of Phases 1 to 4 when appropriate.


6.3 Comments

The implementation of an improvement program along the lines outlined above
requires a significant outlay of time for the participants. However, by seeing
improvement processes of this kind as an investment with a long-term contribution to
viability and competitiveness, university leadership may be more willing to commit
the necessary resources. Similarly, participants will be more willing to contribute
their private time to generate improved results for the work environment and for
their stakeholders.


7 SECTOR IMPLEMENTATION
7.1 Survey evidence

From the survey results, most universities believed the McKinnon benchmarking
manual could be only a limited tool in the organisation improvement process. There
was a view that it should not be used as a process for ensuring standards across the
sector.

Several suggestions were made in the survey for improving the existing manual to
increase the depth and breadth of its uptake among universities in their institutional
improvement agendas. These included making the manual more function-specific in
its detail, tailoring it to the specific improvement needs of the individual institution,
producing an accompanying user guide, establishing a centralised referral service
(including web-based) that would coordinate and disseminate benchmarking
experiences to institutions, and putting in place training programs on how to work
with the manual.
Rather than make any design changes or additions to the existing manual to make it
more accessible, most respondents to the survey believed there was a need for some
form of closer contact and advice to better appreciate, understand and implement the
full benefits of a benchmarking program for organisational improvement. Training
programs and an independent central referral service or guide were suggested as the
main mechanisms that might provide this tailored advice and encourage greater
uptake from the current low levels, as Figure 7.1 shows.
Figure 7.1: Ways to improve the effectiveness of the McKinnon et al. benchmarking manual

[Bar chart: number of survey respondents (scale 0 to 16) favouring each improvement
method: training, more functionally specific detail, tailored to the needs of the
institution, referral service, user guide.]

Source: University benchmarking survey, 2003


7.2 Workshop evidence

The six university workshops and the associated discussion processes also reinforced
the need for closer hands-on guidance to stimulate the uptake of benchmarking. Issues
raised during the workshops that impact on the uptake of benchmarking for
improvement, apart from the senior management commitment mentioned earlier,
include:

- the different needs of different functions within different universities at different times
- the lack of available time to design processes for benchmarking that meet their own circumstances
- the increasing complexity of issues and the external environment impacting on the university
- the need for wide-ranging stakeholder communication and collaboration
- the need for consistency and clarity in language
- the need to see both personal and organisational benefit.

The results from the six university workshops suggest a benchmarking guide by itself
would be insufficient to build momentum for organisational change through
stakeholder collaboration and learning.

The final teleconference involving all six university case studies agreed that the best
means for implementing an enhanced benchmarking regime to generate sector-wide
improvement was a web-based central referral centre. A web page, either attached to
the DEST web page or hosted by some other university sector-wide organisation or an
independent group, could provide case studies of benchmarking applications,
background to the history and use of benchmarking in higher education, links to other
relevant benchmarking sites, and similar. It could also provide feedback to questions
from those seeking to implement their own benchmarking for improvement program.
It was also suggested that, to ensure such a mechanism has high uptake, there may be
a need for some incentive from the government to encourage greater benchmarking
activity among universities.


8 PROCESS EVALUATION
As a final element in this study, each of the six case study universities was asked how
it thought the process of discussion used in the project, as outlined in earlier sections
of the report, worked as a means of better understanding how benchmarking might
occur for the university. This question is important because it provides an insight into
the means of stimulating dialogue and learning across stakeholder interests, whatever
they might be. Most importantly, the process stakeholders were asked to comment
upon goes well beyond the simple "information dump" workshop that our
investigations suggest is more commonplace in the university environment.
Improvement requires involvement, understanding and commitment across a
spectrum of interests working together toward a common goal.
The advantages of the process implemented in this project were seen to be:

- It was clear that the subject warranted attention, especially when it was initiated or supported by key stakeholders.
- It encouraged dialogue among a group of stakeholders who might not ordinarily come together, including those from outside the university, to discuss matters of mutual concern about the university environment and how it might improve.
- The initial workshop enabled those with a concern and even doubt about the subject of benchmarking to find out what others think and gain a better appreciation of how it might work in their situation.
- It enabled time for reflection and for group members to meet informally between workshop sessions to discuss issues of relevance to the project.
- Having a workshop twice in each university was seen as an advantage, as it enabled the group to go beyond simply absorbing a dump of information from the first workshop. Participants were able to make their own contribution by adding their own input based on what they had heard in the workshops, as well as what they knew from their involvement in the university function being focused on.

It is suggested that these processes need to come from within both the university
system at large and the individual institution, so that they have the commitment of
senior management; otherwise they run the risk of all the good work not being acted
on. It is also difficult to fit such time-intensive processes into an already busy
university schedule for staff, where little time is given over to reflection and
improvement planning.
The discussion kit was seen as something that could potentially be quite helpful. We
learnt that the kit needed to be well designed and to include both generic and specific
subject area examples. The generic modules were important in allowing many
participants an opportunity to gain a wider understanding of the subject, but the
eventual need is for something staff can address that is directly relevant to their daily
work tasks. It was suggested that a web version would enable an interactive approach
to stimulating conversation around the issues across a wider group of the university,
rather than being limited to the participants in the workshop.


Another potential aid was the use of an outside facilitator or "critical friend". Such a
resource enables some of the difficult and culture-related issues to be brought out into
the open for discussion.
Finally, the project allowed only around four months' participation in the workshop
and discussion process by each case study university and its stakeholders. This was
not enough time to fully judge the outcomes that can come from a process like this,
although the early indications of commitment to change were very positive.


9 CONCLUSIONS AND RECOMMENDATIONS

9.1 Conclusions

When this project began it had two objectives. The first was to add specific elements
to the McKinnon et al. benchmarking manual dealing with student admission and
student complaint processes, two areas of university responsibility not addressed in
the original publication. The second was to review the use of benchmarking generally
among universities and to suggest how it might become a more effective tool in the
light of the external pressures and changes now impacting on the sector, and the
reforms contained in the government's higher education package.
The project found that the McKinnon manual was not seen very positively as a
benchmarking tool that could assist universities with their improvement agendas. Its
uptake among universities has been low. Where it has been used, it was mostly for
peripheral purposes. Some areas of university activity, such as facilities management,
libraries and technical services, appear to have gained some benefit from the manual
as a framework for their benchmarking work. There is little evidence the manual has
stimulated university-wide organisational improvement, particularly in core areas of
university activity.
Generally, the manual has been seen as too prescriptive, its language confusing and
generally not attuned to the diversity between and within university operations. The
manual takes a "one size fits all" approach that might have some relevance to the
large university but is generally of minor use to others. It was seen as too top-down
and not inclusive of the people, in the university or among relevant stakeholders
outside the institution, responsible for making things happen on a daily basis. By not
building these people connections from the start, the improvement process suffers
from a diminished knowledge exchange and learning environment. Facilitating these
environments in institutions is important for stimulating the creative outcomes that
take the institution to a new level of improvement, competitiveness and viability. The
McKinnon et al. approach was seen as anathema to what a learning organisation
should be, yet learning is what university core business is supposed to be. It offered
no instruction to assist universities to reflect on their functions with stakeholders and
customers.
The manual has in a number of respects added confusion to the subject of
benchmarking. First, universities found the leading, lagging and learning benchmarks
difficult to distinguish in a practical sense. This study has argued that all improvement
ought to be about learning, so there should be no need to make this three-way
distinction. Second, segmenting university activity into 67 sub-elements, while
omitting many others that universities felt should be included or expanded, was a
divisive approach to what should be a collaborative effort across university silos and
with other stakeholders. Third, a self-assessment rating scale of 1 to 5 against a
definition of good practice was seen as not taking the organisation forward on an
improvement agenda. The manual does not answer the question of what a score of 5
means; the reality is that learning about improvement should never be exhausted. Not
unrelated to this is the way the manual is laid out: it is difficult to see a causally
linked improvement process in it. Fourth, the language of the manual was seen as
turgid and off-putting to the wide spectrum of stakeholders who ought to be involved
in processes of organisational improvement.
Chapter 4 of this report provides template pro formas for student admissions and
student complaints, as required by the original project brief. However, all six case
study universities in the project had great difficulty constructing a benchmarking pro
forma along McKinnon lines that would be helpful for their chosen priority area.
More important than any of these concerns about the manual, however, is that for
many participants in this study the manual contributed to uncertainty about reviewing
one's own performance with a view to improvement in the university environment.
There is a concern about revealing performance in case it is penalised, rather than
seeing benchmarking as a process of learning to improve in a collaborative way. In
such an environment, needing to improve is somehow seen as a weakness, rather than
as demonstrating the leadership character of wanting to be better. Benchmarking is
seen by many in universities as an additional task with resource costs, rather than a
natural part of the way one operates and, as such, is treated in a superficial or
peripheral way. Benchmarking, done the McKinnon et al. way, is seen as a tool for
senior management or outside funding and regulatory agencies to "do over" the
organisation, rather than one that can build teamwork.
Nevertheless, despite all of this, the McKinnon manual has at least brought the subject
of benchmarking more to the fore amongst the many things that universities are
required to deal with in establishing themselves as viable and responsive
organisations. The task is to take benchmarking from being perceived as a short-term,
hard-data-driven assessment exercise for regulatory purposes to one concerned with
fundamental improvement predicated on collaboration and learning. An increasing
requirement by society to be involved in the evaluation and review of institutions like
universities will be a result of an evolving global knowledge world.
For all of the reasons stated above, we do not support simply adding elements for
student admissions and student complaints using the McKinnon manual framework.
Nor do we support any other means for enhancing the basic McKinnon et al.
framework to assist universities with their improvement programs. Organisational
improvement is more personal than the template approach advocated by McKinnon et
al. It needs to incorporate leadership, commitment, learning and collaboration. To get
universities to fundamentally improve what they do requires a different approach to
benchmarking than we have so far seen. It needs to be simple and it needs to be
inclusive.
In this project we have adopted an approach to inquiring into benchmarking that
stressed dialogue (learning to think together) among stakeholders, as it was clear this
was a significant missing element from the model put forward by McKinnon et al.
Over a four-month period, two facilitated workshops with a range of university
stakeholders, a discussion kit and a teleconference were used with each of six case
study universities that wanted to improve what they were doing in certain areas.
Ideally, a process of open and mutual inquiry and opinion forming like this should
occur in-house, but it might also gain from some carefully measured outside
assistance, and it should extend over a much longer period than the four months the
project allowed.
Nevertheless, the results achieved in this project and the feedback received from
participants over this relatively short period suggest we were on the right track with
the approach that was put in place. Already a number of participants in the project are
progressing their benchmarking through the learning for improvement approach.

We have therefore proposed an approach to organisational improvement that
comprises five phases of activity: reviewing the current environment impacting on the
area where improvement is being sought; agreeing on a strategic plan to implement
initiatives and on a performance assessment regime; committing to implementation;
reviewing progress; and learning for continuous improvement. We have provided a
generic template for improvement along these lines. Institutions and their
stakeholders, with some outside assistance, should design their own specific programs
for improvement around the generic approach and principles suggested in this report.
Our feeling is that a simple approach to organisational improvement along these lines,
built on principles of dialogue, collaboration, reflection, leadership commitment and
learning, has a better chance of encouraging a whole-of-organisation approach to
improvement. It would also go some way to overcoming the negative attitudes
currently associated with the approach to university benchmarking and the auditing
agenda.
For these reasons we also suggest that access to hands-on learning and activity about
benchmarking for improvement needs to be enhanced to give universities the
confidence to take it on with greater enthusiasm. Web page access, via a central
facility, is suggested. Such a web page might include background and literature about
benchmarking in the university environment and case studies of successful
benchmarking in higher education, as well as provide guidance to those undertaking
benchmarking for improvement.
Increasingly, the Australian Universities Quality Agency quality audit process is
asking universities to explore benchmarking for improvement. We feel the process
outlined in this report could assist this agenda.
9.2 Recommendations

(a) That the McKinnon et al. manual not be revised or enhanced as a tool that can
assist universities with their benchmarking for improvement objectives.
Benchmarking for improvement in universities requires an approach that is more
personal and tailored to individual institution and function circumstances, involves
a wide cross-section of relevant stakeholders, and is based on learning.
(b) That consideration be given to providing direct assistance to those universities that
want to put in place their own programs of organisational improvement based on
principles of learning, collaboration and leadership, following the approach
outlined in Chapter 6. This assistance could be in the form of workshop and
discussion facilitation and a central web-based advisory service.


(c) That the discussion kit compiled for this project as an initial effort be developed
into a more comprehensive tool for either web or hard copy use by interested
universities.
(d) That the approach to university improvement outlined in this report be trialled and
evaluated over a 12-month period in a university situation, as the four-month
period allowed in this project was judged too short to be fully effective.
(e) That the approach suggested in this report be presented as a workshop at a
forthcoming Australian Universities Quality Forum, perhaps including progress
reports from universities that participated in this project, to elucidate responses
across a wider array of institutions and functions.


REFERENCES
Anderson, D., Johnson, R. and Milligan, B. 2000, Quality assurance and
accreditation in Australian higher education: An assessment of Australian and
international practice, EIP Report No. 00/1, Department of Education, Science
and Training, Canberra.
Anderson, D., Johnson, R. and Saha, L. 2002, Implications for universities of the
changing age distribution and work roles of academic staff, Department of
Education, Science and Training, Canberra.
Butcher, J., Howard, P., McMeniman, M. and Thom, G. 2002, Engaging community
service or learning?: Benchmarking community service in teacher education,
Department of Education, Science and Training, Canberra.
Charles, D. and Benneworth, P. 2001, The regional contribution of higher education:
A benchmarking approach to the evaluation of the regional impact of HEI, Centre
for Urban and Regional Development Studies, University of Newcastle upon
Tyne.
Clark, B. 1983, The higher education system: Academic organisation in cross-national perspective, Pergamon, New York.
Clark, B. 1995, Places of inquiry: Research and advanced education in modern
universities, University of California Press, Berkeley.
Clark, B. 1998, Creating entrepreneurial universities: Organisational pathways of
transformation, Pergamon, Oxford.
Coaldrake, P. and Stedman, L. 1998, Australia's universities confronting their futures,
University of Queensland Press, Brisbane.
Commonwealth Higher Education Management Service (CHEMS) 1998,
Benchmarking in higher education: An international review, CHEMS.
Cunningham, S., Ryan, Y., Stedman, L., Tapsall, S., Bagdon, K., Flew, T. and
Coaldrake, P. 2000, The business of borderless education, Evaluations and
Investigations Program, Report No. 00/3, Department of Education, Training and
Youth Affairs, Canberra.
Delfgaauw, T. 2000, The Shell story: A journey towards sustainable development, in
the presentation series From sustainability to Dow Jones, organised by Edmonds
Management, Melbourne, Australia.
Fielden, J. 1997, Benchmarking university performance, CHEMS monograph No. 19.
Fielden, J. and Carr, M. 2000, CHEMS International Benchmarking Club,
Chapter 15 in N. Jackson and H. Lund (eds), Benchmarking for higher education,
The Society for Research into Higher Education, Buckingham.

62

Harman, G. and Meek, L. 2000, Repositioning quality assurance and accreditation in
Australian higher education, Evaluations and Investigations Program, Report
No. 2, Department of Education, Training and Youth Affairs, Canberra.
Higher Education Funding Council for England (HEFCE) 2001, The regional
mission: The regional contribution of higher education, HEFCE, London.
Jackson, N. 1998, Book review: Benchmarking in higher education: Adapting best
practices to improve quality (by J. Alstete), Studies in Higher Education, Vol. 23,
No. 1, pp. 107–109.
Jackson, N. and Lund, H. (eds) 2000, Benchmarking for higher education, The
Society for Research into Higher Education and Open University Press,
Buckingham.
Lund, H. 1997, Benchmarking in UK universities, CHEMS Monograph No. 22.
McKinnon, K., Walker, S. and Davis, D. 1999, Benchmarking: A manual for
Australian universities, Department of Employment, Education, Training and
Youth Affairs, Canberra.
Martin, A. 2003, Universities Quality Agency: 2002 institutional audit reports,
analysis and comment, AUQA Occasional Paper (web address:
http://www.auqa.edu.au/qualityenhancement/occasionalpublications/index.shtml).
Massaro, V. 1998, Benchmarking in Australia, Chapter 4 in Benchmarking in higher
education: An international review, CHEMS.
National Board of Employment, Education and Training 1998, Quality
implementation and reporting in Australian higher education, 1997, Department
of Education, Science and Training, Canberra.
Nelson, B. 2003, Our universities: Backing Australia's future, Commonwealth
Government, Canberra.
Office of Technical Services 2003, A report on the implementation of Technical
Services Benchmarks Project, Griffith University, Brisbane.
Robertson, M. and Trahn, I. 1997, Benchmarking academic libraries: An Australian
case study, Australian Academic and Research Libraries, Vol. 28(2),
pp. 126–141.
Stevenson, S., Evans, C., Maclachlan, M., Karmel, T. and Blakers, R. 1999, Regional
participation in higher education and the distribution of higher education
resources across regions, Department of Education, Training and Youth Affairs,
Occasional Paper No. 99-B, Canberra.
Trow, M. 2000, From mass higher education to universal access: The American
advantage, paper to the International Symposium on the Future of Universities,
University of Newcastle upon Tyne, 28–29 September.


Universitas 21, http://www.universitas.edu.au/members.html.


Urquhart, J., Ellis, D. and Woods, A. 2002, Benchmarking guidelines for university
technical services, Griffith University, Brisbane.
Weeks, P. 2000, Benchmarking in higher education: An Australian case study,
Innovations in Education and Training International, Vol. 37(1), pp. 59–68.
Wilson, A. and Pitman, L. 2000, Best practice handbook for Australian university
libraries, EIP Report No. 00/10, Department of Education, Science and Training,
Canberra.
Wilson, A., Pitman, L., and Trahn, I. 2000, Guidelines for the application of best
practice in Australian university libraries: Intranational and international
benchmarks, EIP Report No. 00/11, Department of Education, Science and
Training, Canberra.


APPENDICES
A.1 University benchmarking survey (to all Vice-Chancellors)
Professor Paul Thomas
Vice-Chancellor
University of the Sunshine Coast
Maroochydore DC Qld 4558

Dear Professor Thomas


DEST University Benchmarking Project
The Department of Education, Science and Training (DEST) is reviewing aspects of
the publication Benchmarking: A manual for Australian universities by McKinnon,
Walker and Davis (1999), and is seeking input from universities on their
benchmarking experiences. Consultants Regional Knowledge Works have been
engaged to assist the Department with the review.
The review aims to add specific elements to the manual dealing with university
admission and complaint processes for students and to explore prospects for
enhancing the usefulness of university benchmarking generally in the light of current
and emerging policy parameters.
The project is guided by a Steering Committee comprising representatives of DEST,
the Joint Committee for Higher Education, AVCC, and HERDSA. Presentations
about the project have been made by DEST to the AVCC Committee for DVCs and
PVCs Research, and by the consultants at the recent AUQA Forum in Melbourne.
Following a general invitation to all universities to take part in the project as case
studies, six are now actively involved. The case studies explore a number of different
university functional responsibilities from a benchmarking perspective, along with
student admission and complaint processes, aimed at building a culture for
improvement and partnership.
As part of the review, we are keen to find out the nature and extent of university
experiences with benchmarking and in particular their experiences in working with
the McKinnon Manual. In this regard we are aware of work carried out last year by
CAUL which paints a patchy picture of use of the Manual. We are also aware of some
function-specific applications of the Manual of note, such as in the areas of technical
support, facilities management, and libraries at various universities.
Importantly, we are also interested in whether universities see merit in the further
development of the McKinnon Manual as a tool for institutional/functional
improvement in the current operating environment, and where they believe these
developments and improvements might best be made.


The attached survey asks some particular questions about university benchmarking
experience that would be helpful for the project. The AVCC have supported the
gathering of this information.
The initiatives flowing out of the Government's higher education policy framework,
Our Universities: Backing Australia's Future, as well as observations made in a
number of completed audits undertaken by AUQA, highlight a need to move quickly
in exploring options for supporting universities to develop their benchmarking
capability. The present project will provide input into this.
The benchmarking review project is due for completion towards the end of
November. We are therefore keen to learn about your experiences by Monday 20
September this year.
Should you require further information, either about the project or about this
information request, please contact either myself or Dr Claire Atkinson, Quality Unit,
DEST, ph: 02 6240 5397, email: claire.atkinson@dest.gov.au
Yours sincerely

Dr Steve Garlick
Director
20 August 2003


DEST Benchmarking Project


1. Please summarise your University's experience with (tick relevant box):

a) Benchmarking generally:
   Not used [ ]        Used [ ] (provide detail below)

b) The McKinnon benchmarking Manual:
   Not used [ ] (go to Q. 4)        Used [ ] (provide detail below)

2. How helpful has the McKinnon Manual been as a:

a) Management improvement tool (explain):

b) Partnership improvement tool (explain):

c) Other uses (explain):

3. Thinking about the McKinnon benchmarking Manual, what would you


change to make it a more effective tool for universities under the following
headings?

68

a) Made more functional-specific and expanded in its detail


Yes
No

b) Tailored to the specific improvement needs of the individual institution:


Yes

No

c) Production of an associated user guide:


Yes

No

d) A referral service to coordinate benchmarking experiences nationally and provide


ongoing advice, information and support, including about international
experiences:
Yes

No

e) Availability of training programs to assist institutions implement their own


benchmarking initiatives:
Yes

No

4. From your perspective, what are the main purposes for benchmarking in the
university environment? (mark relevant boxes)
a) University-wide improvement
b) Reflection and strategy planning
c) Building organisation-wide commitment to goals
d) Substantiating achievements
e) An aid in transparent planning
f) Particular functional area improvement
g) Opportunity identification
h) Partnership strengthening
i) Dissolving boundaries within and between universities
j) Strengthening service support links
k) Building performance assessment into learning and teaching
l) Research performance
m) Research training performance
n) Keeping ahead of competition
o) Enabling internal performance-based resource allocation

69

p) Obtaining external performance-based funding


q) Copying-in best practice
r) League ladder assessment
s) Staff development
5. Other Comments

This is the end of the questionnaire. Thank you.


A.2 First workshop program (example)

Draft Swinburne workshop program


1. Objectives of the workshop

•  To share understandings about the issues and directions for university benchmarking generally.
•  To review the current McKinnon benchmarking approach for its effectiveness as a tool for the University in support of its learning and teaching role, in the light of (a) the learning and teaching agenda highlighted in the post-Crossroads reforms for HEIs; and (b) the specific connections the University is seeking to make through its engagement with students, jobs, service delivery and those in community institutions and enterprises. In particular, this could focus on monitoring performance, identifying 'good practice' demonstration in teaching and learning, and the dissemination of outcomes in this regard.
•  To make specific recommendations relating to benchmarking university admissions and complaints functions and processes (for national and international students), and how a set of benchmarks might look for Swinburne in this area.

2. Program

13.00  Workshop Introduction (Facilitator: Mr Geoff Pryor, Regional Knowledge Works)
13.10  University overview (Professor Barbara van Ernst, Deputy Vice-Chancellor, Swinburne University, Lilydale)
13.15  Project overview (Dr Steve Garlick, Regional Knowledge Works)

Plenary session - brainstorming
13.25  Question 1: What are the trends and changes over the next 10 years regarding benchmarking in the learning and teaching agenda highlighted in the post-Crossroads reforms for HEIs?
13.40  Break (20 mins)
       Question 2: What issues have arisen for Swinburne as a result of its benchmarking efforts?

Small groups - in-depth work
14.00  Question for Groups A and B: Focusing on Swinburne student admissions and complaints processes (including for international students) as an example, what is required for an effective Swinburne benchmarking regime?
14.40  Plenary session - group reports

15.00  Question for Group A: What is an effective Swinburne benchmarking regime for the learning & teaching objectives of the University in relation to the post-Crossroads reform agenda?
       Question for Group B: What is an effective Swinburne benchmarking regime for the learning & teaching objectives of the University in relation to its engagement relationship with the community?
15.50  Plenary session - group reports

16.10  Plenary session: Learning Circle Kit explanation and discussion; next workshop date
16.30  Finish

A.3 Second workshop program (example)

Sunshine Coast follow-up benchmarking workshop program


Thursday 16 October 2003, 10.00am to 1.00pm

1. Objectives of the follow-up workshop

•  An almost completed benchmarking program that could be taken forward within the University of the Sunshine Coast as it relates to student admissions and complaints processes, teaching and learning, and regional development activities.
•  An agreed pro-forma approach to implement the above program.
•  A generally agreed understanding of how benchmarking might best be applied more broadly within USC to achieve these objectives, including what impediments and opportunities need to be addressed and pursued in the organisation to give effect to them.
•  Lessons for the DEST project in relation to the effectiveness of the McKinnon benchmarking approach as a tool for the University in fostering improved policy and practice.

2. Program

10.00  Workshop Introduction (Facilitator: Mr Geoff Pryor, Regional Knowledge Works)
10.05  Project overview (Dr Steve Garlick, Regional Knowledge Works)

Plenary session
10.10  Presentation by the group on discussions held since the first workshop and the proposals that have been identified.
       The group has been furthering its understanding of the application of benchmarking in relation to student admissions and complaints, teaching and learning, and regional development at USC with the assistance of the Discussion Kit. The group has been identifying ways benchmarking might be made a more explicit part of University improvement in these areas, and what issues need resolving in making this effective.
10.25  Plenary discussion on the presentation

10.45  Small groups: consider the draft benchmarking proposals that have emerged from the group in the discussion process.
       What are the merits of the benchmarking proposals for university performance improvement? What steps need to be undertaken for the benchmarking initiatives to be implemented?
11.30  Plenary presentation by small groups

11.50  Small groups: (a) put together a draft pro-forma approach to benchmarking student admission and complaint processes, teaching and learning, and community engagement for USC, and identify the various elements that should be included in it; (b) identify the steps that should comprise a strategy to implement the benchmarking framework.
12.30  Plenary presentation by small groups

12.50  Review of project, including feedback from participants on the use of the two rounds of workshops and the discussion kit process at the University of the Sunshine Coast.
1.00   Next steps and finish


The combined case study workshop


A.4 Discussion kit (example)

DISCUSSION CIRCLE KIT


University Benchmarking Project

Swinburne University of Technology

Stage One

Produced by Regional Knowledge Works
for the
Department of Education, Science and Training


How to use this Kit


The purpose of this Discussion Kit is to assist groups in the University to focus on
important issues to do with the way they benchmark their various activities. This
particular Kit has been tailored to assist participants in the Department of
Education, Science and Training (DEST) benchmarking project to develop their
responses to issues specifically related to their uptake of benchmarking practice.
The issues raised for discussion come from the results of the workshop held at
each of the participating universities in the project, and from the material available
in the McKinnon et al (1999) publication Benchmarking: A manual for
Australian universities.

Introduction    A succinct discussion of the Discussion Circle approach, with hints on how to make group discussion most effective.

Module 1        Provides background to the project: its origins, design, implementation and expected outcomes.

Module 2        Addresses broader benchmarking issues and acts as a potential vehicle for subsequently considering specific university issues.

Module 3        Addresses University admissions procedures for both national and international students.

Module 4        Addresses University complaints procedures for both national and international students.

Module 5        Makes specific recommendations relating to benchmarking university admissions and complaints functions and processes (for national and international students), and how a set of benchmarks might look for Swinburne in this area.

Module 6        Makes specific recommendations relating to benchmarking university teaching and learning functions and processes, and how a set of benchmarks might look for Swinburne in this area.

Module 7        Makes specific recommendations relating to benchmarking university regional engagement functions and processes, and how a set of benchmarks might look for Swinburne in this area.

Attachment      Contains the workshop outcomes.


Introduction
Purpose of the Discussion Circle Kit
This Discussion Kit comprises a number of modules with tasks designed to
promote small group discussion and participation around issues relating to the
University Benchmarking Project that are relevant to the circumstances of the
individual university. It is a task-oriented guide to facilitate dialogue among
stakeholders and the collection of additional information.
The initial modules contain the broad issues which have been identified by DEST
as important ones to think about. Subsequent modules deal with issues
considered as being especially relevant to each university.

Making the most of a Discussion Group


How do you make your Discussion Group work? People in your group are to
discuss benchmarking issues. The issues raised will be catalysed by some key
questions found in later modules.
One objective is for participants to learn from one another by sharing information
about their experiences. Another is to develop group views on benchmarking in your
university.
This kit is not dependent on experts although they may be part of a group.
Everyone in the group has something to contribute.
Groups may progress at their own pace, bearing in mind the broad timetable
agreed to at the initial workshop.

Starting the ball rolling


Tips for facilitators or coordinators
The group will need to have a facilitator (coordinator or 'animateur'), and the
information which follows aims to help them start the ball rolling. The role of
facilitator is one to share around. In the first instance the facilitator may be the
person who gets the group together.
A facilitator of a group helps the group clarify what it wants to focus on and
then helps keep discussion productive. Facilitators are not expected to be
'experts' or know more than others in the Discussion Circle. If you become a
facilitator and you do have expertise in some of the topics covered in this kit, be
careful not to be drawn into the role of teacher.

Preparation
As a facilitator, or the person first getting the group together, it is important to be
organised and familiar enough with the issues to help discussion flow. Going
through the material beforehand and thinking a little about it will help. Remember
that some of the points below will apply more to larger groups than to a small
group of colleagues.
Your job as facilitator also includes coordinating and stimulating (but not
necessarily doing) the practical organisation - making sure the group has what it
needs for the session (e.g. photocopies of the relevant section of the kit, pens
and butcher's paper if you plan to use it for taking notes) and encouraging
dialogue.
As a facilitator, you play a vital role in helping the group work well together - for
example, setting a positive tone and letting others have their say before
expressing your own opinions. The group may decide to share the role of the
facilitator, so everyone has a chance to develop their skills in this area. If so,
make sure people read this guide before they start to facilitate.
There are three key points to determine at your first meeting:
the group's priorities for discussion;
how much time people are prepared to spend on the group in the first
instance; and
who will record what is said and how and when it will be fed back to the
project team.


Module One
Background
Genesis
This project is part of a review being undertaken by the Department of Education,
Science and Training (DEST) of the McKinnon, Walker and Davies (1999)
publication Benchmarking: A manual for Australian universities:
www.dest.gov.au/archive/highered/otherpub/bench.pdf
The manual is an important tool for Australian universities to measure and
improve performance. Since its production, there have been changes in Australia
and internationally that make it necessary to consider the currency of the
document. To ensure its ongoing relevance in the rapidly changing global
educational environment, a revision of the manual and of the implementation of
benchmarking initiatives is now an important priority.
A quick environmental scan suggests that a range of national and international
activities have significance for updating the current manual. While some of these
may simply signal the need to unpack a broad benchmark described in the manual,
others challenge the way particular university functions are portrayed within the
benchmarking framework.
Among the issues that need consideration in updating the manual are:
•  the critique of the current benchmarks
•  recent relevant research and activities related to the university benchmarking agenda
•  the experience of those who have been using the benchmarks
•  the challenge to identify stronger leading indicators
•  links to the Government's objectives and priorities for higher education.
While uptake nationally of the McKinnon Benchmarking Manual generally appears
patchy, there is now greater awareness by universities of benchmarking as a
management tool. Some areas within universities have developed aspects of the
Manual to a greater degree to suit their specific needs. There have also been
requests for enhancements to the Manual in the specific areas of complaints and
admissions procedures for domestic and international students that were earlier
omitted (see Chapters 7 and 10 of McKinnon et al), and in the area of community
engagement.
This project is a review which aims to: (a) add specific elements to the Manual
dealing with university complaints and admissions procedures for domestic and
international students; and (b) explore prospects for generally enhancing the
usefulness of the Manual in the light of the changed environment for universities,
university diversity, and university benchmarking experiences. The report
produced from responses under aim (b) will assist DEST to decide whether to
undertake a more extensive stage two update of the Manual.


Project design
This project is being implemented at two levels. At one level, general
information is being collected about university experiences with
benchmarking in Australia and internationally and in using the McKinnon
manual. At another level, a process of self-exploration is being
undertaken in collaboration with six universities through a series of
workshops and discussions. This kit aims to assist this process.
Each of the six case study universities in the project will participate in a
four-stage process of knowledge exchange. The four stages include:
An initial workshop of three to four hours that explores and gains
agreement amongst the stakeholders to the design characteristics
of an effective benchmarking framework for designated university
functions.
A period of information collection and resolution formation to
assess university performance and to identify impediments. This
period will last around four to six weeks. The Discussion Kit is
particularly designed to assist with this phase of the project.
A second workshop to agree on the performance assessment and
to identify and agree where improvement can be made.
A third workshop comprising representatives from the six
participant universities that will identify common issues across the
sector relating to university benchmarking performance.
The project includes an opportunity for specific university objectives to be
addressed. These will vary according to the particular issues of concern to the
university. Thus this kit is also to be used by groups to cover the prime interests
of the university rather than the more general issues of McKinnon et al, although
responses are being sought from each participating university on these issues as
well.

Project implementation
The table below identifies the six universities participating in
the project. They are:
1. Curtin University of Technology
2. Griffith University
3. Monash University
4. Royal Melbourne Institute of Technology
5. Swinburne University of Technology
6. University of the Sunshine Coast

Table 1    Universities participating in the Benchmarking Project

University/functional area and area of focus:

Curtin University of Technology
The focus here is to identify benchmarking processes, 'good practice' definitions and current performance assessment in the provision of research office support to faculties and the senior executive, in relation to research policy, grants, statistics, publications, senior officer advice, etc.

Griffith University
A particular focus here is on student admissions and complaints processes, given the size of the student population, the high proportion of international students, and the multi-campus nature of the University.

Monash University
There were two areas of focus for the project at this university:
1. Approaches to more efficiently handle complaints and grievances raised by domestic and international students.
2. Approaches that will enhance consistency and timeliness in policy, procedures and practices relating to student examination and evaluation.

RMIT
The focus here is on relationships the Community & Regional Partnerships Group is fostering between the University and the local and regional communities in which RMIT has a presence. The Office is already in the process of developing performance-based indicators of these relationships, and this project could assist in further enhancing those indicators. The approach could either be specific to a particular location (e.g. Hamilton/Grampians, North Melbourne, Inner Melbourne or East Gippsland) or be more generic, drawing on these areas to give specific focus to what is developed, in keeping with the work of the Community Indicators Working Group.

Swinburne University of Technology
There are two areas of focus:
1. A general look at how benchmarking could be used effectively as a tool to assist the University to position itself in relation to the learning and teaching agenda highlighted in the post-Crossroads reforms for HEIs. In particular, this could focus on monitoring performance, identifying 'good practice' demonstration in teaching and learning, and the dissemination of outcomes in this regard. A specific examination of how learning and teaching at the University can relate to its "learning town strategy" and engagement with students, jobs, service delivery and those in community institutions and enterprises.
2. A specific look at admissions and complaints processes and how a set of benchmarks might look for Swinburne as a metropolitan university with a decentralised campus administration.

University of the Sunshine Coast
There are four areas of interest here:
1. A general approach to the way benchmarking might assist in improving the University's performance across various functions, taking note of those functions the University might see as requiring particular attention, consistent with emerging issues in higher education institutions as a result of the post-Crossroads policy environment.
2. A specific look at the issues of admissions and complaints from the perspective of a smaller and newer university that sees its connections with the local community as being particularly important for its future.
3. The University's regional development objectives.
4. Positioning the University in the context of the learning and teaching agenda of the Nelson reforms.


Expected outcomes
Project-specific outcomes

In keeping with the general objectives of the project there will be two sets of outcomes:

•  supplementary data in the form of new benchmarks for university complaints management processes and admissions procedures for national (McKinnon Chapter 7, Student Support) and international (McKinnon Chapter 10, Internationalisation) students; and
•  a draft scoping statement for a complete revision of the current McKinnon benchmarking manual, supported by a research paper arguing the case for revision and the principles, strategies and processes that need to be pursued.

This Kit is a tool to assist in collecting the data that will address these two objectives.

Swinburne University outcomes

Each participating university may have its own areas where it wishes to consider
benchmarking practice. This kit allows for these circumstances, and this section will
be used to spell out the desired outcomes required by each participating university.
For Swinburne University, the outcomes will be:

•  To make specific recommendations relating to benchmarking university admissions and complaints functions and processes (for national and international students), and how a set of benchmarks might look for Swinburne in this area.
•  To make specific recommendations relating to benchmarking university learning and teaching functions in a post-Crossroads environment, how a set of benchmarks might look for Swinburne in this area, and how this relates to its community engagement activities.

NOTE: It will be useful for all participants in these Circles to have access to
copies of the McKinnon et al report. Refer:
www.dest.gov.au/archive/highered/otherpub/bench.pdf


Module Two
Broad benchmarking issues
This module addresses broader benchmarking issues and acts as a potential vehicle
for subsequently considering specific issues for the University. The general issues are:
•  benchmarking experiences and initiatives undertaken by Australian universities; and
•  challenges associated with designing and implementing effective benchmarking regimes.

Background
In Australia the adoption of benchmarking has become more prominent on the back
of the emergence of knowledge as a significant driver of global competitiveness.
Government has also required increased quality in teaching and learning, greater
applicability in research undertakings, greater efficiency in institutional operation,
and greater prudential responsibility for the public funds that are provided.
Consumers are also expecting more from their universities: students as investors in
their own human capital and, more recently, communities looking for a contribution
from universities through their presence as community organisations. Stakeholders are
less interested in how 'their' institution rates against others, and more interested in
the increasing returns it can generate over time.
The report Universities at the Crossroads (2002) has stressed the importance of
monitoring and continuous improvement in the way universities deliver
improvements in quality, efficiency and probity in a way that involves external
scrutiny from stakeholders and regular reporting of outcomes to the community. A
benchmarking regime can provide a framework for this to occur.
Benchmarking has two objectives. First, it is used as a means of evaluating the
quality and cost of organisational activities, practices and processes in the context of
industry-wide 'good' or 'best' practice. Second, it can be used as a diagnostic
management tool to achieve continuous improvement in the organisation over time.
While a number of universities have pursued the former objective in various ways,
considerably fewer have chosen to adopt benchmarking as a longer-run management
tool for continuous improvement.
Organisations such as the National Association of College and University Business Officers (NACUBO),
the Commonwealth Higher Education Management Services (CHEMS) club, the
HEFCE Good Management Practice programme, the EMSU, the Association of
Commonwealth Universities (ACU) and others have all facilitated higher education
benchmarking in one way or another. Several Australian universities have been part
of some of these initiatives.

85

Two broad issues appear to bear on the ongoing adoption of a benchmarking
regime by universities as a continuous-improvement management tool, as opposed to
the simpler task of identifying performance indicators and copying-in 'good practice'
from other institutions and other locations.
The first issue is that there appears to be no consistency of approach, or clarity of
purpose in method and process, in the application of benchmarking objectives,
making comparison, even within the same national system of higher education, a
difficult exercise. There is also concern about the methods for defining 'good' or
'best' practice, consistency in understanding the categories of activity to be
benchmarked, data collection methods, and performance rating methods.
The second broad issue is that there has been little institutional leadership
commitment to ensure 'good practice' becomes embedded in the way the
organisation does its business.
Highlighted examples of good practice are meant to build enthusiasm and confidence
to implement and embed self-designed and initiated solutions to meet particular
institutional circumstances and to generate uniqueness, rather than bring about a
convergence to institutional homogeneity. There needs to be greater management
commitment to pursue continuous improvement through a measurement regime.
In Australia, the benchmarking manual prepared by McKinnon et al (1999) was
designed to identify "the most important aspects of contemporary university life in
changing times and to find ways of benchmarking them." (p.1). The manual
identifies 67 benchmarks in nine areas of university activity The manual identifies
'good practice' performance descriptions and sets out an approach to assess outcome
(lagging), process (drivers), and rates of change (learning) achievement as a
'balanced scorecard' approach to measuring university activity.
The McKinnon manual represents an attempt to bring a standardised approach to
benchmarking practice, and in this respect it responds to the first goal of
benchmarking, i.e. identifying performance. However, it appears that the take-up of
the manual has been patchy to date with only a handful of universities trialling the
manual and several others trialling aspects.
Alstete (1995) provides a useful starting description of benchmarking:
"Benchmarking involves the analysis of performance, practices and
processes within and between organisations and industries in order to obtain
information for self-improvement. The methodology enables individuals,
organisational units and whole organisations to 'think outside the particular
boxes' which they inhabit, and to compare and question, in a structured and
analytical way, their own activities with those of similar or dissimilar
enterprises." (Jackson 1998. p. 108).
At one of our workshops, a participant proposed the following aspects of
benchmarking, in no particular order. Benchmarking is:
1. An opportunity to establish a culture of critical reflection
2. An opportunity to evaluate current procedures and strategies and to compare
these with others and with the standards set by oneself
3. An opportunity to look for opportunities for improvement
4. A mechanism to celebrate good practice where it occurs.


TASKS

Task 1: Discuss and determine whether the group feels the definitions or comments above assist in their understanding of the proposed subject of benchmarking at their University.

At an AUQF workshop held in Melbourne in June 2003, the subject of university
benchmarking was discussed. In one of the plenary sessions the following question
was asked: "Is the benchmarking of performance or of process?"
Another question was whether the process of benchmarking is only at a point in
time or is part of a continuous process.
A further question relates to performance comparison. Could it be against other
universities with similar characteristics in Australia or overseas, should it be against
other non-university enterprises, or should it be against some self-determined goal
based on the circumstances and life-stage of the university at the time?

Task 2: How would your group respond to the questions asked above?

At the AUQF workshop mentioned above, other points about benchmarking were
made, as follows:
•  A key issue is whether the McKinnon et al document is supposed to be prescriptive or advisory; there is uncertainty about this. There are built-in assumptions within the document.
•  It is important that benchmarking is not just a process set up for DEST to use.
•  There is a need to foster a culture of discourse and debate about benchmarking within the university.

Task 3: Briefly summarise the situation in your University with respect to benchmarking practice.

Task 4: What does your group believe are the challenges associated with designing and implementing effective benchmarking regimes in your University?

McKinnon et al say at page 3 of the report Benchmarking: A manual for Australian
universities:

    All too often outputs (or outcomes) measuring the success of past
    activities have been the only performance measures used. While such
    lagging indicators provide useful information, there is also a need for
    leading indicators, that is, measures of the drivers of future
    performance, and learning indicators, measures of the rate of change
    of performance.

Task 5: Does the group believe the concepts of lagging, leading and learning indicators are useful to any benchmarking process?

Task 6: McKinnon et al have developed a pro-forma framework for reporting benchmarking performance. A copy of this framework has been reproduced below for one of McKinnon's benchmark sub-categories.
The task is to review this pro-forma and its structure for its usefulness when applied to the situation of your University.

Task 7: McKinnon et al have identified nine major categories and 67 benchmarking sub-categories for university performance assessment. The nine categories cover governance, planning and management; external relationships; financial and physical infrastructure; learning and teaching; student support; research; library and information services; internationalisation; and staffing.
The task is to review these categories and sub-categories as to their relevance for your University, and to identify additional or replacement elements.

Task 8: McKinnon et al provide a performance rating scheme ranging from a low level of one through to a high level of five, representing performance against the identified 'good practice' benchmark.
The task is to decide whether this represents a flexible enough scheme, given the circumstances for benchmarking at your University.

Task 9: In Task 8 above, the words 'good practice' were used. How would your group determine what is good practice?


Example of a McKinnon pro-forma

Benchmark 7.1: Student Services

Area:     Student services
Element:  Student administrative services (see also Benchmark 3.6)
Type:     Lagging

Benchmark rationale:
Every university needs efficient core student administrative services covering
enquiries, admission, progression, fees and other dues, graduation, and scholarships,
which are oriented to student service and which are efficient and economical.

Sources of data:
The data will come from test audits of the speed, accuracy and comprehensiveness
of responses to personal, telephone and written enquiries, and of the reporting of
data needed for student administration and reporting purposes. Surveys of student
opinions of student administrative service attitudes and responsiveness.
Good practice:
Administrative staff service attitudes and competencies, characterised by speedy,
accurate and complete answers to enquiries, both external and internal, and further
follow-up which ensures complete enquirer satisfaction, whether from potential
students or relatives, current students, past students, or staff members. Sufficiently
efficient processes at enrolment, graduation and other peak times to avoid long
queues and delays. Student services linked efficiently to financial services for prompt
acknowledgement of payment of fees, billing etc. Prompt provision within the
university of class lists, room allocations, examination requirements and other
student data ensuring efficient student administration. Sufficiently modern hardware
and software to provide immediate on-demand responses about individual students
and groups of students to those who need to know (that is, for students themselves,
course coordinators, heads of departments and university executives).
Good practice will also provide automatic computerised student data reports and
flagging of problems regarded by the university as important (e.g., overdue
payments, particular student success records, students not meeting progression
requirements, classes not meeting minimum enrolment requirements, mismatches of
room allocation and class sizes, etc.).
Levels:

Level 1:  Unfavourable student opinion of services. Limited, unresponsive services. Limited-function hardware. Slow, non-comprehensive services. No benchmarks. Spasmodic monitoring without comparisons. Occasional adjustments but not systematic.

Level 3:  Equivocal student opinion of services. Waiting times. No systematic monitoring or follow-up of service levels. Limited hardware functionality. Limited benchmarks. Regular monitoring and regular adjustments.

Level 5:  High student opinion of service. Low waiting times. Systematic tracking and follow-up of enquirers. Full-service computers with modem availability. Comprehensive benchmarks of the quality of services. Use of objective standards for monitoring all key services.

Self-assessment:
Check assessment:
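
The pro-forma above is, in effect, a small structured record: an identifying number, an area and element, an indicator type, and ratings on the one-to-five scale. For groups that want to tabulate their ratings across several benchmarks, a minimal sketch of such a record is given below. It is an illustrative aid only and is not part of the McKinnon manual; the field names, the gap calculation and the example entry are assumptions chosen simply to mirror the pro-forma layout.

    # Minimal sketch of a record mirroring the McKinnon pro-forma layout.
    # Illustrative only: the field names and the example entry below are
    # assumptions, not part of the McKinnon manual itself.
    from dataclasses import dataclass

    @dataclass
    class BenchmarkRecord:
        number: str            # e.g. "7.1"
        area: str              # one of the nine McKinnon categories
        element: str           # e.g. "Student administrative services"
        indicator_type: str    # "lagging", "leading" or "learning"
        self_assessment: int   # rating on the 1 (low) to 5 (good practice) scale
        check_assessment: int  # independent check of the self-rating

        def gap_to_good_practice(self) -> int:
            # Distance between the self-rating and the level-5
            # good-practice description.
            return 5 - self.self_assessment

    # Example: recording an assessment against Benchmark 7.1 (Student services).
    record = BenchmarkRecord(
        number="7.1",
        area="Student services",
        element="Student administrative services",
        indicator_type="lagging",
        self_assessment=3,
        check_assessment=3,
    )
    print(record.gap_to_good_practice())  # prints 2: two levels below good practice

Kept in a form like this, it is straightforward to list, for each functional area, the benchmarks sitting furthest from the level-5 description and so to prioritise improvement effort.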

Module Three
Admissions procedures for national and international students

This module addresses issues associated with University admissions procedures for
both national and international students.

Background
The Commonwealth Department of Education, Science and Training (DEST) identified
gaps in the McKinnon Benchmark manual with respect to the management of
admission and grievance procedures. The Department has commissioned this
Benchmarking Project to explore the development of benchmarks on university
complaints management processes and university admissions procedures, for both
national and international students. Issues such as admission practices are currently
emerging as concerns for several western governments. For example, the UK
Government recently asked a team to identify good practice in university admissions
with a view to subsequently issuing a statement of principles about admissions, for
adoption by universities.
At the recent AUQF workshop held in Melbourne in June 2003, the subject of
benchmarking was discussed. In responding to a question regarding the benchmarking
of university student admissions, participants offered a number of reasons why
universities should benchmark (with some editing to make the points clearer):
•  In support of a university strategy
•  A gap in management has been identified
•  To differentiate the university
•  To learn about the situation at the university
•  To improve the existing processes in the university
•  To ensure integration into existing systems within the university
•  To support key performance indicators already in place
•  To incorporate benchmarking into the strategic plans of the institution
•  As a feedback loop

Tasks follow.


TASKS

Task 1: How would your group respond to the question of why benchmarking should be implemented in your University?

In one of our workshops the comment was made that, in terms of the student admission process, "commencement starts very early".

Task 2: Do you agree the comment above is an issue with respect to student admissions?

In another of our workshops the following question was asked when considering student admission benchmarking processes: "What principles do we use in setting and applying benchmarks?"

Task 3: How would your group respond to the question above?

Task 4: What are the parameters for defining a 'good practice' benchmark regarding admissions procedures at Swinburne University for both national and international students?

In small group discussions regarding the collection of data, participants suggested there were many potential sources of data that might be useful as part of a process for developing a benchmark in this area of student admissions.

Task 5: What are the data sources (formal and informal) that might help in making an assessment of current performance against the benchmark?

Task 6: Using the answers to the two previous questions, can you develop a pro-forma for Swinburne University along the lines of that supplied by McKinnon?

Task 7: Can you identify specific differences between managing national and international students in this benchmarking issue?


Module Four
University complaints procedures

This module addresses issues associated with University complaints procedures for
both national and international students.

Background
At the recent AUQF workshop held in Melbourne in June 2003, the subject of
benchmarking university complaints processes was discussed. One report considering
this issue included the following list of points. [Note: some editing has taken place to
make the points clearer]:

•  What are we discussing - an audit?
•  What is a complaint? More than just being aggrieved!
•  What are the assumptions in such a process? E.g. not all students are on campus!
•  We will not know which students miss out. What is the informal process in action? It might well be operating successfully!
•  Who is the system for? For example, is it for students or for learning outcomes, to give an idea of course experience? Are we dealing with process or outcomes?
•  Today, these issues are getting attention because of legal implications and impacts, and this in turn reflects a changing system.

Tasks follow.


TASKS

Task 1: The group could review these statements and come to their own conclusions.

Other comments from the same AUQF workshop about benchmarking a complaints system were as follows:
•  How do we set up good systems? What is a good system characterised by?
•  It is not just a list of factors but a systemic approach, e.g. capture the complaint, record it, correct the problem.
•  Qualities of a good system include: complaints are heard; the issue surfaces and is acted upon; the system is transparent, accurate and accessible; it contains natural justice; it has built into it the separation of roles; it is right, consistent, efficient and dynamic; and it is linked in as part of an overall effective system.

Task 2: What are the parameters for defining a 'good practice' benchmark regarding University complaints procedures?

Task 3: What are the data sources (formal and informal) that might help in making an assessment of current performance against the benchmark?

Task 4: Using the answers to the two previous questions, can you develop a Swinburne pro-forma along the lines of that supplied by McKinnon?

Task 5: Can you identify specific differences between managing national and international students in this benchmarking issue?


Module Five
Specific Swinburne issues
Admissions and complaints

This module follows up the issues of benchmarking university admissions and
complaints functions and processes (for national and international students), and
proposes how a set of benchmarks might look for Swinburne University in this area.
Note that Modules Three and Four above provide some core materials on these
subjects, which were partially covered in the August benchmarking workshop.

TASKS

Task 1: We suggest that the group review Modules Three and Four to ensure that they have covered the general issues of benchmarking raised in them.

What follows are specific issues drawn from the August workshop to enable a more
detailed drilling-down process to take place.
Among responses to the initial workshop questions regarding future trends and
changes over the next 10 years, and issues which have arisen for Swinburne as a
result of its benchmarking efforts, a number of related points were raised about the
context in which Swinburne will operate in attracting students. These included the
following:
•  (The general) University stranglehold on pre-university education will change
•  (Swinburne is) looking at the transition from secondary education to first year students
•  Word-of-mouth (among prospective students and their parents about the University) is significant
•  Recognition of prior and concurrent learning will grow; this requires links to others, including the private sector
•  Collaboration with other organisations and sectors will increase

Task 2: The group might flesh these thoughts out and develop a coherent statement as to the environment in which Swinburne believes it will operate over the next 10 years in attracting students.

In considering future trends and reflecting upon past experiences in the plenary
session of the August workshop, the comment was made that "administration
system challenges will increase".

Task 3: The group could elaborate upon what these future challenges will be for Swinburne, bearing in mind such later comments as "understand the limitations of technology: recognise systems might not always do what is expected; things outside the norm do exist".

In the early plenary session of the August workshop, a number of comments were
made regarding benchmarking. Some of these are:
•  The definition of benchmarking is complex
•  (What is) comparative reflection compared to benchmarking?
•  What are we going to benchmark against?

Task 4: What responses would the group provide to the issues raised in these three comments when applied to student admissions and complaints for Swinburne?

In response to the question "Focusing on Swinburne student admissions and
complaints processes (including for international students) as an example, what is
required for an effective Swinburne benchmarking regime?", the following is one
small group's summary.

Understanding of goals and customer expectations
1. Clear and explicit policies and procedures
   a. Open to revision and refinement
   b. Communicated to staff and students
   c. Used in training staff through accessible techniques such as manuals
2. Standards developed
   a. Some idea of what constitutes good practice: know what is realistic!
   b. Derived internally and externally (including from any organisation dealing with an admission procedure process)
   c. With these standards, credible and relevant measures are needed
   d. These measures need to relate to the operation of policy
3. Evaluation
   a. Measure through staff and student perceptions
   b. Measure through client surveys
   c. Note time taken, accuracy of the process, use of services
   d. Number of complaints etc., language, counselling, advice
   e. Exit poll
4. Feedback will influence
   a. Policy and procedures
   b. Staff acknowledgement
   c. Staff training
   d. Improved process
   e. Student expectations
   f. Marketing

Task 5: The group is to review this summary and determine whether it provides a sufficient basis for developing a benchmarking system for Swinburne regarding student admissions.

Another small group at the workshop highlighted the importance of the Swinburne
Student Centre. The group said: "The Student Centre in particular is (the) first place
for students to go and where everything is needed, but (even) if it operated very well,
could we replicate this if the university grew to three times its size?"

Task 6: In practical terms, how do the operations of the Student Centre fit into the framework outlined above?


Module Six
Specific Swinburne issues
Teaching and learning

This module addresses benchmarking Swinburne University teaching and learning
processes, and it suggests how a set of benchmarks might look for Swinburne
University in this area.

TASKS

In one of the plenary sessions of the August workshop, regarding future directions,
the comment was made that there will be "increased appreciation of quality of
learning", and this will include other dimensions of learning and outcomes outside
traditional forms. "Can look around the world for examples of this happening!"

Task 1: This issue should be teased out further and its implications for Swinburne identified.

The issue of staff at universities in the future was also raised in the plenary session
regarding future directions. The following three points are examples:
•  Credentialism among staff is rising
•  To be an academic will require higher qualifications; there is also a shift in the way these qualifications are measured, e.g. PhDs assessed in ways other than long written dissertations
•  There will be an increased emphasis on teaching, and where does it fit in all of these changes?

Task 2: In terms of teaching and learning, the group might discuss and identify the expectations that will be placed upon staff over the next decade.

Before moving into small group discussions, a number of points were made about
what might be components of benchmarking. The following are two of these:
•  Academic review is a good example of performance against own experience, strategy plan or university expectation
•  Need to be honest, confirmatory, (and) provide useful feedback

Task 3: The group might discuss these points to determine what Swinburne could achieve from a benchmarking exercise.

During the discussions at the August workshop, many comments were made
about benchmarking. Some of these are set out below:
•  Need adequate data and records for evaluations
•  Data that exists is used because it exists
•  There is a trap of measuring what you can, so that the data is fitted to suit the purpose
•  What tools are required for evaluation? We probably (at Swinburne) don't have these at the moment
•  Students are representatives in decision-making processes
•  Imitation (by others) is a good indicator of success
•  Benchmarking implies resources; some expectations are different compared to what can be provided and what we want to provide
•  (There ought to be) external validation
•  Need to be honest, confirmatory, (and) provide useful feedback

Additionally, the small group looking at a benchmarking regime for teaching and
learning produced the following:
•  A hybrid of our own benchmark initiatives and those against other campuses (e.g. Hawthorn), including some second-guessing of national perspectives
•  Strategic initiatives to be documented
•  Staff development
•  Quality of teaching: student satisfaction surveys of one sort or another, focus groups
•  Need to compare like with like, such as a similar university (e.g. UWS); perhaps best if there is discipline-based moderation and comparison
•  An issue is entry level, i.e. entry scores or other measures
•  Retention rates: what are we measuring here? Interpretation needs to be in context
•  Measures of employability: graduate attributes
•  Collaboration and partnerships, e.g. with other universities and industry
•  All through the process there must be consistent measures
•  Appropriate policies and procedures
•  Fitness of courses: constant review, second-guessing what is happening in industry and government
•  What do you put in place: courses, technology, etc.?
•  What is the experience?
•  Course advisory committees: more feedback from industry etc. on our yardsticks, especially in disciplines
•  Evaluating CEQ types of experiences: parallel data, make use of open-ended questions (software assistance available soon?)
•  Subject surveys online?
•  Areas of external accreditation to assist with benchmarking, e.g. psychology, accounting, computing, engineering
•  Flexibility:
   o  how to measure this
   o  approach to T&L plans
   o  objectives
   o  systems and resources
•  Integration: equal input from all groups, e.g. library, ITS, administration, academics, etc.
•  Reward: teaching awards, service awards, student rewards/prizes, etc.

Task 4: Considering all this information, develop a benchmarking pro-forma that can be practical and effective for Swinburne regarding student teaching and learning procedures.


Module Seven
Specific Swinburne University issues
Engaging with the community

This module addresses the specific recommendations relating to benchmarking
engagement with the regional community, and proposes how a set of benchmarks
might look for Swinburne University in this area.

TASKS

[In undertaking these tasks the group may wish to consult with a further number of
the University's external community stakeholder interests.]

Regarding Swinburne's engagement with the regional community, it appeared that
this issue was a significant one, not clearly resolved. In fact the small group looking
at this issue stated: "There is the issue of recruitment of local students, or becoming
a regional university to our regional students." Other points raised were:
•  Provision of education to the outer eastern region
•  Learning and teaching and appropriateness to our region
•  Correct practice: community numbers on CACS

Task 1: A clear position is required on this issue before any benchmarking exercise can be satisfactorily undertaken. The task is to establish the position of the University in regard to its engagement with the regional community.

Task 2: The group might review Module Two above regarding broad issues of benchmarking and their applicability to the regional engagement benchmarking exercise.

During the discussions at the August workshop, many comments were made
about benchmarking. These covered issues dealing with data, stakeholders,
consultation processes, connectivity between community objectives and University
programs, aspirational culture, engagement processes, etc. Some of the specific
comments are set out below:
•  Need adequate data and records for evaluations
•  Data that exists is used because it exists
•  There is a trap of measuring what you can, so that the data is fitted to suit the purpose
•  What tools are required for evaluation? We probably (at Swinburne) don't have these at the moment
•  Students are representatives in decision-making processes
•  Imitation (by others) is a good indicator of success
•  Benchmarking implies resources; some expectations are different compared to what can be provided and what we want to provide
•  (There ought to be) external validation
•  Need to be honest, confirmatory, (and) provide useful feedback

In the small group discussion regarding a benchmarking regime for Swinburne's
engagement relationship with the regional community, it was agreed that Swinburne
needs:
•  to identify the objectives of having a regional component embedded in our learning and teaching program
•  to develop a mechanism to ascertain three things:
   1. the University's view of how the community can contribute to our learning and teaching
   2. what the community's actual needs and wants are
   3. what governance arrangements are in place
•  an internal process to respond to the results of this sort of research
•  to address the problem of not having a real physical presence in the region because of a thoroughfare gap: a real challenge, and a need to work harder at this problem
•  a regional component mainstreamed into the curriculum at the undergraduate and postgraduate levels (this has not gone far, even though the University is trying)
•  integration of regional actions throughout learning and teaching programs, so that these are not just the prerogative of one area
•  to assess the constraints working against increasing our regional action, and to consider the desirability of, and possible strategies for, increasing the numbers of regional students participating
•  to explore strategies for using the bi-sectoral advantage more effectively to enhance regional student opportunities

Task 3: Considering all this information, develop a benchmarking pro-forma that can be practical and effective for Swinburne's engagement relationship with the regional community.
