Quality in Higher Education, Vol. 11, No. 1, April 2005

The Value of Student Engagement for Higher Education Quality Assurance
HAMISH COATES
The University of Melbourne, Australia

ABSTRACT As the principles and practices of quality assurance are further implanted in higher
education, methodological questions about how to understand and manage quality become increasingly important. This paper argues that quality assurance determinations need to take account of how
and to what extent students engage with activities that are likely to lead to productive learning. The
idea of student engagement is introduced. A critical review of current possibilities for determining the
quality of university education in Australia exposes limitations of quality assurance systems that fail
to take account of student engagement. The review provides a basis for suggesting the broad relevance
of student engagement to quality assurance. A sketch is provided of an approach for factoring student
engagement data into quality assurance determinations.

Keywords: student engagement; measuring educational quality; performance indicators; quality assurance; Australian higher education

The Value of Student Engagement Data for Quality Assurance


Interest in the quality of university education has grown considerably over the last decade
or two. Although the specification, assurance and enhancement of quality is often complex
and problematic, strong interest in the phenomenon has been stimulated and maintained by
a range of factors. Students need accurate information about educational quality to help
them choose between different courses of study. Academics and university administrators
need information to help them monitor and improve their courses and programmes. Institutions need information about quality to help them benchmark and market their performance.
Governments and other bodies need information to assist with funding, policy development
and accountability. For these and other reasons, quality assurance has become part of the
fabric of many higher education systems.
As the principles and practices of quality assurance become more and more embedded in
higher education, methodological questions about evaluating quality become increasingly
important. The methods used for quality assurance need to be examined in the light of
ongoing change in the phenomena being measured, new understandings of quality assurance and feedback from the quality assurance system itself. Performance indicators shape
quality considerations in many ways and are an important focus for such analyses. There is,
accordingly, an ongoing need to examine the cogency of such indicators and to ensure that
they are salient, sufficient and sound. While universities routinely collect a considerable and
often increasing amount of data for the purposes of quality assurance, it is, at the same time,
important to keep reviewing the indicators and other measures that are at the heart of such
routines.
This paper argues for the importance of factoring information about student engagement
into determinations of the quality of university education. After introducing the idea of
student engagement, the importance of taking account of such information is established
through a critical review of quality assurance mechanisms in Australian higher education.
This review exposes limitations with quality assurance approaches that, even after 20 years
of development, exclude information about student engagement. The review suggests, in
particular, that there is too much emphasis on information about institutions and teaching
and not enough emphasis on what students are actually doing. Working from this context,
the paper turns to examine the broad value of student engagement data for quality assurance. It then outlines an approach for factoring information about student engagement into
quality assurance activities.
Most discussions of the quality of higher education show little explicit concern about how
students are interacting with their universities and with the practices that are most likely to
generate productive learning. This is apparent in the UK, Australia and the USA (Brennan et
al., 2003; Department of Education, Science and Training, 2004; US News and World Report,
2004). While universities are not legally responsible for how students participate in post-compulsory education, contemporary research indicates that the way students go about
constructing their knowledge is what really influences learning (Johnson et al., 1991; Ramsden, 1992). Reflecting this, a good deal of research has investigated how best to incorporate
student-level process factors into quality evaluations (Ewell & Jones, 1993, 1996; Pascarella,
2001). This research has sought ways to use information about the student experience both
as direct measures of educational processes and as indirect or proxy measures of educational outcomes. Treated in an appropriate way, such information has the potential to
enhance the accuracy and sensitivity of quality assurance determinations.
In recent years research into student engagement has drawn together insights about
which activities tend to generate high quality learning. The concept of student engagement
is based on the constructivist assumption that learning is influenced by how an individual
participates in educationally purposeful activities. Learning is seen as a 'joint proposition'
(Davis & Murrell, 1993, p. 5), however, which also depends on institutions and staff providing students with the conditions, opportunities and expectations to become involved.
However, individual learners are ultimately the agents in discussions of engagement, and
primary focus is placed upon understanding their activities and situations. Thus, while the
idea of student engagement draws together considerations about student learning, institutional environments, learning resources and teachers, it maintains a focus on students and
on their involvement with university study. In essence, therefore, student engagement is
concerned with the extent to which students are engaging in a range of educational activities
that research has shown as likely to lead to high quality learning. Such activities might
include active learning, involvement in enriching educational experiences, seeking guidance
from staff or working collaboratively with other students.
An example illustrates the idea. To begin with, institutions and teachers need to provide
students with the appropriate resources and opportunities to make possible and promote
specific kinds of interactions. This may involve academic staff making themselves available
for consultation outside class time, campus libraries having sufficient space for students to
work collaboratively, curricula and assessment that compel certain standards of performance
or activities and events around campus that prompt students to reflect on the ethics and
practices of their learning. However, students also need to interact with these conditions and
activities in ways that will lead to productive learning. Students need to expend a certain
'quality of effort' (Pace, 1979), to challenge themselves to learn, to interact with new ideas
and practices and to practise the communication, organisational and reflective skills that
should help them learn and will form an important part of what they take from university
education.
A Critical Review of the Current Approach in Australia
Given the basic educational importance of student engagement, it is surprising that it has
been relatively overlooked in discussions about the quality of university education. Quality
assurance mechanisms in Australian higher education are a case in point. In Australia, national
quality determinations are based largely on indicators about institutions and teaching and
on measures of student outcomes. This focus, it is argued below, weakens the capacity of the
system to accurately determine the quality of university education. The main argument in
the analysis to follow is that although Australian higher education operates as a national
system, there is currently no national measure of student engagement. This analysis exposes
the limitations of quality assurance approaches that fail to account for such information.
Quality assurance in Australian higher education is multidimensional, and it is necessary
to mark out the scope of the current analysis. The system involves institutions, states and
territories, the federal government and the Australian Universities Quality Agency
(AUQA). Rather than embrace the full complexity of the quality assurance system, attention
below is focused on investigating the cogency of performance indicators that are, or may be,
used in a recurrent way to measure the quality of university education at a national level.
The review is intended to be strategic, rather than exhaustive, in scope and to focus on the
main dynamics in the system. These indicators are largely quantitative in nature and are
calculated in the main using institutional census data, but also from surveys of graduate
cohorts. Many of these indicators were developed during the early 1990s (Linke, 1991) to
support the allocation of funding, institutional management and the monitoring of teaching
and learning standards. Since then data has been collected and reported on an annual basis
for many of the indicators (Department of Education, Science and Training, 2002) and,
despite its limitations, is used to inform many operational, strategic and personal decisions
about university education.
Limitations With Using Information About Institutions for Determining Quality
While not factored into formal quality assurance mechanisms in Australia, institutional
resources and reputations are frequently read as important informal proxy measures of the
quality of university education. It is presumed that the level of an institution's resources is
related to the quality of the student learning experience. With necessary resources, institutions can furnish high quality spaces for students to learn and can build a supportive educational infrastructure outside the class. It is perceived that institutions' names are built on
traditions of quality of education and that receiving the stamp of an established institutional
brand is an important educational good in itself. Thus although institutional resources and
reputations are not codified in formal quality assurance procedures, they do form a subtext
of much discourse about the value of the educational experience.

Unfortunately, despite their frequent popular emphasis, institutional resources are only
contingently rather than necessarily related to the quality of university education. Although
institutional capital and financial resources are commonly used measures of higher education quality, many have little to do with pedagogy. As Kuh (2002, p. 24) wrote, students can 'be
surrounded by impressive resources and not routinely encounter classes or take part in
activities that engage them in authentic learning'. Human resources, and in particular
academic staff qualifications, may be slightly more related to student learning, although
they are frequently more related to research and consulting experience. Beyond certain
minimum levels of provision it is difficult to see how institutional resources are linked with
learning. Although they offer a prominent and parsimonious means of seemingly addressing questions of educational quality, their lack of causal connection with student learning
means that they may oversimplify or perhaps even misguide analysis.
Institutional reputation is another extremely common, although by itself unsatisfactory,
means of evaluating the quality of university education. Indeed, discussion of institutional
reputation, status and brand name is one of the most popular topics of colloquial discussions
about higher education. As Astin (1985) suggested in an influential critique, evaluations of
institutions made according to reputation are frequently based on beliefs and stereotypes
rather than evidence and can be nuanced by marketing and publicity. They can be insensitive
and not responsive to variations over time or between different areas within institutions.
Importantly, as with resources, they are frequently based on criteria like history, tradition,
research performance or institutional location, which are only indirectly related to student
learning. Despite the popularity of discussions about reputation, they do not say much about
individual students and their learning.
Limitations With Using Information About Teaching for Determining Quality
A great deal of energy in quality assurance is focused on academic staff and their teaching.
This most likely stems from the instructivist assumption that university teaching staff hold
much responsibility for student learning. This, in turn, is based on the assumption that high
quality teaching will lead to high quality learning. As well as being a primary source of
standards and expectations, teachers typically select material given to students, determine
how students work and set formative and summative assessments. Along with the accountability of teaching staff to their institutions, such assumptions imply a line of control that is
seen to support this approach to quality assurance. In this view, with institutions assuring
the quality of pedagogy and teachers assuring the quality of student learning, educational
quality is assured.
The quality of university teaching can be monitored at a national level in a range of ways.
One option may be through review of teaching qualifications held by academic staff.
Although increasingly emphasised as a part of academic work, there are no requirements in
Australia for teaching staff at universities to have formal teaching qualifications. If this
approach were to be pursued it would be necessary to address the further issue of developing
a means of grading different kinds of qualifications. It would be necessary to develop
infrastructure to enable and support staff to develop their qualifications and to provide
incentives that would be sufficient to lure them from potentially more lucrative research
opportunities. It would be necessary to develop a response to the once dominant perception
in higher education that discipline knowledge is sufficient for quality teaching. These are
significant and substantial issues; addressing them will take considerable effort and commitment.

The evaluation of university teaching is most commonly focused on teaching processes.
The Course Experience Questionnaire (CEQ) (Ramsden, 1991) has been used in Australia since
1991 as the primary means of measuring the quality of tertiary teaching. The instrument was
developed to gather students' perceptions of the extent to which they were being exposed to
the kind of teaching that, according to the pedagogical theory underpinning the instrument,
is likely to generate deep learning. In general, CEQ data is accepted as providing an established
measure of the quality of teaching (McKinnon et al., 2000). Although reinforced through years
of national administration in Australia, this is an assumption which is open to question.
A major limitation of the CEQ in generating data for the purposes of determining the
quality of university education is its exclusive focus on teaching. As noted above, contemporary constructivist theories suggest that learning rather than teaching is what really
matters in education. This perspective suggests that the quality of teaching is dependent
upon, rather than independent of, the quality of learning that it engenders; that is, good
teaching is taken to be that which leads to good learning. From this perspective the most
significant index of the quality of teaching is whether it has, by some means, generated high
quality learning. Given this, a measure which focuses on teaching alone would provide a
significant, although insufficient, index of the quality of education.
Another major limitation of the CEQ is its core focus on what teachers do in formal
instructional settings. Recent research has challenged the notion that what students do
inside the classroom is the only or most significant part of their educational experience (Kuh
et al., 1994; McInnis et al., 2001; McInnis, 2002). Such research has emphasised the direct
educational benefits of beyond-class experiences, the value of considering a more holistic
understanding of the student experience, the value that beyond-class experiences add to
formal learning activities and the importance of understanding emerging dynamics of
student behaviour. This research has challenged the validity of the distinction between in-class and out-of-class. Given an increasingly large, flexible and open higher education
environment with ever diversifying types of students, understanding how students spend
their time outside class is being seen as increasingly important. With only information about
how students spend their time in-class, institutions are limited in their capacity to explicitly
manage the student experience and to leverage out-of-class time to enhance learning.
Limitations of Current Student Level Indicators for Quality Assurance
Although quality assurance procedures in Australian higher education include a number of
student level measures, none consider the processes by which students engage in their
study. It is argued here that this deficit limits the explanatory power of the current system,
despite the strengths of some of the indicators.
A much used measure of student learning in Australian higher education is the student
progress rate. This rate reflects the proportion of subjects passed of all those a student has
attempted. Student progress data provides a measure of the extent to which students have
passed the subjects in which they were enrolled and the extent to which they are progressing
through the system. However, student progress data is limited as a measure of academic
performance. Student progress is relative not only to students, but also to courses. Student
progress rates that indicate a high level of student movement through the system may also
be indicative of low academic standards or demands. A further limitation of the current
student progress measure is of greater concern. While students receive summative assessment via numerical marks and letter grades, student progress is only recorded as a binary
measure. In this context students are measured as successful so long as they have only barely
passed a subject. In turn, the current measure of student progress provides little incentive for
institutions to enhance the quality of students' academic performance. Where the student
progress rate is used as a measure of academic success it needs to do more than measure and
encourage minimal academic standards.
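
To make the binary character of this measure concrete, the rate described above can be written as a simple ratio (a sketch using hypothetical numbers, not official departmental notation):

\[
\text{progress rate} = \frac{\text{subjects passed}}{\text{subjects attempted}}
\]

Under this definition a student who barely passes six of eight attempted subjects contributes a rate of 6/8 = 0.75, exactly as if each pass had been a high distinction; the marks and grades themselves never enter the calculation.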
Student retention is calculated as a national measure of student persistence. The retention
index provides a basic and necessary measure of the quality and productivity of university
education. Although the measure reflects the extent to which students have fulfilled the
requirements of their course in the year preceding measurement, it is not without its limitations. In an increasingly flexible or distributed higher education environment, which aims to
attract students from diverse educational, social and cultural backgrounds, it is not entirely
clear how persistence should be most appropriately measured. Should it take into account
the time taken to complete a course? What happens if students defer for long periods of time?
How should retention indices count students who articulate into more advanced courses or
transfer between courses? Should retention be adjusted for demographic and contextual
variables? How can the lagging retention indicator be best interpreted in contexts and to
cohorts which may have changed markedly over intervening years? These issues aside,
student retention is no doubt a critical, although blunt, indicator of the extent to which
students are involved in higher education.
While student demand provides a measure of student entry into higher education, graduate destinations have been a commonly used measure of student outcomes. The Graduate
Destination Survey (GDS) (Graduate Careers Council of Australia, 2002) has been administered as an exit survey since 1972 to obtain data on graduate starting salaries, the proportion
of graduates in full-time study and the proportion of graduates in full-time employment.
There is no doubt that graduate destinations represent one of the more important outcomes
of higher education, indicating the extent to which university education has equipped
students with qualifications, skills and experiences required for employment. Despite this,
there are a number of problems with using graduate destinations as a measure of the quality
of university education. Data on destinations is a lagging indicator which is difficult to
link with current practices and programmes. More importantly, however, employment
outcomes can be influenced by a range of non-educational factors, such as institutional
reputation, personal networks and labour market conditions. Employment outcomes data
provides a measure only of the utilitarian function of university study, which may be only
contingently linked with the broader, less vocationally specific educational roles of tertiary
study.
A recent effort to measure the outcomes of university education in Australia is the
Graduate Skills Assessment (GSA) (Australian Council for Educational Research, 2001). The
GSA is a test of what are often referred to as generic skills. The test provides measurement
of four domains: critical thinking, problem solving, interpersonal understanding and
written communication. The test has been designed to measure these generic skills in a
generic way that can be replicated at the start and end of a student's course. This, in theory,
would make possible calculation of the extent to which an institution has added value to a
student's cognitive skills in a way that is generalisable across fields of study. However, if
system-wide uptake of the test provides any index of its perceived value and utility it seems
to have been rated poorly by the higher education community.
Given current levels of emphasis on the GSA in policy circles, if not within institutions
themselves, it is worth noting possible reasons for its low uptake. There may be a perception
that the test provides an insufficient measure of what students are expected to have gained
from their university study. From one angle it might be seen that the GSA is basically a test
of general intelligence that is confounded by individual maturation or innate ability. From
another perspective the GSA may be seen as measuring constructs that are seen as
secondary, derived or even ancillary to what a student has been studying. Another difficulty
with the GSA may rest with interpreting its context-neutral results in a way that informs the
improvement of policies and practices. While the data may provide summative information
for employers, it may be of little value to institutional managers or teachers. A range of
measurement issues raises further concerns. These stem from questions about whether
the phenomena being measured are transient or enduring and, if enduring, whether they
can be sufficiently linked with university study and context. More probing concerns may
relate to whether the targeted phenomena can be measured at all. Some may be concerned
that to the extent that there is any divergence between disciplines and the skills being tested,
this could result in a drift in programmes or teaching towards the generic skills being tested.
However, the most substantial concern may rest with the qualification of generic skills
themselves. While perhaps valid at a heuristic level, there may be scepticism that the nature
or measurement of 'good communication' in the humanities may not be the same as 'good
communication' in the medical sciences. Such a position may be underpinned by doubts
that there is a set of skills that can be considered as generic or ubiquitous outcomes of
university education.
Each of the student level indicators discussed above focuses on outcomes which in higher
education can be extremely difficult to specify, measure and interpret. In general, the selection of outcomes is a complex activity that can be highly value laden. Even once specified,
outcomes can be very difficult to define both conceptually and operationally. Particularly
important educational outcomes may be difficult to measure because of their diffuse, localised, uncertain, transient or slowly evolving nature. Measurement considerations may be
so critical in the analysis of outcomes that they may be the decisive factors underpinning
the selection and definition of indicators. From a quality assurance perspective, outcomes
can also be difficult to interpret. While information about outcomes may provide a snapshot of what is happening at a certain point in time, the information may not be sufficient to
direct quality management and improvement activities. The relationships between
outcome information and student behaviours or institutional processes may be multidimensional or ambiguous. In changing organisational environments the lagging information
from outcome indicators may lack relevance by the time it is available for interpretation. It
is also important that outcome information is read in context. Unless outcomes are adjusted
by input and environmental factors, attempts to construct measures of the extent to
which an educational process has added value will be confounded. This, however, raises
contentious questions about how to adjust performance measures and about determining
psychometrically appropriate input measures.
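
To illustrate what such an adjustment might involve, one standard formulation (offered here only as a sketch, not as a method advanced in this paper) treats value added as the residual from a regression of an outcome on input and environmental measures:

\[
y_i = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_i + \varepsilon_i,
\qquad
\widehat{\mathrm{VA}}_i = y_i - \hat{y}_i
\]

where \(y_i\) is an observed outcome for student or institution \(i\) and \(\mathbf{x}_i\) collects input measures such as entry scores or demographic factors. The contentious questions noted above reappear as choices about which measures belong in \(\mathbf{x}_i\) and whether they are psychometrically sound.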
The Value of Student Engagement Data for Quality Assurance
Given a situation such as that in Australia, there are a number of reasons why student
engagement information would be inherently valuable for quality assurance. Most directly,
student engagement data has the potential to provide a highly sensitive index of the extent
to which students are going about the kinds of things which are likely to generate high
quality learning outcomes. The right kinds of engagement are certainly necessary for
students to learn, even if an institution is reputable, well resourced, has impressive teachers,
teaches the right content and has well-regulated governance and management systems.
Furthermore, even without these institutional or pedagogical resources or practices the right
kinds of participation may well be sufficient to bring about productive learning. With the
assumptions of constructivism in mind, therefore, student engagement comes close to
providing necessary and sufficient information about student learning.
Data on student engagement has the advantage of providing information on what
students are actually doing. While this may appear self-evident, it has a broader significance
for the management of institutions, students and academic programmes. Rather than work
from assumptions or partial anecdotal reports about student activities, institutions can make
decisions based on more objective information. Information about student activities would
provide institutions with valuable information for marketing and recruitment and help
them become more responsive to student learning needs. Only with accurate and reliable
information on what students are actually doing can institutions move beyond taking
student activities for granted.
Student engagement data provides a means for determining the productivity of university
education. Johnstone (1993) argued that the most significant and sustainable productivity
advances in education will result from enhancing learning outputs rather than through further
manipulation of structural factors or cost side productivity. Johnstone (1993, p. 4) wrote that:
We need to focus more on the student and his or her training, and to be a little less
preoccupied with, and critical of, the faculty (and all the rest of the administrative,
professional, and clerical support staff of our colleges and universities) in our
quest for more productivity.
To the extent that the productivity of education is centred around student learning, it is
obviously important that students and institutions are doing things that are likely to
maximise individual academic performance. As noted, timely data on student engagement
could be used diagnostically to fine tune the management of student learning and also to
provide information for making summative judgements about such productivity.
Ultimately, the main significance of student engagement data may be that it focuses
discussion about the quality of university education on students and their learning. Focusing on how students are learning emphasises an essential aspect of education which can
often seem to be overshadowed by discussion of pragmatism, resources and institutional
reputations. It seems unlikely that, at least in the near future, engagement will overshadow
considerations of institutional resources, reputations, selectivity or student outcomes in
discussions of the quality of education. It can and should, however, claim to provide an
informed and relevant perspective that counterbalances these views.
How Might Student Engagement Data be Factored into Quality Assurance
Determinations?
If the above analysis is accepted, then there is a clear place for taking account of data on
student engagement in determinations of the quality of university education. The above
analysis contends that student engagement data can provide key insights into the quality of
education, which should be used to balance determinations of quality that focus on institutional and teaching factors and on student level outputs. If the importance of student level
process factors is taken into account, then there remains a need to develop an approach for
measuring, interpreting and reporting how students are engaging with their university
study. This section exposes substantive and operational issues that might be important to
consider in determinations of how this is done.


Substantive Considerations
If student engagement data is to be factored into quality assurance determinations, it is
necessary to mark out which phenomena are important to consider. It is essential that
selected activities and conditions be derived from research into university student learning
and development, that they are sensitive to relevant theoretical advances and that they are
relevant to practical contexts.
In the last few years much work has gone into drawing together findings from decades of
research into student learning to develop an integrated understanding of student engagement. While recent work has been conducted in Australia (Coates, 2005), the major focus of
this activity has been the USA National Survey of Student Engagement (NSSE) (2003). Beginning in 1999, the survey has been conducted at over 700 institutions in the USA. The NSSE
groups the good practices which it takes as indicative of engagement into five benchmarks:
level of academic challenge, active and collaborative learning, student–faculty interaction,
enriching educational experiences and supportive campus environment. Items in these
benchmarks measure phenomena such as whether students ask questions or contribute to
discussions, prepare drafts of their work before submission, interact with students from
different backgrounds, study outside class or do volunteer work. The items address whether
staff and institutions have been available, helpful and sympathetic, emphasised study and
academic work, provided support to help students thrive socially or assessed students in
ways that encouraged them to do their best work. The work undertaken as part of the NSSE
has played a critical role in identifying good practices which provide substance for composite
indicators and in moving the topic of engagement into the realms of public and institutional
policy and practice.
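
The arithmetic behind such composite benchmarks can be sketched briefly. The following Python fragment is an illustration only: the item names, the four-point response scale and the equal weighting of items are assumptions made for this example, not the NSSE's published scoring rules.

def benchmark_score(responses, scale_max=4):
    # Rescale each 1..scale_max ordinal response to a 0-100 metric,
    # then average the rescaled items into a single benchmark score.
    rescaled = [100 * (r - 1) / (scale_max - 1) for r in responses.values()]
    return sum(rescaled) / len(rescaled)

# Hypothetical student: four 'active and collaborative learning' items
# answered on a 1 (never) to 4 (very often) scale.
student = {'asked_questions': 3, 'made_class_presentation': 2,
           'worked_with_peers': 4, 'discussed_ideas_outside_class': 3}
print(round(benchmark_score(student), 1))  # 66.7

Averaging rescaled items in this way keeps each benchmark on a common 0-100 metric, which is what allows scores to be compared across institutions and cohorts.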
Work on student engagement represents an ambitious attempt to draw together quintessential qualities of a university student's educational experience. To be successful, student
engagement needs to accord with contemporary knowledge about the neuroscience and
psychology of learning and development, to sustain basic pedagogical principles and effective practices, to be culturally relevant and to relate to changing institutional dynamics
brought about by the influence of new technologies, increasing internationalisation and new
funding arrangements. On a practical level, workable indicators of engagement need to be
consistent with pedagogical, institutional and systemic contexts. They need to be sensitive
and relevant to local contexts yet also need to be generic enough to enable meaningful
comparison across institutions, disciplines and programmes. Striking a balance between
these alternative demands challenges the development of any indicator, yet is essential to
support appropriate forms of monitoring and improvement. However, while the nuances of
learning may vary between instructional and institutional contexts, it is likely that there are
basic and essential processes that are common to many which could be used to constitute an
indicator of student engagement.

Methodological Considerations
In addition to substantive considerations, factoring student engagement information into
quality assurance determinations is dependent on identifying a sound and efficient indicator
methodology. It needs to be possible and feasible to identify cogent indices of student engagement, to ensure that such indices accord with sound principles for performance indicators
and to obtain relevant data. Although such issues would require much elaboration in practice,
it is important to investigate whether they pose any intractable difficulties from the outset.

On a basic level, it is important to note that the findings of higher education research over
the last quarter century have made it clear that it is possible and feasible to identify, measure
and interpret salient process factors. A great deal of work has gone into identifying institutional and individual activities and conditions that are highly related to a range of learning
and development outcomes (Astin, 1985, 1993; Pace, 1988; Kuh et al., 1991, 1997; Pascarella &
Terenzini, 1991; Ewell & Jones, 1993, 1996; Tinto, 1993; NSSE, 2003). While the work has been
theoretically driven, empirical validation and practical relevance have been seen as essential
(Pace, 1988; Pascarella, 2001). Processes can be much easier to measure than outcomes. In
contrast to outcomes, process factors are typically proximal, tangible and ready to hand. On
a practical level, the measurement of processes often involves individuals who are currently
involved in a system and is thereby more feasible than tracking down former students.
Student-level process information has the benefit of being relatively easy to interpret, as the
data points directly to specific coincident conditions, activities, procedures and policies. As
the relevance of the phenomena being measured has been established before the acquisition
of data, it is not necessary to make post hoc inferences about the sources of particular results.
Rather, information can be applied directly to guide quality improvement and assurance
activities. Information on the nature of student involvement in specific activities can be used
immediately to pinpoint productive areas and those in need of improvement.
It seems likely, therefore, that student engagement indices could be constructed in ways
that were sound, reliable and valid. In reviewing the possibility of selecting indicators to
allow cross-national monitoring of higher education, James (2003) listed a series of optimal
indicator properties. Setting aside implementation difficulties, a brief review suggests that
there is no reason why indices of student engagement could not meet these standards. As
measures of actual student activity, measures of engagement could at the same time be
transparent and have face validity while being sensitive to changes in practice. Basing the
indices around key dimensions of the student experience would lead to a manageable
number of policy- and practice-relevant indicators with sound measurement properties. As
has become common practice with measures of teaching quality, longitudinal, normative or
standards-based criteria would need to be established for the indicator to function as a
measure of performance. A secure, efficient and timely analytical approach would be
required, as would an interpretive framework to set standards for reporting and to guide
improvement activities. This framework would need to be more than a receptacle for
the delivery of indicator data. It would need to inform stakeholders about the relevance of
the indices, capture and embrace their interests, provide a stimulus for action and contain a
mechanism for evaluating the monitoring process itself.
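
As a concrete check on the sound measurement properties mentioned above, the internal consistency of a candidate engagement index could be estimated with a conventional statistic such as Cronbach's alpha. The sketch below uses invented scores purely for illustration.

def cronbach_alpha(items):
    # items: one list of scores per survey item, all of equal length.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k, n = len(items), len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical: five students answering three related engagement items.
items = [[3, 4, 2, 4, 3], [2, 4, 2, 3, 3], [3, 3, 1, 4, 2]]
print(round(cronbach_alpha(items), 2))  # 0.86, adequate for a short scale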
If it is feasible to identify process factors that can be formed into cogent indicators, then
practicalities of operationalisation need to be considered. Experience with the CEQ and GDS
in Australia supports the administration of questionnaires to students. A questionnaire
approach has the advantage of avoiding more resource-intensive forms of observation, of
giving students a voice in the quality assurance process and of obtaining data that are
commonly perceived to be objective in nature. In the service of measuring student engagement, questionnaires have the added advantage of gathering data directly from students as
primary units of analysis, of accessing information which may only be known to individual
students and of providing students with an opportunity to reflect on and evaluate their
academic experiences while completing the form. While building on past experience, there
may be ways of streamlining and revising the survey methodology. Administering the questionnaires to a sample of students may generate more efficient and precise estimates than an
inevitably incomplete set of responses from an entire student population. Unless the trends
and levels of student engagement changed annually, data could be collected at less regular
intervals.
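
The efficiency of sampling can be made concrete with a routine margin-of-error calculation; the figures below are hypothetical and serve only to indicate the precision a modest random sample can achieve.

import math

def margin_of_error(sd, n, z=1.96):
    # Half-width of an approximate 95% confidence interval for a mean.
    return z * sd / math.sqrt(n)

# Hypothetical: engagement scores on a 0-100 scale with a standard
# deviation of about 15. A random sample of 1,000 students pins the
# cohort mean to within roughly one point, whereas a census with heavy
# nonresponse can be both costlier and biased by who chooses to respond.
print(round(margin_of_error(15, 1000), 2))  # 0.93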


Concluding Thoughts
This paper has argued for the importance of factoring considerations about student engagement into determinations of the quality of university education. An analysis of strategic
indicators that underpin quality assurance activities in Australia suggested that determinations about the quality of university education are often made without information about
whether students are engaging with the kinds of practices that are likely to generate
productive learning and about whether institutions are providing the kinds of conditions
that, based on many years of education research, seem likely to stimulate such engagement.
Despite its value, information about what students are actually doing at university is largely
ignored in discussions about the quality of university education. To improve the evaluation
that underpins quality assurance in higher education, there is a need to remedy this omission, to develop means of gathering data that can be used to evaluate what's really going on
in the education of university students, to evaluate what students are actually contributing
towards productive learning. While research on identifying and measuring good practices
of university education has been underway for some time, there is a need to begin factoring
it into quality assurance determinations.
This paper has marked out the broad idea of introducing student engagement considerations into quality assurance. There is much work to be done if student engagement is to be
made a part of quality assurance determinations in higher education. One challenge would
involve reconciling the constructivist perspective underpinning the idea of student
engagement with what is ultimately an institutional responsibility for managing ongoing
improvement. There is a tension here which stems from the fact that although students are
seen as making a direct contribution to the educational process, it is one that is largely
beyond institutional control. Accordingly, institutions would need to develop approaches to
manage and enhance student engagement without having ultimate control over students. If
such approaches were to be developed, they would need to be sufficiently cogent to support
the weighty analyses and duties that tend to be focused on large-scale performance indicators. However, given the nature of the link between students and institutions, could such
approaches ever be sufficiently reliable and valid to justify the distribution of rewards and
incentives? This and similar questions raise complex substantive, operational and political
issues which travel to the heart of ideas about university education. Rather than ignore such
issues, however, this paper has argued that it is time to see that they are addressed.

References
ASTIN, A.W., 1985, Achieving Educational Excellence: A critical analysis of priorities and practices in higher education
(San Francisco, Jossey-Bass).
ASTIN, A.W., 1993, What Matters in College: Four critical years revisited (San Francisco, Jossey-Bass).
AUSTRALIAN COUNCIL FOR EDUCATIONAL RESEARCH (ACER), 2001, Graduate Skills Assessment (Canberra,
Department of Education, Training and Youth Affairs).
BRENNAN, J., BRIGHTON, R., MOON, N., RICHARDSON, J., RINDL, J. & WILLIAMS, R., 2003, Collecting and Using
Student Feedback on Quality and Standards of Learning and Teaching in HE: A report to Higher Education Funding
Council for England (Bristol, Higher Education Funding Council for England).
COATES, H.C., 2005, Measuring and modelling student engagement in online and campus-based higher
education, PhD dissertation, University of Melbourne.
DAVIS, T.M. & MURRELL, P.H., 1993, Turning Teaching into Learning: The role of student responsibility in the collegiate
experience (Washington, ERIC Clearinghouse on Higher Education).
DEPARTMENT OF EDUCATION, SCIENCE AND TRAINING (DEST), 2002, Students 2002: Selected higher education
statistics (Canberra, DEST).
DEPARTMENT OF EDUCATION, SCIENCE AND TRAINING (DEST), 2004, Learning And Teaching Performance Fund:
Issues paper (Canberra, DEST).
EWELL, P.T. & JONES, D.P., 1993, Actions matter: the case for indirect measures in assessing higher education's
progress on the national education goals, Journal of General Education, 42(2), pp. 123–48.
EWELL, P.T. & JONES, D.P., 1996, Indicators of Good Practice in Undergraduate Education: A handbook for development
and implementation (Boulder, Colorado, National Centre for Higher Education Management Systems).
GRADUATE CAREERS COUNCIL OF AUSTRALIA (GCCA), 2002, Graduate Destination Survey (Melbourne, GCCA).
JAMES, R., 2003, Suggestions relative to the selection of strategic system-level indicators to review the
development of higher education, in YONEZAWA, A. & KAISER, F. (Eds.) System-level and Strategic
Indicators for Monitoring Higher Education in the Twenty-First Century (Bucharest, UNESCO-CEPES).
JOHNSON, D.W., JOHNSON, R.T. & SMITH, K.A., 1991, Cooperative Learning: Increasing college faculty instructional
productivity (Washington, George Washington University).
JOHNSTONE, D.B., 1993, Enhancing the productivity of learning, AAHE Bulletin, 46(4), pp. 4–8.
KUH, G.D., 2002, What we're learning about student engagement from NSSE, Change, 35(2), pp. 24–31.
KUH, G.D., SCHUH, J.H. & WHITT, E.J., 1991, Involving Colleges (San Francisco, Jossey-Bass).
KUH, G.D., DOUGLAS, K.B., LUND, J.P. & RAMIN-GYURNEK, J., 1994, Student Learning Outside the Classroom:
Transcending artificial boundaries (Washington, ERIC Clearinghouse on Higher Education).
KUH, G.D., PACE, C.R. & VESPER, N., 1997, The development of process indicators to estimate student gains
associated with good practices in undergraduate education, Research in Higher Education, 38(4),
pp. 435–54.
LINKE, R.D., 1991, Report of the Research Group on Performance Indicators in Higher Education (Canberra,
Department of Education, Training and Youth Affairs).
MCINNIS, C., 2002, Signs of disengagement? Responding to the changing work and study patterns of full
time undergraduates in Australian universities, in ENDERS, J. & FULTON, O. (Eds.) Higher Education in a
Globalising World: A festschrift in honour of Ulrich Teichler (Dordrecht, Kluwer Academic).
MCINNIS, C., GRIFFIN, P., JAMES, R. & COATES, H., 2001, Development of the Course Experience Questionnaire
(Canberra, Department of Education, Training and Youth Affairs).
MCKINNON, K.R., WALKER, S.H. & DAVIS, D., 2000, Benchmarking: A manual for Australian universities
(Canberra, Department of Education, Training and Youth Affairs).
NATIONAL SURVEY OF STUDENT ENGAGEMENT (NSSE), 2003, Converting Data into Action: Expanding the boundaries
of institutional improvement: National Survey of Student Engagement 2003 annual report (Bloomington, Indiana
University).
PACE, C.R., 1979, Measuring Outcomes of College: Fifty years of findings and recommendations for the future
(San Francisco, Jossey-Bass).
PACE, C.R., 1988, Measuring the Quality of College Student Experiences: An account of the development and use of the
College Student Experiences Questionnaire (Los Angeles, University of California Higher Education
Research Institute).
PASCARELLA, E.T., 2001, Identifying excellence in undergraduate education: are we even close?, Change,
33(3), pp. 18–23.
PASCARELLA, E.T. & TERENZINI, P.T., 1991, How College Affects Students: Findings and insights from twenty years
of research (San Francisco, Jossey-Bass).
RAMSDEN, P., 1991, A performance indicator of teaching quality in higher education: the Course Experience
Questionnaire, Studies in Higher Education, 16(2), pp. 129–50.
RAMSDEN, P., 1992, Learning to Teach in Higher Education (London, Routledge).
TINTO, V., 1993, Leaving College: Rethinking the causes and cures of student attrition (Chicago, University of
Chicago Press).
US NEWS AND WORLD REPORT (USN), 2004, America's Best Graduate Schools (Washington, US News and World
Report Inc.).
