Downloaded from https://academic.oup.com/eltj/article-abstract/73/2/197/5382285 by Indira Gandhi Memorial Library user on 17 May 2019
William S. Pearson
the need for fair, objective, and internationally recognized English
language credentials to predict an individual's suitability for academia
and migration, such a testing system need not be undemocratic or
inhumane towards its test-takers. As such, the article will outline a range of
recommendations for the co-owners to instil democratic and humane
principles in the test.
The rise of IELTS to a monolithic global gatekeeper

From ELTS to IELTS

To understand the language-testing edifice IELTS has become, it
is necessary to look back to its predecessor, ELTS, and examine the
problems it attempted to solve and those it created. Back in 1980, when
structuralist philosophies of language and language assessment were still
hegemonic (manifest in the English Proficiency Test Battery), the British
Council and the University of Cambridge Local Examinations Syndicate
(UCLES) introduced ELTS. A direct, communicative language test, ELTS
was designed to serve as a selection tool for foreign students seeking
to undertake tertiary programmes in UK higher-education institutions
(HEIs) (Hamp-Lyons 2000). The co-owners attempted to inject
authenticity, relevancy, and real-life communication into the test (Taylor
and Weir 2012), pioneering the use of the now-ubiquitous individual
spoken interview and product-orientated writing test. These were assessed
by trained raters using criterion-referenced descriptors, innovative for
the time. An additional key feature of ELTS was its emphasis on ESP,
embodied in five subject-specific (and one ‘general academic’) knowledge
fields, utilized in the Study Skills, Speaking, and Writing subtests. Yet,
this ‘tailoring’ and the test’s length created noticeable logistical difficulties
in administration (Davies 2007), posed challenges for the sustainable
design of content and items (Taylor and Weir 2012), and ultimately
increased costs. As a result, the co-owners opted to simplify the design of
ELTS, eventually abandoning the ESP element in its successor, IELTS,
launched in 1989.
The redesign of ELTS, along with the introduction of a ‘General
Training’ variant for non-academic candidates, turned out to be a shrewd
move for the British Council and UCLES. Social trends following the
end of the Cold War, namely rises in global migration, the increased
willingness of students to internationalize their higher-education
experience (Ramia, Marginson, and Sawir 2013), and the entrenchment
of English as the lingua franca of academia (Dearden 2014; Wächter
and Maiworm 2014) magnified the need for international, standardized,
and trustworthy measures of English proficiency. Equally astute was
enlisting IDP in the ownership, design, and management of IELTS,
helping internationalize the test beyond its parochial British scope.
Annual test-taker numbers rose from 500,000 in 2003 to 3 million
in 2016, which followed a tenfold increase over the preceding decade
(Davies 2007). Information from IELTS itself indicates the test is
The problems IELTS solves and creates

IELTS exists to solve a range of second-language evidence and selection
issues present in the world today. From the depersonalized concepts of
efficiency, manageability, and scale, it can be argued that IELTS performs
its role successfully. It provides a test-user and NNES (non-native
English-speaking) test-taker with simplified, easy-to-understand, criterion-
referenced, and time-bound evidence of that person's English proficiency.
IELTS band scores aid in the efficient processing of millions of higher-
education, migration, and employment applications globally, while
evidence from some studies supports the view that IELTS adequately
predicts test-takers’ abilities to function in an academic environment
(Ingram and Bayliss 2007; Thorpe et al. 2017). Mass language testing
will always pose challenges for the testing organization. The co-owners of
IELTS must therefore be credited for managing what is a complex, large-
scale testing operation posing logistical, resource, and security challenges.
Nevertheless, the growth of IELTS into a secure, self-sustaining, and
financially successful testing system undoubtedly has profound global
political, social, economic, and ethical impacts, which grow in magnitude
with the year-on-year increases in test-taker numbers. IELTS has
permeated spheres for which it was not originally designed. It is used
by employers to assess prospective employees’ ability to function in the
workplace, despite none of its tasks closely corresponding with what an
employee would be expected to do in many jobs. Its band scores are used
as benchmarks for migration even though there is limited research into
the suitability of IELTS to do so. Finally, it increasingly plays a role in
language-selection processes for English-medium tertiary programmes
offered by transnational education partnerships in NNES countries
themselves. There appears to be no independent institution or interest
group that can act as a check on the continued growth and expansion of
IELTS, with consequences for millions of NNES individuals globally.
to recruit linguistically unprepared and underprepared individuals onto
expensive degree programmes and pre-sessional courses.
Consenting victims? The position of IELTS test-takers

Test-takers themselves perform the unwitting role of consenting victims
to the powerful self-perpetuating edifice that is IELTS. As with other
tests, public and institutional trust in IELTS represents an unwritten
social contract between the tester (the dominator) and the test-taker (the
dominated) (Shohamy 2001). Blocks on test-taker participation in the
design and management of IELTS perpetuate their dominated status.
The test’s co-owners, as three large organizations, possess considerably
more power than individual test-takers. Lacking both proficiency in the
language and knowledge of the technical process of language testing,
test-takers are assumed not to possess authoritative and credible voices
on the test. As such, they are highly unlikely to have had any input or
participation in the item design of large-scale tests (Hamp-Lyons 2000),
particularly IELTS. Similarly, candidates are spread disparately around
the world and possess different aims and abilities vis-à-vis IELTS. They
therefore lack the logistical means to assemble and engage in collective
bargaining with the co-owners. Finally, once a test-taker has obtained
their required band score, they are inclined to place trust in the validity of
IELTS, since they occupy the privileged vantage point of having achieved
their required linguistic goals. Thus, this asymmetrical power relationship
is indefinitely self-sustaining, unless change is brought to bear externally.
as IELTS maintains a confidential range of assessment criteria. This may
explain candidates' perceived need for thorough preparation in approaching
IELTS Writing tasks. Not coincidentally, the co-owners act as purveyors of
commercial IELTS preparation content, in the form of printed materials and
teacher-led courses, exacting further financial resources from test-takers.
It should be acknowledged that some free online preparation materials are
currently provided by the co-owners on the IELTS website, including some
components of The Road to IELTS. However, full access costs approximately
£38 and is free only for candidates booking their test or a preparation course
with the British Council.
to interpret the results in this intended fashion (Coleman et al. 2003;
Hyatt 2013). The simplicity and efficiency with which such test scores can
be processed strengthens the perception that IELTS scores are ‘an easy
short cut … concerning admissions to English-medium HE institutions’
(Hall 2009: 327). Thus, test scores are used as ‘hard and fast’ indicators,
weakening claims that ‘you can’t fail the IELTS test’. Although the
partners provide information for test-users to help them interpret IELTS
scores, the testing system is still orientated towards the unquestioned
acceptance of the predictive power of test scores.
Test feedback
With the provision of overall band score results for the four skills only,
IELTS test-takers receive little feedback on their test performance. There
are no band score results provided for performance within the four
assessment criteria of Speaking and Writing. Overall scores may be all the
information a candidate who has achieved their required band score
wishes to know. Nevertheless, candidates who did not achieve what they
needed in one or more subtests are unlikely to gain insights from these
numbers. Test-takers receive no qualitative feedback on their speaking
or writing. It is not indicated to them what might have gone wrong in a
specific subtest, meaning very little about task performance can be learnt,
other than through candidate self-reflections. The ramification is that test-
takers are likely to be unaware whether a single aspect of their writing
contributed to underperformance or whether there were wider inherent
issues. One example is the lack of an overview in Academic Writing Task 1,
which prevents a candidate from achieving above band 5.0 in Task Achievement.
minimum band score, such as band 6.0, in all four components of the
test. Nevertheless, these must be achieved in a single sitting of the test,
and scores cannot be combined across tests. Thus, if a test-taker achieves
band 6.0 in three subtests but a 5.5 in the other, the three satisfactory
scores are rendered invalid, and the whole test must be retaken, at
further cost. This is understandably frustrating for candidates.
Although it can be argued that test-takers should be able to consistently
demonstrate performance across multiple test attempts, aspects of
IELTS’s design, particularly the topics in Speaking and Writing and the
amount of information provided in Academic Writing Task 1, can alter the
difficulty level, while affective candidate responses (a lack of confidence,
anxiety, etc.) can impact on test performance.
There are few constraints on retaking the IELTS test. Candidates need
only wait 13 days for their results (the 90-day cool-off period between
attempts having been dropped in 2006), a change likely driven by
economic imperatives. Additionally,
test-takers can resit the test as many times as they like. As such, Hamid
(2016) speculates that there may be more test-repeaters on a given day
than first-timers. Considering the current policy and the scoring system,
there is a danger that IELTS may become perceived by test-takers as an
income generator for the three partners. This perspective is accentuated
by the lack of feedback provided to candidates to help them identify what
went wrong and rectify the problem(s).
A democratic, humanistic, and formative vision for IELTS

Democratization

Changes are required to the administration and design of IELTS according
to three overarching principles: democratization, humanization, and the
institutionalization of formative feedback. Democratization concerns
protecting and empowering test-takers by enabling their meaningful
participation and involvement in the assessment process (Shohamy
2001). This begins with periodic and large-scale international research
sponsored by the co-owners into the consequential impacts of IELTS on
the lives of test-takers, published for a lay audience. Democratization also
encompasses systems to hold the IELTS co-owners accountable beyond
the technical assessment concepts of content and construct validity. An
independent body needs to be created to provide regulatory oversight. It
could, for example, ensure test fees are set appropriately, gather feedback
from test-takers, provide collective bargaining on their behalf, and even
assume the administrative role of processing EoR checks to ensure greater
transparency. Such an organization could also require HEIs to report on
the success of NNES students relative to IELTS entry scores, to safeguard
test-takers from being exploited by financially minded HEIs offering easy
access to tertiary programmes.
There are also smaller-scale practical solutions to democratizing IELTS.
More high-quality, free-to-access test preparation content is required to
ensure candidates with limited economic means are not disadvantaged.
In terms of the test design itself, despite moves away from RP-dominant
voices present in ELTS, IELTS should democratize the Anglophone voices
in the Listening test, incorporating non-native-speaker voices. Such a move
would confer greater validity on varieties beyond inner-circle English,
reflecting the truly international nature of English today.
Humanization

Humanizing language testing involves making decisions about a test that
are considerate of its impacts on individuals as human beings. In practice,
this concerns instilling human qualities into the testing system, reducing
its perceived severity, being aware of affective responses and trying to
minimize them, attributing genuine concern for people’s well-being
during the test, and making the test meaningful and useful for candidates
(Hamid and Hoang 2018). One practical humanizing solution would be
to allow previously achieved subtest scores to remain valid and to permit
candidates to combine scores obtained across different sittings of the
test, subject to appropriate caveats, such as the time between tests and
the gap between scores. Alternatively, candidates could be offered the opportunity
to sit individual subtests rather than all four skills simultaneously.
A further humanizing measure could be for IELTS to provide a guarantee
to candidates that the Speaking test can be undertaken on the same day
as the three other subtests. This could help reduce the costs and pressure
for individuals travelling far to the test venue. Finally, incentives and
resources need to be built into HEIs’ application processes to encourage
IELTS scores to be interpreted in the wider context of the whole applicant.
This could include a bespoke placement test used by an HEI in conjunction
with IELTS and the development of linguistic profiles incorporating
detailed needs analyses.
Foregrounding formative feedback

The co-owners of IELTS should implement appropriate systems for the
provision of feedback on test performance beyond current practices.
This could entail providing the separate band score results for the
four assessment criteria in Speaking and Writing, in addition to the
overall band. This would give direction to underperforming candidates
in what to focus on in further test preparation. In the receptive skills
tests, a breakdown of marks could be specified for the four sections
of Listening and the three Reading passages, again giving candidates
a greater understanding of what went wrong. The test owners
could go further, particularly in Speaking and Writing. It is feasible
for the examiner to provide comments on candidates’ performance
using the public descriptors. These need not necessarily be open-ended
comments written by the examiner. The output could be scaffolded, with
the examiner ticking relevant prefabricated comments on a crib sheet.
This process could be carried out by a second examiner listening to the
recorded test, who could also provide a second rating of the Speaking (and
Writing) result. Dual scoring would help not only through the provision
Conclusion

By capitalizing on individuals' needs to prove their credentials as users of
English, the IELTS partners have accumulated significant global power.
Essentially, unless individuals who aspire to tertiary study abroad or
migration engage with IELTS (or a comparable test such as TOEFL) on
the terms set by the test owners, there are very real limitations placed
upon their lives. Yet, English has become too globalized and IELTS too
hegemonic for test-takers to be expected to submit unquestioningly to
the expertise of the co-owning organizations. This paper offers a range
of proposals to ethically enhance IELTS, according to the principles of
democratization, humanization, and the provision of formative feedback.
The British Council, Cambridge Assessment English, and IDP need to
address the issue of accountability beyond recourse to their expertise in
the theory of language test development and administration. At the very
least, they urgently need to undertake research into the consequential
validity of IELTS and identify methods through which the voices of test-
takers can be incorporated into the design and management of the test.
The resulting information should not be viewed as incidental feedback,
but key to the test’s validity itself. Finally, it must be stressed that IELTS
has vested commercial interests in its continued global expansion and
entrenchment. Consequently, the necessary change proposed in this
article can only occur through pressure brought to bear on the co-owners
resulting from discussion and scrutiny of the design and management of
this powerful, impactful test.
Final version received January 2019
IELTS. 2007. The IELTS Handbook. Cambridge: University of Cambridge Local Examinations Syndicate, The British Council, IDP Australia.
Ingram, D. and A. Bayliss. 2007. IELTS as a Predictor of Academic Language Performance, Part 1. IELTS Research Reports Volume 7. Canberra: IELTS Australia Pty Limited. Available at https://www.ielts.org/teaching-and-research/research-reports/volume-07-report-3 (accessed on 12 February 2019).
McNamara, T. and C. Roever. 2006. Language Testing: The Social Dimension. Malden, MA: Blackwell.
Messick, S. 1996. 'Validity and washback in language testing'. Language Testing 13/3: 241–56.
Moore, T. and J. Morton. 2005. 'Dimensions of difference: a comparison of university writing and IELTS writing'. Journal of English for Academic Purposes 4/1: 43–66.
Ramia, G., S. Marginson, and E. Sawir. 2013. 'Fast growing, diverse: mapping the business of international education' in Regulating International Students' Wellbeing (pp. 39–58). Bristol: Bristol University Press.
2017. 'Improving the academic performance of non-native English-speaking students: the contribution of pre-sessional English language programmes'. Higher Education Quarterly 71/1: 5–32.
Uysal, H. H. 2009. 'A critical review of the IELTS writing test'. ELT Journal 64/3: 314–20.
Wächter, B. and F. Maiworm. 2014. English-Taught Programmes in European Higher Education: The State of Play in 2014. Bonn: ACA Papers on International Cooperation in Education. Available at http://www.aca-secretariat.be/fileadmin/aca_docs/images/members/ACA-2015_English_Taught_01.pdf (accessed on 12 February 2019).

The author

William S. Pearson is a PhD candidate with the University of Exeter's Graduate School of Education. His research interests include candidate preparation for the IELTS test, teacher corrective feedback on L2 writing, and the role of pre-sessional EAP programmes in preparing students for English-medium tertiary study.
Email: wsp202@exeter.ac.uk