
point and counterpoint

Critical perspectives on the IELTS test

William S. Pearson

ELT Journal, Volume 73/2, April 2019; doi:10.1093/elt/ccz006
© The Author(s) 2019. Published by Oxford University Press; all rights reserved.

The number of individuals undertaking IELTS has continued to grow in recent
years, standing at approximately 3 million candidates per year in 2016. As a
result, this high-stakes, high-pressure test has become firmly entrenched as a
global gatekeeping institution, regulating the international flows of people for
migration and academic study. This paper provides critical perspectives on the
design and administration of the IELTS test from the viewpoint of the often-
ignored, yet key stakeholders in IELTS: the test-takers themselves. It argues
that, with the growth in the need for recognized and trusted evidence of English
language proficiency internationally, the co-owners of IELTS have amassed
enormous global power over the lives of millions of people, with considerable
ethical ramifications. It critiques seven features of IELTS and suggests
implementable solutions to enable IELTS to become a more democratic and
humane testing system, imbued with potential for learning, for the benefit of its
test-takers.

Introduction
IELTS sits atop a multimillion-pound global English language testing
industry, one that is firmly rooted in a higher-education sector that is
increasingly globalized and financially driven (Thorpe et al. 2017). As the
owning institutions of ‘a powerful test of a powerful language’ (Hamid
2016: 472), the British Council, Cambridge Assessment English, and
IDP (International Development Program of Australian Universities)
are invested with enormous power to shape the destinies of millions of
people globally. This self-sustaining, non-negotiable power extends to the
overt design of the test tasks, the taken-for-granted assessment criteria
and band score system, and the subtle, normalizing function fulfilled
by the language featured in the test. The wider social, political, and
economic impacts of IELTS on its test-takers, encompassing the concepts
of consequential validity (Messick 1996) and the social dimension of
language testing (McNamara and Roever 2006), have thus far attracted
limited sponsored research from the IELTS partners (e.g. Coleman,
Starfield, and Hagan 2003) and few external empirical studies (e.g. Hamid
and Hoang 2018).
This paper offers critical insights into IELTS beyond the technical,
‘de-human’ assessment concepts of content and construct validity.
It will briefly chart the birth and rise of the test from ELTS (English
Language Testing Service), before examining the role IELTS performs
today. Thereafter, the paper offers a critique of the IELTS approach
to managing mass English language assessment in seven aspects: the
Englishes of the test, idiosyncrasies specific to the Writing module,
test fees, the interpretation of scores, test feedback, the management of
challenges to results, and the retake policy. While it is difficult to dispute
the need for fair, objective, and internationally recognized English
language credentials to predict an individual’s suitability for academia
and migration, such a testing system need not be undemocratic or
inhumane towards its test-takers. As such, the article will outline a range of
recommendations for the co-owners to instil in the test democratic and
humane principles.

The rise of IELTS to To understand the language-testing edifice IELTS has become, it
a monolithic global is necessary to look back to its predecessor, ELTS, and examine the
gatekeeper problems it attempted to solve and those it created. Back in 1980, when
structuralist philosophies of language and language assessment were still
From ELTS to IELTS
hegemonic (manifest in the English Proficiency Test Battery), the British
Council and the University of Cambridge Local Examinations Syndicate
(UCLES) introduced ELTS. A direct, communicative language test, ELTS
was designed to serve as a selection tool for foreign students seeking
to undertake tertiary programmes in UK higher-education institutions
(HEIs) (Hamp-Lyons 2000). The co-owners attempted to inject
authenticity, relevancy, and real-life communication into the test (Taylor
and Weir 2012), pioneering the use of the now-ubiquitous individual
spoken interview and product-orientated writing test. These were assessed
by trained raters using criterion-referenced descriptors, innovative for
the time. An additional key feature of ELTS was its emphasis on ESP,
embodied in five subject-specific (and one ‘general academic’) knowledge
fields, utilized in the Study Skills, Speaking, and Writing subtests. Yet,
this ‘tailoring’ and the test’s length created noticeable logistical difficulties
in administration (Davies 2007), posed challenges for the sustainable
design of content and items (Taylor and Weir 2012), and ultimately
increased costs. As a result, the co-owners opted to simplify the design of
ELTS, eventually abandoning the ESP element in its successor, IELTS,
launched in 1989.
The redesign of ELTS, along with the introduction of a ‘General
Training’ variant for non-academic candidates, turned out to be a shrewd
move for the British Council and UCLES. Social trends following the
end of the Cold War, namely rises in global migration, the increased
willingness of students to internationalize their higher-education
experience (Ramia, Marginson, and Sawir 2013), and the entrenchment
of English as the lingua franca of academia (Dearden 2014; Wächter
and Maiworm 2014) magnified the need for international, standardized,
and trustworthy measures of English proficiency. Equally astute was
enlisting IDP in the ownership, design, and management of IELTS,
helping internationalize the test beyond its parochial British scope.
Annual test-taker numbers rose from 500,000 in 2003 to 3 million
in 2016, which followed a tenfold increase over the preceding decade
(Davies 2007). Information from IELTS itself indicates the test is
currently administered at approximately 1,100 venues in 140 countries
at a rate of up to four times a month and is recognized by over 10,000
organizations (test-users) globally. In a world where individuals are
increasingly required to provide evidence of their credentials, it seems
this figure is only going to continue rising.

The problems IELTS solves and creates
IELTS exists to solve a range of second-language evidence and selection
issues present in the world today. From the depersonalized concepts of
efficiency, manageability, and scale it can be argued that IELTS performs
its role successfully. It provides a test-user and NNES (non-native
English-speaking) test-taker with simplified, easy-to-understand, criterion-
referenced, and time-bound evidence of that person’s English proficiency.
IELTS band scores aid in the efficient processing of millions of higher-
education, migration, and employment applications globally, while
evidence from some studies supports the view that IELTS adequately
predicts test-takers’ abilities to function in an academic environment
(Ingram and Bayliss 2007; Thorpe et al. 2017). Mass language testing
will always pose challenges for the testing organization. The co-owners of
IELTS must therefore be credited for managing what is a complex, large-
scale testing operation posing logistical, resource, and security challenges.
Nevertheless, the growth of IELTS into a secure, self-sustaining, and
financially successful testing system undoubtedly has profound global
political, social, economic, and ethical impacts, which grow in magnitude
with the year-on-year increases in test-taker numbers. IELTS has
permeated spheres for which it was not originally designed. It is used
by employers to assess prospective employees’ ability to function in the
workplace, despite none of its tasks closely corresponding with what an
employee would be expected to do in many jobs. Its band scores are used
as benchmarks for migration even though there is limited research into
the suitability of IELTS to do so. Finally, it increasingly plays a role in
language-selection processes for English-medium tertiary programmes
offered by transnational education partnerships in NNES countries
themselves. There appears to be no independent institution or interest
group that can act as a check on the continued growth and expansion of
IELTS, with consequences for millions of NNES individuals globally.

Test-users as 'accomplices' in the exploitation of test-takers
Considerable power is exercised by institutions utilizing IELTS scores,
particularly universities. IELTS offers non-binding recommendations
on the interpretation of test scores in the admittance of NNES students.
Nevertheless, HEIs have considerable freedom to set their own linguistic
entrance requirements, measured through IELTS. With universities’
economic imperatives to recruit increasing numbers of lucrative, higher-
fee-paying international students, some institutions have engaged in
alleged ‘corner-cutting’, by lowering linguistic requirements (Hyatt 2013;
Thorpe et al. 2017). Perusal of the admission pages of some UK HEI
websites indicates admission for certain postgraduate programmes is
possible with an IELTS score of 4.5 (‘limited’ to ‘modest’ user). Admission
is contingent on the successful completion of a pre-sessional EAP
‘top-up’ course. However, institutions generally do not require candidates
to undertake IELTS as an exit-test to provide evidence of meeting
visa requirements (5.5 in the UK). One may speculate that in-house
assessments offer a more lenient route than IELTS for NNES students.
This casts doubt on the ability of short-term intensive EAP to compensate
for students' linguistic deficiencies, as highlighted in the recent study by
Thorpe et al. (2017). Hence, IELTS plays a crucial role in
enabling HEIs to generate considerable revenue, by empowering them
to recruit linguistically unprepared and underprepared individuals onto
expensive degree programmes and pre-sessional courses.

Consenting victims? The position of IELTS test-takers
Test-takers themselves perform the unwitting role of consenting victims
to the powerful self-perpetuating edifice that is IELTS. As with other
tests, public and institutional trust in IELTS represents an unwritten
social contract between the tester (the dominator) and the test-taker (the
dominated) (Shohamy 2001). Blocks on test-taker participation in the
design and management of IELTS perpetuate their dominated status.
The test’s co-owners, as three large organizations, possess considerably
more power than individual test-takers. Lacking both proficiency in the
language and knowledge of the technical process of language testing,
test-takers are assumed not to possess authoritative and credible voices
on the test. As such, they are highly unlikely to have had any input or
participation in the item design of large-scale tests (Hamp-Lyons 2000),
particularly IELTS. Similarly, candidates are spread disparately around
the world and possess different aims and abilities vis-à-vis IELTS. They
therefore lack the logistical means to assemble and engage in collective
bargaining with the co-owners. Finally, once a test-taker has obtained
their required band score, they are inclined to place trust in the validity of
IELTS, since they occupy the privileged vantage point of having achieved
their required linguistic goals. Thus, this asymmetrical power relationship
is indefinitely self-sustaining, unless change is brought to bear externally.

Critical perspectives on seven features of IELTS

Test design
The Englishes of the listening test
The 'international' aspect of IELTS Listening has come under criticism
for the fact that the test has a tendency to emphasize the linguistic
norms of inner-circle Englishes, particularly those of the United
Kingdom, the United States, and Australia (Uysal 2009). This confers
advantages to candidates from certain linguistic backgrounds which
may be more closely associated with inner-circle norms, such as
Commonwealth countries to British English or Mexico to American English.
Similarly, it appears unjust and unhelpful for candidates to be assessed
using prescriptive language standards of Englishes to which they have
had and will have limited exposure. If IELTS purports to be an interna-
tional test of English, it follows that the co-owners should incorporate
into the test Englishes that are more representative of how the language
is used and by whom internationally.

Idiosyncrasies in the writing test


Although IELTS samples a candidate’s general communicative ability
(Davies 2007), the test features notable idiosyncrasies, particularly in
Writing, which have a washback effect on candidate preparation. Moore and
Morton (2005) demonstrate that IELTS Academic Writing is only partially
representative of the writing undertaken in the academy, having more in
common with the spontaneous ‘public letter-to-the-editor’ genre. Indeed,
IELTS Writing may represent its own distinct written genre. Specific genre
features include the need for an overview in Writing Task 1, the topic-
driven, rhetorically discursive style required in Writing Task 2, and the
reliance on anecdotal evidence to support points in Task 2. Candidates are
likely to lack familiarity with the detailed norms of this writing, particularly

Downloaded from https://academic.oup.com/eltj/article-abstract/73/2/197/5382285 by Indira Gandhi Memorial Library user on 17 May 2019
as IELTS maintains a confidential range of assessment criteria. This may
explain candidates’ perceived need for thorough preparation in approaching
IELTS Writing tasks. Not coincidentally, the co-owners act as purveyors of
commercial IELTS preparation content, in the form of printed materials and
teacher-led courses, exacting further financial resources from test-takers.
It should be acknowledged that some free online preparation materials are
currently provided by the co-owners on the IELTS website, including some
components of The Road to IELTS. However, full access costs approximately
£38 and is free only for candidates booking their test or a preparation course
with the British Council.

Test administration and management

The cost of undertaking the test
The test exacts a notable economic burden on its test-takers, particularly
those who do not achieve their required band scores first time around.
As shown in Table 1, the test fees are high, although they vary significantly
according to the country of administration. China is an especially costly
place to take IELTS, likely owing to Chinese students’ increased willingness
to undertake higher education abroad, especially the UK. There are
also associated costs for test-takers, including transport and possible
accommodation costs (as the IELTS Speaking test can be held on a different
day to the three other subtests). This prolongs the stress and anxiety for
some candidates, with potential consequences for their performance.
Prospective test-takers may feel the need to invest in preparation materials
or classroom courses, further increasing the economic impact of the
test, expanding the revenues of the co-owners, and raising issues of
discrimination based on economic inequality. As an expensive test, IELTS
disproportionately impacts on candidates with lower economic means.
Some individuals do not have the ability to self-fund multiple attempts at
the test, unlike wealthier test-takers. Globally, candidates are likely to
perceive IELTS test fees as poor value, given the limited feedback received,
and as a source of anxiety and stress that impacts negatively on performance.

Table 1: IELTS test fees in local currency and GBP in selected countries

Country                   Test fee in local currency   GBP equivalent
Brazil                    BRL 840                      173
China                     RMB 2020                     222
Egypt                     EGP 2600                     110
Ethiopia                  ETB 5800                     160
India                     INR 12,650                   132
Japan                     JPY 25,380                   172
Poland                    PLN 770                      158
United Arab Emirates      AED 1050                     218
United Kingdom (UKVI*)    GBP 200                      200

Data obtained from https://takeielts.britishcouncil.org as of October 2018.
*For United Kingdom Visa and Immigration purposes.



The interpretation of test scores
The IELTS Handbook (2007) recommends institutions ‘consider a
candidate’s IELTS results in the context of a number of factors, including
age and motivation, educational and cultural background, first language
and language learning history’ (ibid.: 5). Yet, evidence suggests that
university administrators and academics lack familiarity with the test
to interpret the results in this intended fashion (Coleman et al. 2003;
Hyatt 2013). The simplicity and efficiency with which such test scores can
be processed strengthens the perception that IELTS scores are ‘an easy
short cut … concerning admissions to English-medium HE institutions’
(Hall 2009: 327). Thus, test scores are used as ‘hard and fast’ indicators,
weakening claims that ‘you can’t fail the IELTS test’. Although the
partners provide information for test-users to help them interpret IELTS
scores, the testing system is still orientated towards the unquestioned
acceptance of the predictive power of test scores.

Test feedback
With the provision of overall band score results for the four skills only,
IELTS test-takers receive little feedback on their test performance. There
are no band score results provided for performance within the four
assessment criteria of Speaking and Writing. Overall scores may be the
only information a candidate who has achieved their required band score
wishes to know. Nevertheless, candidates who did not achieve what they
needed in one or more subtests are unlikely to gain insights from these
numbers. Test-takers receive no qualitative feedback on their speaking
or writing. It is not indicated to them what might have gone wrong in a
specific subtest, meaning very little about task performance can be learnt,
other than through candidate self-reflection. The ramification is that test-
takers are likely to be unaware whether a single aspect of their writing
contributed to underperformance or whether there were wider inherent
issues: for example, the lack of an overview in Academic Writing Task 1,
which prevents a candidate from achieving above a band 5.0 in Task Achievement.

Querying the results


Given the limited feedback information received by candidates, it is
understandable that many perceive a need to query their results. This
is especially acute for test-takers who fall short of their required band
scores in only one skill. However, the re-mark policy (Enquiry-on-Result,
or EoR) discourages candidates from querying their scores. Disappointed
test-takers can request a re-mark of one or all of the four skills within six
weeks of receiving their result. Test-takers pay an administrative fee
(approximately £60 at UK test venues), which is reimbursed should a
candidate’s band score be upgraded. It is unknown whether requests for
re-marking are driven by perceptions of a lack of fairness or accuracy in
marking the more subjective elements of the test, Speaking and Writing.
Those candidates who are aware that these elements are marked by one
examiner only may feel encouraged to query their score, as they might
perceive the examiner’s impressions were wrong. It is unlikely candidates
are aware of the IELTS examiner Professional Support Network (PSN) or
monitoring procedures, put in place to address reliability issues inherent
in single-marking.



The retake policy
Even though IELTS is not a ‘pass/fail’ test, many candidates do not
meet one or more required subscores in their test. The peculiarities
of the IELTS scoring system adversely impact on ‘failed’ candidates,
especially borderline ones. Test-takers either retake the whole test or have
their ambitions derailed. Test-users usually require candidates to have a
minimum band score, such as band 6.0, in all four components of the
test. Crucially, these must be achieved in a single sitting,
and scores cannot be combined across tests. Thus, if a test-taker achieves
band 6.0 in three subtests but a 5.5 in the other, the three satisfactory
scores are effectively invalidated, and the whole test needs to be
retaken, at further cost. This is understandably frustrating for candidates.
Although it can be argued that test-takers should demonstrate consistent
performance across multiple test attempts, aspects of
IELTS’s design, particularly the topics in Speaking and Writing and the
amount of information provided in Academic Writing Task 1, can alter the
difficulty level, while affective candidate responses (a lack of confidence,
anxiety, etc.) can impact on test performance.
There are few constraints on retaking the IELTS test. Candidates need
only wait 13 days for their results (modified from a 90-day cool-off period
in 2006), a change likely driven by economic imperatives. Additionally,
test-takers can resit the test as many times as they like. As such, Hamid
(2016) speculates that there may be more test-repeaters on a given day
than first-timers. Considering the current policy and the scoring system,
there is a danger that IELTS may become perceived by test-takers as an
income generator for the three partners. This perspective is accentuated
by the lack of feedback provided to candidates to help them identify what
went wrong and rectify the problem(s).

A democratic, humanistic, and formative vision for IELTS

Democratization
Changes are required to the administration and design of IELTS according
to three overarching principles: democratization, humanization, and the
institutionalization of formative feedback. Democratization concerns
protecting and empowering test-takers by enabling their meaningful
participation and involvement in the assessment process (Shohamy
2001). This begins with periodic and large-scale international research
sponsored by the co-owners into the consequential impacts of IELTS on
the lives of test-takers, published for a lay audience. Democratization also
encompasses systems to hold the IELTS co-owners accountable beyond
the technical assessment concepts of content and construct validity. An
independent body needs to be created to provide regulatory oversight. It
could, for example, ensure test fees are set appropriately, gather feedback
from test-takers, provide collective bargaining on their behalf, and even
assume the administrative role of processing EoR checks to ensure greater
transparency. Such an organization could also require HEIs to report on
the success of NNES students relative to IELTS entry scores, to safeguard
test-takers from being exploited by financially minded HEIs offering easy
access to tertiary programmes.
There are also smaller-scale practical solutions to democratizing IELTS.
More high-quality, free-to-access test preparation content is required to
ensure candidates with limited economic means are not disadvantaged.
This could involve offering free access to The Road to IELTS, and the
provision of more freely available practice tests with model responses
at different levels. Additionally, IELTS needs to offer a more interactive
social media presence, especially on Facebook. It is feasible for IELTS
to employ staff to answer candidates’ questions, a void currently
unsatisfactorily filled by privateer teachers and the test-takers themselves.

In terms of the test design itself, despite moves away from RP-dominant
voices present in ELTS, IELTS should democratize the Anglophone voices
in the Listening test, incorporating non-native speaker ones. Such a move
confers greater validity on varieties beyond inner-circle English, emphasizing
the reality of the truly international nature of English today.

Humanization
Humanizing language testing involves making decisions about a test that
are considerate of its impacts on individuals as human beings. In practice,
this concerns instilling human qualities into the testing system, reducing
its perceived severity, being aware of affective responses and trying to
minimize them, attributing genuine concern for people’s well-being
during the test, and making the test meaningful and useful for candidates
(Hamid and Hoang 2018). One practical humanizing solution would be
to allow previously achieved subtest scores to remain valid, so that
candidates could combine scores obtained across different sittings of the
test, within appropriate caveats such as the time elapsed between tests and
the gap between scores. Alternatively, candidates could be offered the opportunity
to sit individual subtests rather than all four skills simultaneously.
A further humanizing measure could be for IELTS to provide a guarantee
to candidates that the Speaking test can be undertaken on the same day
as the three other subtests. This could help reduce the costs and pressure
for individuals travelling far to the test venue. Finally, incentives and
resources need to be built into HEIs’ application processes to encourage
IELTS scores to be interpreted in the wider context of the whole applicant.
This could include a bespoke placement test used by an HEI in conjunction
with IELTS and the development of linguistic profiles incorporating
detailed needs analyses.

Foregrounding formative feedback
The co-owners of IELTS should implement appropriate systems for the
provision of feedback on test performance beyond current practices.
This could entail providing the separate band score results for the
four assessment criteria in Speaking and Writing, in addition to the
overall band. This would give underperforming candidates direction on
what to focus on in further test preparation. In the receptive skills
tests, a breakdown of marks could be specified for the four sections
of Listening and the three Reading passages, again allowing candidates
a greater understanding of what went wrong. The test owners
could go further, particularly in Speaking and Writing. It is feasible
for the examiner to provide comments on candidates’ performance
using the public descriptors. These need not necessarily be open-ended
comments written by the examiner. The output could be scaffolded, with
the examiner ticking relevant prefabricated comments on a crib sheet.
This process could be carried out by a second examiner listening to the
recorded test, who could also provide a second rating of the Speaking (and
Writing) result. Dual scoring would help not only through the provision

204 William S. Pearson


of formative feedback, but in the improvement of candidate trust in the
fairness and reliability of scoring (Uysal 2009), leading to a reduction in
EoR checks. Naturally, such a change would raise administrative costs.
Nevertheless, given the current high test fees, the co-owners should invest
in improving candidates' perceptions of the test as a learning experience
and of its value for money.

Conclusion
By capitalizing on individuals' needs to prove their credentials as users of
English, the IELTS partners have accumulated significant global power.
Essentially, unless individuals who aspire to tertiary study abroad or
migration engage with IELTS (or a comparable test such as TOEFL) on
the terms set by the test owners, there are very real limitations placed
upon their lives. Yet, English has become too globalized and IELTS too
hegemonic for test-takers to be expected to submit unquestioningly to
the expertise of the co-owning organizations. This paper offers a range
of proposals to ethically enhance IELTS, according to the principles of
democratization, humanization, and the provision of formative feedback.
The British Council, Cambridge Assessment English, and IDP need to
address the issue of accountability beyond recourse to their expertise in
the theory of language test development and administration. At the very
least, they urgently need to undertake research into the consequential
validity of IELTS and identify methods through which the voices of test-
takers can be incorporated into the design and management of the test.
The resulting information should not be viewed as incidental feedback,
but key to the test’s validity itself. Finally, it must be stressed that IELTS
has vested commercial interests in its continued global expansion and
entrenchment. Consequently, the necessary change proposed in this
article can only occur through pressure brought to bear on the co-owners
resulting from discussion and scrutiny of the design and management of
this powerful, impactful test.
Final version received January 2019

References
Coleman, D., S. Starfield, and A. Hagan. 2003. The Attitudes of IELTS Stakeholders: Student and Staff Perceptions of IELTS in Australian, UK and Chinese Tertiary Institutions. IELTS Research Reports Volume 5. Canberra: IELTS Australia Pty Limited. Available at https://www.ielts.org/teaching-and-research/research-reports/volume-05-report-4 (accessed on 12 February 2019).
Davies, A. 2007. 'Assessing academic English language proficiency: 40+ years of UK language tests' in J. Fox, M. Wesche, D. Bayliss, L. Cheng, C. E. Turner, and C. Doe (eds.). Language Testing Reconsidered (pp. 73–86). Ottawa: University of Ottawa Press.
Dearden, J. 2014. English as a Medium of Instruction: A Growing Phenomenon. British Council. Available at https://www.britishcouncil.org/sites/default/files/e484_emi_-_cover_option_3_final_web.pdf (accessed on 12 February 2019).
Hall, G. 2009. 'International English language testing: a critical response'. ELT Journal 64/3: 321–8.
Hamid, M. O. 2016. 'Policies of global English tests: test-takers' perspectives on the IELTS retake policy'. Discourse: Studies in the Cultural Politics of Education 37/3: 472–87.
Hamid, M. O. and N. T. H. Hoang. 2018. 'Humanising language testing'. TESL-EJ 22/1. Available at http://www.tesl-ej.org/wordpress/issues/volume22/ej85/ej85a5/ (accessed on 12 February 2019).
Hamp-Lyons, L. 2000. 'Social, professional and individual responsibility in language testing'. System 28/4: 579–91.
Hyatt, D. 2013. 'Stakeholders' perceptions of IELTS as an entry requirement for higher education in the UK'. Journal of Further and Higher Education 37/6: 844–63.
IELTS. 2007. The IELTS Handbook. Cambridge: University of Cambridge Local Examinations Syndicate, The British Council, IDP Australia.
Ingram, D. and A. Bayliss. 2007. IELTS as a Predictor of Academic Language Performance, Part 1. IELTS Research Reports Volume 7. Canberra: IELTS Australia Pty Limited. Available at https://www.ielts.org/teaching-and-research/research-reports/volume-07-report-3 (accessed on 12 February 2019).
McNamara, T. and C. Roever. 2006. Language Testing: The Social Dimension. Malden, MA: Blackwell.
Messick, S. 1996. 'Validity and washback in language testing'. Language Testing 13/3: 241–56.
Moore, T. and J. Morton. 2005. 'Dimensions of difference: a comparison of university writing and IELTS writing'. Journal of English for Academic Purposes 4/1: 43–66.
Ramia, G., S. Marginson, and E. Sawir. 2013. 'Fast growing, diverse: mapping the business of international education' in Regulating International Students' Wellbeing (pp. 39–58). Bristol: Bristol University Press.
Shohamy, E. 2001. 'Democratic assessment as an alternative'. Language Testing 18/4: 373–91.
Taylor, L. and C. J. Weir. 2012. 'Introduction' in L. Taylor and C. J. Weir (eds.). IELTS Collected Papers 2: Research in Reading and Listening Assessment (pp. 1–10). Cambridge: Cambridge University Press.
Thorpe, A., M. Snell, S. Davey-Evans, and R. Talman. 2017. 'Improving the academic performance of non-native English-speaking students: the contribution of pre-sessional English language programmes'. Higher Education Quarterly 71/1: 5–32.
Uysal, H. H. 2009. 'A critical review of the IELTS writing test'. ELT Journal 64/3: 314–20.
Wächter, B. and F. Maiworm. 2014. English-Taught Programmes in European Higher Education: The State of Play in 2014. Bonn: ACA Papers on International Cooperation in Education. Available at http://www.aca-secretariat.be/fileadmin/aca_docs/images/members/ACA-2015_English_Taught_01.pdf (accessed on 12 February 2019).

The author
William S. Pearson is a PhD candidate with the University of Exeter's Graduate School of Education. His research interests include candidate preparation for the IELTS test, teacher corrective feedback on L2 writing, and the role of pre-sessional EAP programmes in preparing students for English-medium tertiary study. Email: wsp202@exeter.ac.uk
