
Journal of English for Academic Purposes 26 (2017) 1–16


Interpreting coded feedback on writing: Turkish EFL students' approaches to revision

Louisa Buckingham a, *, Duygu Aktuğ-Ekinci b
a University of Auckland, New Zealand
b Uludağ University, Turkey

Article info

Article history:
Received 12 May 2016
Received in revised form 8 January 2017
Accepted 10 January 2017

Keywords:
Think-aloud protocols
Second-language writing
Metalinguistic feedback
Correction code
Process approach
Turkey
Error correction

Abstract

This study investigates how 32 Turkish elementary and intermediate-level EFL university students respond to metalinguistic feedback on the first draft of a timed writing assessment. Correction codes were used to indicate problematic linguistic features of each student's text, and students redrafted the text with the assistance of a correction code key (containing model sentences) and a dictionary. Data were compiled through think-aloud protocols, two versions of students' drafted texts, observation notes, and an exit interview. Students' errors were classified as one of four types: morphological, syntactic, lexical, and orthographic (including punctuation). Lexical errors were the most common error type for both proficiency levels, although punctuation errors were the most frequent specific error. Correction codes which required no metalinguistic reflection tended to promote an automatized response from students, while more indirect correction code symbols often resulted in unsuccessful attempts at re-drafting. Students often found English-sourced correction codes difficult to interpret and we question the utility of these in a monolingual setting. At liberty to use their L1 or English throughout, students used Turkish for metalinguistic reasoning and spontaneously made linguistic comparisons between English and their L1. The concurrent verbalization requirement may have prompted greater metalinguistic reasoning, however.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

The expansion of English-medium instruction (EMI) in tertiary institutions across a range of countries, including Turkey, has increased the need for EFL learners to develop a level of English-language writing skills appropriate for academic study. The writing-intensive courses designed to meet this need necessarily have a general EAP rather than discipline-specific focus, and syllabi usually include the teaching of complex sentences, paragraph structure, bibliographic citations and short essay or report writing. While written assessments in university courses in Turkey continue to display a product-based approach to writing (i.e., only one draft is submitted and graded), academic writing-intensive courses tend to follow the process-approach to writing development. As this approach foresees the production of multiple drafts and the provision of feedback, decisions regarding the type of feedback to provide on drafts become pedagogically pertinent. As described in Ellis (2009), feedback techniques,

* Corresponding author.
E-mail address: bucklj@gmail.com (L. Buckingham).

http://dx.doi.org/10.1016/j.jeap.2017.01.001
1475-1585/© 2017 Elsevier Ltd. All rights reserved.

whether direct, indirect or metalinguistic, serve differing pedagogical objectives, and contextual factors further influence the
relative appropriateness and effectiveness of feedback types (Ellis, 2009; Lee, 2008).
This study examines how students respond to unfocused (i.e., not restricted to particular linguistic features) metalinguistic
feedback in the form of correction codes on the first draft of a short written text. Previous studies have shown that the uptake
of feedback may be hindered by processing difficulties. For instance, on occasion students may misinterpret written
corrective feedback; alternatively, the feedback may not prompt them to analyse the linguistic issue identified, but may
encourage only surface-level processing (Han & Hyland, 2015). An important impetus for employing coded feedback is the
expectation that this approach will encourage greater focused metalinguistic engagement during the revision process (Ferris,
2011), leading perhaps in the longer term to greater independent monitoring of linguistic forms during drafting and revision.
However, how learners actually interpret and respond to correction code symbols while revising their text has remained
unexplored.
In an endeavour to address this, we investigate how Turkish EFL university students respond to the instances of coded
errors on the first draft of an assessed text during a timed in-class written assessment. In particular, the study focuses on
students' use of the correction code sheet (CC sheet) when interpreting and responding to the coded corrective feedback.
Think-aloud protocols (TAPs) are employed to access students' reasoning processes during the revisions.
The study addresses the following three questions:

1. How do students interpret coded corrective feedback?
2. How does the CC sheet support students in their revision of language errors?
3. What strategies do students employ to resolve their language errors in addition to using the CC sheet?

1.1. Correction codes and second language writing feedback

Correction codes are a form of guided indirect, metalinguistic feedback on learner writing (Ellis, 2009; Ferris, 2011). In
contrast with direct feedback, where the correct form is provided, indirect feedback prompts learners to review their
knowledge about a particular aspect of language flagged as problematic through the use of underlining, highlighting or
editing symbols. Through identifying an error but withholding an explicit correction, learners are encouraged to develop a
reflective, problem-solving approach to text revision (Ellis, 2009).
Such errors may involve aspects of language already known to the learner but which nevertheless occur due, for example, to
processing demands. When they involve aspects new to the learner, the use of coded feedback may prompt a restructuring of
the learner's language system at a moment in which the learner is required to re-draft or rearticulate their output (Sampson,
2012). However, the intended benefits of coded feedback hinge on the error type being ‘correctable’ at this stage of the
learner's linguistic development. The extent to which using error coding is more effective than other forms of indirect
feedback is uncertain. Ferris and Roberts (2001) found no detectable advantage in using correction codes over simply
underlining errors, for instance.
Previous meta-analyses of research on corrective feedback have signalled that the underreporting of contextual factors
and experimental conditions as well as significant variance (and often weaknesses) in study design preclude any conclusive
understanding of the relative effectiveness of different error correction techniques (Kang & Han, 2015; van Beuningen et al.,
2012). For instance, Liu and Brown (2015) point out that almost half of the studies involving coded feedback included in their
meta-analysis did not stipulate whether students had access to a correction code key during the revision process, a factor highly
likely to affect students' performance. It is unclear, for instance, whether a key was available to students in studies by Storch
and Wigglesworth (2010) and Sachs and Polio (2007).
Regardless, studies have repeatedly shown that language students generally desire corrective feedback from instructors
(Ferris & Roberts, 2001; Lee, 2008; Leki, 1991) and learners appear to appreciate the provision of metalinguistic clues through
error codes (Ferris & Roberts, 2001; Komura, 1999). As Ferris and Roberts (2001) have shown, however, learners' ability to
address coded errors appropriately can vary depending on the error type. The authors suggest that the potentially greater
complexity of syntactic errors means that measures such as the use of codes or underlining may be inadequate as correction
prompts. This finding may, however, have been influenced by the participants' prior language learning experience. Partici-
pants were migrants in an ESL context, and it is likely that previous experience of grammar instruction varied considerably in
terms of quantity and quality. In some cases, their formal knowledge of the grammar system may have been insufficient to
respond effectively to an indirect prompt.
Van Beuningen et al. (2012) also posit that different corrective feedback approaches may suit particular error types. In
their study, second language users of Dutch (children from migrant families) appeared to benefit more from direct than
indirect corrective feedback (using error codes) in the case of grammatical errors. However, the learners' youth (the average
age was 14), and the fact that learners were drawn from secondary school language-sensitive content courses (rather than
specifically foreign language courses), suggests that learners may hitherto have had relatively limited explicit target language
grammar instruction.

Additionally, the correction code system allows teachers to provide individualized feedback to students on multiple drafts
in a time-efficient manner, which is a vital consideration in the context of heavily subscribed EAP courses at Turkish
universities.

1.2. Think-aloud protocols

The principal source of data elicitation employed in this study involves concurrent verbal reports, or think-aloud protocols
(TAPs). Data derived from TAP transcriptions can provide insight into otherwise covert cognitive processes, such as strategies
used in decision-making, or how learners approach a task (Faerch & Kasper, 1987) and process feedback (Storch &
Wigglesworth, 2010).
Sachs and Polio (2007), however, cautioned against the use of TAPs in second language writing studies. Whilst conceding
that such data may provide instructive insights into internal processes, they found the use of verbal protocols to be reactive
(i.e., to potentially affect thought processes). A corollary of this counsel would be to treat any conclusions drawn from such
data as suggestive in nature. Nevertheless, weaknesses in the study's design invite reconsideration of the use of TAPs. In Sachs
and Polio (2007), the ESL students had a wide variety of L1s (no information on other languages known or students'
educational level is provided) and had spent varying lengths of time in the US (ranging from one month to one year). Despite
patent variation in learners' profiles and in their experience of English, all were required to verbalize their thoughts in English.
As Bowles (2010) cautions, the challenge of verbalizing in an L2 can influence the quality of the data, and Sachs and Polio
(2007) acknowledge this in a footnote. In Storch and Wigglesworth (2010), no mention is made of the language of verbal-
ization or L1 language use. The reader may assume only data in English was taken into account due to the ESL context and the
varied L1 backgrounds of the participants. Explicit reflection on how this might have affected the study's validity was absent.
That multilingual people use all language resources at their disposal when undertaking a cognitive task is well known. The
think-aloud data in Armengol and Cots (2009) and Woodall (2002), for instance, attest to the strategic functions of trans-
languaging practices during the composition or revision of texts, whilst myriad further studies document the translanguaging
literacy practices of multilinguals (see Creese & Blackledge, 2015).
As the verbalization of thoughts may be edited or suppressed during the think-aloud process (Wigglesworth, 2005),
combining concurrent verbalization with retrospective insights may be advisable (Bowles, 2010). The use of visual data or
observation notes, compiled while participants are engaged in the task, may aid in probing the participants' thought pro-
cesses retrospectively. The use of an additional data collection technique such as these may strengthen the reliability of the
study design.

2. Methods

2.1. Setting

The study was conducted at the School of Foreign Languages (SFL) (or Yabancı diller yüksek okulu) at a large state university
in northwestern Turkey.1 The SFL has around 2000 students enrolled annually, who receive intensive EFL courses
(between 22 and 26 hours weekly) in preparation for the English language exam. Entrance to selected undergraduate pro-
grammes in the sciences and humanities requires a level of B1 (CEFR), as 30% of courses in the degree plan are English
medium. The writing section of the exam constitutes 20% of the final grade, and requires students to produce a short essay of
at least three paragraphs on one of two topics.
Elementary and pre-intermediate levels receive five hours of writing instruction weekly while intermediate-level students
receive four. The writing classes at each level use a textbook series2 that encourages attention to form and vocabulary for the
purposes of drafting texts in an academic context.
In each 14-week semester, six writing assessments are undertaken in each level: two quizzes (weeks 5 and 10), two
writing assessments (weeks 6 and 12) and two mid-term exams (weeks 7 and 14). The 25-minute quizzes comprise exercises
of a mechanical nature similar to the exercises in the textbook.3 For the 30-minute in-class writing assessments, students
select one of two topics and write a 100–120-word paragraph without consulting resources. The writing prompts are
intended to elicit either a descriptive or an opinion text (the two text types are taught in the textbook), and students receive
unfocused coded corrective feedback on their language-related problems using the correction code employed in all courses at
the SFL. Students are graded on the second draft, which they produce several days later as an in-class 30-minute assessment
with the aid of a dictionary4 and the CC sheet (see Appendix A). Each code symbol on the sheet is accompanied by a sample
sentence of the coded error and the corrected sentence. As the exercise involves re-drafting the original text rather than

1. The university comprises 13 faculties, two schools, 15 vocational schools and a number of academic institutes. Student enrolment surpassed 63,000 in the 2015/2016 academic year, which included just over 6000 graduate students.
2. The ‘Great Writing’ series by National Geographic Learning.
3. Such tasks included ordering sentences in a paragraph, inserting an appropriate topic sentence, or creating a complex sentence from two independent clauses.
4. A monolingual or bilingual dictionary may be used.

Fig. 1. Writing assessment rubric used at the SFL.

purely error correction, students have the choice of attending to their errors or omitting or paraphrasing the problematic
passage. This re-drafting task is intended to develop students' compensation strategies, as elucidated by Oxford (1990), in the
sense that students are expected to use the available clues (i.e., the coded corrective feedback) to formulate informed guesses.
Finally, the mid-term exams, lasting 50 min, involve both mechanical tasks (similar to the quizzes) and an opinion essay. Fig. 1
displays the grading rubric used by all instructors for the paragraph and essay writing assessments.
Instructors employ a standardized set of codes provided by the SFL; the symbols are introduced in class in the third week
and are employed throughout the semester. While the correction code system (i.e., either the symbols or the error categories)
used in this study may differ from those used in other studies (e.g., Sampson, 2012), such systems are in essence very similar.
For the ecological validity of this study, it was considered important to use the set of codes employed by instructors at this
institution.
The second author is an instructor at the SFL and is intimately familiar with the syllabus at the three different proficiency
levels and the proficiency exam requirements. She was not the instructor of the participants of this study, however.
The SFL takes a process-approach to developing students' writing skills in class. This is reflected in class activities and in
the textbook, which teaches and practises the pre-writing stages of brainstorming, planning, drafting and revising. Writing
assessments primarily evaluate the product, however, and no additional time is provided during assessments for pre-writing
activities. The assessment task instructions do not remind students to undertake such preparatory work before drafting;
rather, students are expected to follow the procedure previously practised in class. The only reflection of the process-approach
is in the requirement for two drafts in the writing assessment and the provision of implicit corrective feedback on the first
draft.

2.2. Participants

The second author provided a written and verbal description of the study in Turkish to all students in ten elementary
classes and six intermediate classes5 (the lowest and highest proficiency levels at the SFL) and requested volunteers. Students
were not informed that the objective of the study was to understand how students interpreted and responded to correction
symbols, however. An equal number of volunteers was secured at each level: 16 elementary and 16 intermediate-level
students6. Aged between 18 and 22, all were native speakers of Turkish and spoke only Turkish at home; English was
their only foreign language. No students were repeating the preparatory programme.
In accordance with the research ethics stipulations of our university7, written informed consent was obtained from both
the SFL administration and from the individual students.

2.3. Instruments and research design

The data for this study are derived from five sources. The first source comprised the students' first draft of the in-class
writing assignment. The second source was the coded corrections inserted into the draft text. The third source involved
the students' second (revised) draft. The fourth data source comprised the recorded think-aloud sessions conducted while
students revised their original text, and the observation notes taken by the second author during the session. The final source
of data involved interviews held with each student immediately upon conclusion of the TAP session.
Measures were taken to ensure the participants' familiarity with the task requirements and procedures. The format of the
task sheet and the task instructions of the writing assignment were identical to those used by SFL instructors for the writing

5. In this semester, there were 28 elementary classes and 6 intermediate classes. A higher number of elementary classes were approached in this initial phase, as these students may have lower levels of confidence in their English-language ability and be less likely to volunteer than students at the intermediate level.
6. The equal number of volunteers in each level occurred purely by chance.
7. This study was undertaken while both authors were affiliated to Bilkent University in Ankara.

exam, with the exception of the omitted university logo. The second author conducted individual think-aloud training ses-
sions (described below) with participants, and provided instructions on how to verbalize, following recommendations from
Bowles (2010).

2.3.1. Think-aloud protocol training


Two weeks before students revised their draft texts, the second author conducted a training session with each participant
lasting approximately 15 min to familiarize them with the TAP process. The training session began with a video (8 min), which
the authors prepared in order to demonstrate the think-aloud procedure (see Appendix B). In the video, a person is shown
revising a coded paragraph and concurrently verbalizing her thoughts in Turkish. Subsequently, the student received a
paragraph (see Appendix C), modelled on typical student writing, in which language errors (grammatical, lexical and
punctuation-related) had been flagged using correction code symbols. The student was requested to make corrections to the
text with the help of the CC sheet and a dictionary, while simultaneously thinking aloud as seen in the video. The second
author sat across the room and, during periods of silence, prompted students to continue verbalizing by raising a sign with a
brief prompt in Turkish (e.g., ‘think aloud’ or ‘why?’). Students had the opportunity to ask questions regarding the TAP
procedure upon completing the revision of the text. All the training TAPs were audio recorded; the authors later reviewed
these to check that each participant had understood the procedure. As the second author and the students were previously
unacquainted, the training sessions became an opportunity to establish rapport and build students' confidence. Data from this
training session were not used in the study.

2.3.2. Students' first and revised drafts


The procedure for the writing task in this study followed the steps implemented for in-class writing assessments during
the course. This stage was implemented in a room at the SFL (i.e., in the students' usual environment) one afternoon after
normal class hours towards the end of the semester. The participants were asked to select one topic from the two provided
(these were similar to their usual course assessment topics) and write a paragraph of between 100 and 120 words (see
Appendix D for an example prompt). They had 30 min to write the first draft unaided. The second author collected the texts
and subsequently provided unfocused feedback in the form of correction codes using the standard code categories employed
by the SFL (see Appendix A).
The reliability of the coding of students' errors on the first draft was verified as follows. Four student texts were randomly
selected (two drafts from each level) and a second instructor at the SFL (with experience teaching writing at both levels)
provided corrective feedback on linguistic errors using the error correction code. An inter-coder reliability rate of 97% was
found. Differences in how an error was coded (such as the use of ‘wrong word’ or ‘unclear’) were subsequently resolved
through discussion.
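The percent-agreement calculation behind the inter-coder reliability figure can be sketched as follows. This is a minimal illustration only; the correction codes and coder lists below are hypothetical examples, not the study's actual data.

```python
def percent_agreement(coder_a, coder_b):
    """Share of flagged errors to which both coders assigned
    the same correction code, expressed as a percentage."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must code the same list of errors")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return 100 * matches / len(coder_a)

# Hypothetical codes assigned by two coders to the same ten flagged errors
coder_a = ["VF", "S/PL", "WW", "P", "AG", "SP", "PR", "WO", "T", "C"]
coder_b = ["VF", "S/PL", "WW", "P", "AG", "SP", "PR", "WO", "T", "WW"]

print(percent_agreement(coder_a, coder_b))  # 90.0
```

Disagreements (here, the final item, coded ‘C’ by one coder and ‘WW’ by the other) would then be resolved through discussion, as described above.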
The second author then arranged times with each student to revise their text; these revision sessions took place over a period of two days, a few days after completion of the first draft. This stage was conducted in individual 30-minute sessions8 in the presence of the
second author, and students were given the CC sheet and a dictionary as aids. Students were asked to verbalize all
thoughts in either Turkish or English or both languages. During the revision, the second author sat across the room to ensure
students verbalized their thoughts throughout, provide reminders by raising a card with a brief instruction during periods of
silence, and take observation notes. These sessions were audio recorded. The duration of each re-drafting session varied
depending on the number of coded errors each student had to address and whether the student read through their text again
before re-drafting, or simply focused immediately on the coded errors. According to the observation notes, only two students
(one at each proficiency level) read through the entire text before starting to re-draft.

2.3.3. Exit interviews


Each student was interviewed immediately after completing the second draft to inquire into the utility of receiving coded
feedback for the purpose of revision, and to gain a retrospective account of their thought processes at points where they had
paused in their verbalization. Each interview lasted around five minutes and was recorded. The students were provided with
their first and revised drafts to assist recall.

3. Results

Each student produced a first draft of between 100 and 120 words. Intermediate-level students made a higher number of drafting errors (M = 14.31, SD = 3.30) than elementary students (M = 12.5, SD = 5.30); however, greater variance was found in the elementary-level group. Tables 1 and 2 display the frequencies of error types across the four categories. Lexical errors, which involved a wide range of error forms, were the most numerous error type flagged on students' first drafts. These were followed by morphological errors, which most commonly involved verb form, singular/plural, and agreement.

8. While the act of verbalizing is considered potentially to lengthen the time required for task completion (Bowles, 2010), it was deemed unnecessary to extend the time, as students frequently finish within 15 min when undertaking in-class revisions. The instructions provided to students in this study thus did not vary from those received in their usual classes.

Table 1
Morphological and lexical errors.

Morph.   Ele.       Int.        Lexical   Ele.       Int.
T        3 (5%)     1 (2%)      ʌ         16 (22%)   1 (1%)
VF       16 (29%)   9 (20%)     WF        7 (9%)     13 (12%)
S/PL     20 (36%)   11 (24%)    WW        9 (12%)    20 (19%)
AG       6 (11%)    24 (53%)    PR        14 (19%)   33 (31%)
AR       11 (20%)   1 (2%)      /         17 (23%)   26 (24%)
                                INF       11 (15%)   12 (11%)
                                R         0          2 (2%)
Total    56         45          Total     74         107

Note: ‘Ele.’ = elementary level; ‘Int.’ = intermediate level.

Table 2
Syntactic and orthographic errors.

Syntactic   Ele.       Int.       Orthographic   Ele.       Int.
?           17 (71%)   12 (80%)   C              10 (22%)   11 (18%)
WO          5 (21%)    3 (20%)    /)             2 (4%)     2 (3%)
Frag.       2 (8%)     0          SP             5 (11%)    9 (5%)
                                  P              28 (62%)   40 (65%)
Total       24         15         Total          45         62

The remainder of this section will discuss the approaches used by students to revise each of the four error categories while
redrafting. To identify the approaches, the following documents were examined: the students' two drafts, the TAPs, and the
observation notes. Information from the exit interviews also contributed to clarifying the authors' interpretation of a stu-
dent's response to the error codes.

3.1. Morphological errors

Table 3 displays the number of occasions that students in each level were successful in redrafting the section of text with a
coded error (column 1), the occasions that they used the CC sheet while redrafting that section of text (column 2), and the
instances when they used the CC sheet and succeeded in redrafting the section of text appropriately (column 3).
Both elementary and intermediate students employed one of two approaches when responding to the coded morpho-
logical errors: either they revised their text without consulting the CC sheet, or they used the CC sheet to help them revise.
While the majority of students consulted the CC sheet when attempting to correct an error, this percentage was higher in the
case of elementary students (68% attempted to use the CC sheet, compared with 53% of the intermediate students). From the
TAPs, it was clear that sample sentences on the CC sheet assisted the students in interpreting the symbol and undertaking the
correction. As can be seen in Examples 1 and 2, the CC sheet prompted an explicit metalinguistic reflection from the student,
and in some cases, as can be seen here, encouraged the student to make a spontaneous comparison to Turkish (Example 2), or
draw on her knowledge of set phrases (Example 3).9

Table 3
Morphological errors.

               Correction of Error      CC Use                 CC Use & Correct.
               Yes        No            Yes        No          Yes        No
Ele. (n = 56)  45 (80%)   11 (20%)      38 (68%)   18 (32%)    34 (89%)   4 (11%)
Int. (n = 45)  37 (82%)   8 (18%)       24 (53%)   21 (47%)    22 (92%)   2 (8%)

Note: n = total number of errors made by students at this level. For a breakdown of error types, see Table 1.

9. All examples from the TAP transcription are in Turkish (translated by the second author), except the underlined text in quotation marks, which the student produced in English. Brief descriptions of actions the student was observed to take and clarifications appear in italics in brackets.

Example 1 (S3 – Ele.):

The related transcription from the TAP:


“Maybe I will travelling.” I made a “verb form” mistake here [the student had a quick look at the correction code], as I have done before. The correction should be “I will travel”. I don't know how I made this mistake here, because I wrote “will” and the bare infinitive is needed here, not the “-ing” form. “I will travel”.

Corrected sentence on the final draft:


Maybe I will travel (…)

Example 2 (S29 – Int.):

The related transcription from the TAP:


“it is”, AR, what was this? [The student consulted the code.] Hmm, there's an article mistake. I have to write “a, an, the”. “It
is a rainy day” because it is not like in Turkish, so article mistakes are the ones that I make the most. We have to write “a,
an, the” there.
What is that? VF? [The student consulted the code.] Hmm, there is a mistake with this thing, umm, it should be “you start
reading” because it [the verb] comes after a verb. That [the verb] can't be there, because there are two verbs together.

Corrected sentence on the final draft:


… it is a rainy day and you are at home. Then you sit opposite the window and you start reading.

Example 3 (S5 – Ele.):

The related transcription from the TAP:


“I think the best way” – umm, “doing some exercise”, umm [the student was looking at the code] “ways”, “singular – plural” [the student read the meaning of the symbol and the sample sentences], so “way” should be “singular” because “the best way”, “only one way”.
For “do”, I made a “verb form mistake” [the student quickly had a look at the sample sentence], so it should be “doing
exercise”.

Corrected sentence on the final draft:


I think the best way of escaping stress is doing exercise.

On many of the occasions when students successfully made a correction without consulting the CC sheet at all (in this case, two elementary and nine intermediate-level students), the student independently generated her own metalinguistic explanation for the error when interpreting the error code on the draft (see Example 4).

Example 4 (S4 – Ele.):

The related transcription from the TAP:


“Such as reading a book”, I guess. Because the verb here needs to be a “noun”, “reading a book” is a noun.

Corrected sentence on the final draft:


Such as reading a book.

3.2. Lexical errors

Lexical errors constituted the most common error type among both groups. Participants managed to correct most errors,
although the success rate was lower among the elementary students (see Table 4). Students displayed one of three

Table 4
Lexical errors.

                Correction of Error      CC Use                 CC Use & Correct.
                Yes        No            Yes        No          Yes        No
Ele. (N = 74)   58 (78%)   16 (22%)      24 (32%)   50 (68%)    21 (88%)   3 (12%)
Int. (N = 107)  99 (93%)   8 (7%)        69 (64%)   39 (36%)    62 (90%)   7 (10%)

approaches to resolving the error: they attempted redrafting without using the CC sheet, they used the CC sheet, or they consulted the dictionary.
Elementary students consulted the CC sheet less frequently. This omission occurred particularly in the case of errors which
could be corrected automatically (e.g., the ‘/’ symbol), and in the case of symbols which did not clearly indicate how the error
could be resolved (e.g., the ‘WW’ or ‘PR’ symbols). The most common error symbols that were addressed without resorting to
the CC sheet were ‘/’ (all 17 instances for elementary students and all 26 instances for intermediate), and ‘^’ (12 out of 16
instances). The fact that a symbol rather than an abbreviation was used may have made the code more self-explanatory. As
the symbol ‘/’ required no additional cognitive engagement from the students beyond simply deleting the word when
re-drafting, consultation of the model sentences on the CC sheet appeared superfluous. On one occasion, the student
undertook the required correction despite believing her original version had been correct (Example 5). Five elementary and eight
intermediate students automatically corrected the ‘/’ code without any apparent reflection or consultation. Examples 5 and 6
illustrate the automatized approach of some students to making a correction in such cases. No evidence of metalinguistic
analysis occurred in the TAPs and the correction symbol did not appear to prompt any consciousness raising of an underlying
rule or linguistic convention, or a restructuring of their internalized language system. This was not always the case, however,
and in some instances a degree of explicit reasoning was manifested (Example 7).

Example 5 (S27 – Int.):

The related transcription from the TAP:


“People do something for the”, umm I think “the” is correct, but the teacher deleted it. So, not “for the escaping”, but
“for escaping”. For the other sentence, the teacher deleted “the” again, but in my opinion, “the” has to be there. As the
teacher deleted it, I'll delete it. The teacher said it was unnecessary.

Corrected sentence on the final draft:


People do something for escaping stress and difficulties of modern life. In my opinion, spending time with friends is the
best way of reducing stress.

Example 6 (S20 – Int.):

Corrected sentence on the final draft:


I can design it like I want.

Example 7 (S25 – Int.):

The related transcription from the TAP:


“The” is unnecessary here, because after “of”, the verb comes in the “-ing” form. We changed the verb into a noun.

Corrected sentence on the final draft:


(…) best way of escaping stress, …

When consulting the CC sheet, students at times explicitly noticed, or became conscious of, a linguistic feature in the
act of correcting, as the TAPs revealed (see Example 8). This occurred in the case of the ‘INF’ code. According to the TAP
transcripts, seven students (three elementary and four intermediate-level) appeared to become aware, while using the
CC sheet to respond to the coded error, that contractions constituted informal language in a university context.

Example 8 (S2 – Ele.):

The related transcription from the TAP:


“People shouldn't stay” [pause while the student read the sentence silently], is this [the symbol] “W”, teacher? Or is it,
umm [the student looked at the code], aha! “Informal”, “I don't think that's true, I do not” [the student read aloud the
sample sentence on the CC sheet]. Hmm, okay. “People should not”. I've learned “informal” now; I mean I learned it at this
moment. I never noticed it before.

Corrected sentence on the final draft:


In my opinion, people should not stay in the same city.

With regard to the third approach employed by students, TAPs and observation notes revealed that students used their
dictionary for errors involving incorrect word choice (e.g., Example 9) or incorrect word class.

Example 9 (S14 – Ele.):

The related transcription from the TAP:


“and we focus”, what was “PR” [the student consulted the code], “preposition” mistake. Hmm, what did I write there? “to
other people's lives”, yeah the dictionary is important! [The student consulted the dictionary.] Is “focus” used with “of”?
Should I use “on”? Oh, I have to use “on”.

Corrected sentence on the final draft:


(…) and we focus on other people's lives.

3.3. Syntactic errors

As displayed in Table 5, elementary-level students were much more likely to consult the CC sheet than intermediate-level
students. As in the case of lexical errors, students employed three strategies during the revision of syntactic errors: they
attempted redrafting without using the CC sheet, they used the CC sheet, and they consulted the dictionary.

Table 5
Syntactic errors.

                 Correction of Error     CC Use                 CC Use & Correct.
                 Yes        No           Yes        No          Yes        No

Ele. (N = 24)    14 (58%)   10 (42%)     18 (75%)   6 (25%)     13 (72%)   5 (28%)
Int. (N = 15)    12 (80%)   3 (20%)      3 (20%)    12 (80%)    2 (67%)    1 (33%)

This section of the CC sheet contained three error types only. For both levels, the symbol ‘?’ (unclear) was the most
frequent (elementary level: 17 out of 24 errors; intermediate level: 12 out of 15 errors). For elementary-level students, this
error type proved more difficult to resolve, presumably because of the seemingly infinite range of issues potentially referred
to by the symbol ‘?’ and the absence of sample sentences on the CC sheet. The majority of errors (10 out of 17) were not
corrected appropriately by students of this level. Intermediate students were, however, more successful in this regard,
correctly addressing 10 of the 12 errors. Interestingly, whilst the elementary-level students usually consulted the CC sheet for
this error code (in 11 out of 17 instances), only one instance of this occurred in the intermediate-level group. This may be
because the vast majority of syntactic errors made by intermediate-level students were flagged with the ‘?’ symbol (see
Table 2) and the CC sheet does not provide a model sentence for this error code. The students may thus have known that the
CC sheet would not provide further assistance during redrafting. When confronted with the ‘?’ symbol, some students chose
to consult the dictionary (two elementary and one intermediate-level student) for example sentences with the target word or
(in bilingual dictionaries) the translation.

3.4. Orthographic and punctuation errors

These more mechanical features of writing appeared to be corrected with relative ease (see Table 6). In fact, this error type
appeared to be resolvable in most instances regardless of the use of the CC sheet. This may be due to the very limited array of
correction options for punctuation (‘P’) or capitalization (‘C’) errors, the most frequent error types for both levels.
Students employed the same three approaches to resolving errors as in the previous two sections. Perhaps unsurprisingly,
the use of the dictionary was reserved for errors marked ‘SP’ (orthographic errors) (see Example 10). The remaining error
types (‘C’ capitalization, the combine-words symbol, and ‘P’ punctuation) were resolved either by directly correcting the
flagged problem or by consulting the CC sheet.

Table 6
Orthographic and punctuation errors.

                 Correction of Error     CC Use                 CC Use & Correct
                 Yes        No           Yes        No          Yes        No

Ele. (N = 45)    40 (89%)   5 (11%)      35 (78%)   10 (22%)    29 (83%)   6 (17%)
Int. (N = 62)    61 (98%)   1 (2%)       46 (74%)   16 (26%)    45 (98%)   1 (2%)

Example 10 (S27 – Int.):

The related transcription from the TAP:


“The second one is the best choise”, we wrote this wrong, “SP” – there is a mistake with the “spelling”, yes. The
dictionary [the student consulted it]. What I meant was “choosing”, no, actually “choice”, I wrote it wrong. Let me check
it, hmm, “choice”, yes, we wrote it wrong.

Corrected sentence on the final draft:


I believe the second one is the best choice because (…)

3.5. Student exit interviews

Upon completing the revision session, the second author interviewed each participant individually in Turkish. In addition
to questions about their impression of the utility of using the correction code system, students were also requested to explain
pauses in their verbalization which appeared pertinent to understanding the correction process. The students had their first
and revised drafts beside them during the interview to prompt their recollection of specific examples from their own drafting
experience.
All students were in favour of receiving coded feedback on their first draft, and they perceived the use of the CC sheet
during revision as very helpful, regardless of proficiency level. This view is supported by the examples of students engaging
in a metalinguistic analysis of their errors, as observed in the TAPs. As the comments below illustrate,10 the CC sheet appeared
to provide an appropriate level of guidance to students during the revision process.
(S29 – Int.): I consult the code for all the mistakes. I don't need to memorize the symbols because we can use it while
writing our final drafts. Consulting the code for the mistakes is easier than memorizing all the errors. I think consulting
the correction code sheet for our errors is fun. We look at the code, and then we correct our mistakes. It's fun.
Two students also remarked on the value of the correction code as a pedagogical tool for teachers:
(S23 – Int.): Giving feedback with symbols is easier for you as well, because if the code hadn't been invented, you would
have to explain our errors in many words instead of these symbols. So, using the correction code is very good.
While the sample sentences were considered advantageous by all, five students cited two particular benefits: the sentences
clarified the meaning of the symbols, and they prompted students' thinking. Relatedly, four students specifically
mentioned that the absence of sample sentences created difficulties in attempting to respond to the ‘?’ symbol.
Further challenges in using the CC sheet included locating the symbol quickly (mentioned by five students),
the limited number of sample sentences (mentioned by two students), and confusion between similar symbols such as ‘S/PL’ and ‘SP’
(mentioned by two students). Unprompted, students volunteered solutions for these issues, such as the use of colours for
different sections of the CC sheet and arranging symbols in alphabetical order.

4. Discussion and conclusion

When addressing most error types during the redrafting process, students made use of the CC sheet on the majority of
occasions. For all four error categories, in the vast majority of instances in which students consulted the CC sheet the
redrafting attempt was successful. In many instances (such as the case of lexical errors), the revision appeared more likely to
be successful if the CC sheet was consulted. Students also confirmed in the exit interviews the usefulness of this form of
revision support.
Students tended to read the sample sentence on the CC sheet and compare this with their own sentence. The think-aloud
transcripts showed that they sought metalinguistic clues while doing this. While it is clearly impossible to provide a model
sentence that would furnish the pertinent clue for any given error, students appeared to notice features such as word class (or
word morphology) and word order and use such clues to generate a range of potentially correct options. This supports
Sampson’s (2012) affirmation that the additional explanatory information (or illustrative examples) helps students refine
their output. It appears that consulting this resource helps stimulate learners' explicit metalinguistic reasoning. The CC sheet
thus appears to aid the consciousness raising of linguistic rules or patterns, which students have already acquired but which
they do not consistently apply.

10 Translated from Turkish by the second author.

In this sense, the use of the CC sheet appeared to improve the quality of students' noticing, in that it prompted the type of
linguistic reflection that assisted students in addressing the error. Qi and Lapkin (2001), in their analysis of think-aloud data
during text revisions, refer to such instances as ‘substantive noticing’, which they define as the type of cognitive behaviour
that is likely to lead to an improved resolution of errors during re-drafting.
Admittedly, however, the requirement to ‘think aloud’ may have been reactive and thus contributed to this display of
explicit reasoning. Analogously, the written prompts students received during periods of silence may have stimulated
students' thought processes. In particular, the prompt ‘Why?’ may have encouraged students to formulate a justification they
had not explicitly entertained at that instant. The formulation ‘What are you thinking?’ may have been more suitable.11
Both aforementioned factors (the relative utility of the CC sheet and the potential reactivity of the verbalization require-
ment) would need to be examined in a subsequent study with an experimental design.
From the observation notes and TAPs, it appeared that deeper metalinguistic processing of feedback occurred when the
challenge implied by the coded feedback was at an appropriate level of complexity. At opposite ends of the
complexity continuum were the codes ‘/’ and ‘?’. The former appeared to prompt no more than surface-level processing
(that is, no language-related episodes occurred and resources were not consulted); in the latter case, the code flagged
errors of a more complex nature, and the absence of adequate metalinguistic clues on the CC sheet was clearly a hindrance
to elementary students in particular. As cautioned by Guenette (2007), for feedback to be effective, students need both to
understand it and to be capable of undertaking an appropriate response. Neither condition appeared to be fulfilled in this
case. As evidenced by the transcripts, the quality of the reflection prompted by both codes was comparatively low.
In the case of ‘/’, the correction was often undertaken mechanically, without any verbalized language-related analysis.
On several occasions, learners obeyed the prompt without understanding the rationale.
The use of dictionaries, consulted primarily for lexical and spelling errors, usefully complemented the prompts on
the CC sheet during in-class revision. Students invariably attended to the model sentences contextualizing the language
feature and, judging from this attention, appeared to perceive them as providing clues pertinent to the revision
task.
Several pedagogical implications may be derived from the foregoing discussion. Firstly, the effective use of resources such
as the CC sheet and a dictionary during an in-class assessed re-drafting exercise appears to help students improve their
performance, and in the case of the former, stimulate linguistic reflection. If students are encouraged to use such resources
during assessments, they may be more motivated to work on refining their use of these under non-exam conditions. As
dictionary use is also taught in class (and students are expected to use dictionaries independently throughout their degree
programmes), permitting the consultation of this resource during assessments is pedagogically compelling.
As previously discussed, not all error codes were likely to stimulate ‘substantive noticing’. The use of codes which prompt a
perfunctory response (e.g., ‘/’), or conversely those which do not provide sufficient guidance (e.g., ‘?’) should be reconsidered.
In the former case, placing the code in the margin might stimulate greater linguistic reflection on the respective sentence,
while in the latter case, a prompt requesting learners to reformulate the whole sentence may prove more constructive.
In consonance with Qi and Lapkin (2001), we consider that the act of verbalizing the linguistic rationale behind corrections
undertaken during re-drafting encourages students to review their linguistic knowledge. This itself may have longer term
learning benefits. Teachers may encourage the use of this learning strategy by requesting a brief (written) explanation for the
revision of selected errors in a re-drafting task.
Drawing on our observations relating to L1 language use during this study, two further recommendations may be made.
Despite regular in-class exposure to correction codes during coursework and assessments, students still found interpreting
individual codes to be challenging, and this was confirmed during the exit interviews. Convention aside, there appears to be
no basis for the use of abbreviations of English words as CC symbols. Indeed, students appeared to interpret codes more
readily when these were abstract symbols rather than abbreviations. It thus seems reasonable to suggest that (where
students share an L1) equivalents in the students' L1 replace English-based symbols. At the very least, future work may
consider incorporating this variable into the study design, as to date students' preferences and the relative efficacy of
using students' L1 for correction codes have not been considered.
The use of the L1 clearly also supported the verbalization task. The TAP transcriptions and observations revealed that many
students used Turkish for language-related episodes while revising, such as when seeking the appropriate lexeme or
morphological form in English. The strategy appeared to help clarify their intended meaning and their understanding of
morphosyntactic and semantic contrasts. This supports previous findings in Woodall (2002) regarding the use of learners' L1
as a pedagogical aid during composition tasks. We strongly recommend that study participants be at liberty to use their L1
when attempting to articulate cognitive processes or affective states during verbal protocols (whether concurrent or
retrospective), and arguably also during interviews.

11 We are grateful to a reviewer for this insight.

Appendix A. Error correction code

Appendix B. Screenshot of the training video



Appendix C. Sample paragraph for think-aloud training

Appendix D. Writing prompt example

Topic 1:

People have different ways of escaping the stress and difficulties of modern life. Some read, some exercise, others work in
their garden. What do you think are the best ways of reducing stress? Use specific details and examples in your answer.

References

Armengol, L., & Cots, J. M. (2009). Attention processes observed in think-aloud protocols: Two multilingual informants writing in two languages. Language
Awareness, 18(3–4), 259–276.
Bowles, M. A. (2010). The think-aloud controversy in second language research. New York: Routledge.
Creese, A., & Blackledge, A. (2015). Translanguaging and identity in educational settings. Annual Review of Applied Linguistics, 35, 20–35.
Ellis, R. (2009). A typology of written corrective feedback types. ELT Journal, 63, 97–107.
Faerch, C., & Kasper, G. (1987). Introspection in second language research. Philadelphia: Multilingual Matters Ltd.
Ferris, D. R. (2011). Treatment of error in second language student writing (2nd ed.). Ann Arbor, MI: The University of Michigan Press.
Ferris, D. R., & Roberts, B. (2001). Error feedback in L2 writing classes: How explicit does it need to be? Journal of Second Language Writing, 10, 161–184.
Guenette, D. (2007). Is feedback pedagogically correct? Research design issues in studies of feedback on writing. Journal of Second Language Writing, 16,
40–53.
Han, Y., & Hyland, F. (2015). Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. Journal of Second Language
Writing, 30, 31–44.
Kang, E., & Han, Z. (2015). The efficacy of written corrective feedback in improving L2 written accuracy: A meta-analysis. The Modern Language Journal,
99(1), 1–18.
Komura, K. (1999). Student response to error correction in ESL classrooms. Unpublished master's thesis. Sacramento: California State University.
Lee, I. (2008). Understanding teachers' written feedback practices in Hong Kong secondary classrooms. Journal of Second Language Writing, 17(2), 69–85.
Leki, I. (1991). The preferences of ESL students for error correction in college-level writing classes. Foreign Language Annals, 24, 203–218.
Liu, Q., & Brown, D. (2015). Methodological synthesis of research on the effectiveness of corrective feedback in L2 writing. Journal of Second Language
Writing, 30, 66–81.
Oxford, R. L. (1990). Language learning strategies: What every teacher should know. Boston: Heinle & Heinle.
Qi, D. S., & Lapkin, S. (2001). Exploring the role of noticing in a three-stage second language writing task. Journal of Second Language Writing, 10(4), 277–303.
Sachs, R., & Polio, C. (2007). Learners' use of two types of written feedback on an L2 writing task. Studies in Second Language Acquisition, 29, 67–100.
Sampson, A. (2012). Coded and uncoded error feedback: Effects on error frequencies in adult Colombian EFL learners' writing. System, 40, 494–504.

Storch, N., & Wigglesworth, G. (2010). Learners' processing, uptake, and retention of corrective feedback on writing: Case studies. Studies in Second Language
Acquisition, 32, 303–334.
van Beuningen, C. G., De Jong, N., & Kuiken, F. (2012). Evidence on the effectiveness of comprehensive error correction in second language writing. Language
Learning, 62, 1–41.
Wigglesworth, G. (2005). Current approaches to researching second language learner processes. Annual Review of Applied Linguistics, 25, 98–111.
Woodall, B. R. (2002). Language-switching: Using the first language while writing in a second language. Journal of Second Language Writing, 11(1), 7–28.

Louisa Buckingham was previously an assistant professor at Bilkent University. In 2016, she published Doing a Research Project in English Studies (Routledge), a
text which has a strong focus on the development of academic writing skills for international graduate-level students.

Duygu Aktug-Ekinci holds an MA from Bilkent University and is currently pursuing her PhD at Anadolu University. She has been a lecturer on the academic
English programme at Uludağ University for many years.
