
Applied Linguistics 2013: 34/1: 99–105 © Oxford University Press 2012
doi:10.1093/applin/ams073 Advance Access published on 18 December 2012

FORUM

What Do We Mean by Writing Fluency and How Can It Be Validly Measured?

MUHAMMAD M. MAHMOUD ABDEL LATIF
College of Languages & Translation, Al Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 5701, Saudi Arabia
E-mail: muhablatif@gmail.com

Fluency is an essential component of writing ability and development. Writing fluency research matters to researchers and teachers interested in facilitating students' written text production and in assessing writing. This calls for a better understanding of writing fluency and how it should be measured. Although fluency is the construct with the most varied definitions and measures in writing research, this large variance in conceptualizing the construct is rarely discussed. In an attempt to demystify the what of this construct, the present article reviews its definitions, shows how its measurement has been influenced by oral production research, and discusses some issues related to the validity of the varied measures used for assessing it.

DEFINITIONAL CONFUSION OF WRITING FLUENCY


‘Fluency’ is a term over which there has been much debate in applied linguistics research. Compared with both reading and speaking fluency, writing fluency has been defined in far more varied ways. The fact that many studies (e.g. Sasaki 2000; Storch 2009) used two or more measures to assess writing fluency clearly indicates the definitional confusion surrounding it. This confusion can also be noted in how writing fluency is conceptualized in the literature. Many of the definitions given to writing fluency are of a qualitative nature, including producing written language rapidly, appropriately, creatively, and coherently (Wolfe-Quintero et al. 1998) and using linguistic structures to achieve rhetorical and social purposes (Reynolds 2005). On the other hand, some researchers adopting process-based definitions of writing fluency view it as the richness of writers' processes and their ability to organize composing strategies (Bruton and Kirby 1987), or as the speed of lexical retrieval while writing (Snellings et al. 2004). This variance clearly shows that there is no agreed-upon definition of writing fluency. Historically, writing fluency research dates back to 1946, when van Bruggen reported his study on the regularity of the flow of written words. Fluency resurfaced in the late 1970s in composing research, which measured it via composing rate and/or text quantity. It can be argued that the assessment of writing fluency has been greatly influenced by speaking fluency measurement since that time. The next section clarifies this point.

SPEAKING FLUENCY VERSUS WRITING FLUENCY


Given that both speaking and writing are productive skills, it is necessary to highlight speaking fluency measures in order to understand why some identical or similar measures have been used to assess writing fluency. Skehan (2003) identifies four measures of speaking fluency: (i) breakdown fluency, or pausing; (ii) repair fluency: reformulations, replacements, false starts, and repetition; (iii) speech rate: the number of words per minute or syllables per second; and (iv) length of bursts occurring between pauses. In addition to these measures, a few studies assessed speaking fluency non-quantifiably by depending on listeners' perceptions. Researchers adapted these five oral production fluency measures to assess writing fluency, as the sketch below illustrates for the quantifiable ones. Table 1 shows the similarity between speaking fluency measures and writing fluency measures and provides examples of the studies using fluent written production measures.
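To make the quantifiable categories concrete, here is a minimal sketch assuming a timed transcript represented as timestamped words; both the data format and the 2-second pause threshold are assumptions of this illustration, not values prescribed by Skehan (2003):

```python
from typing import List, Tuple


def fluency_profile(units: List[Tuple[str, float]],
                    pause_threshold: float = 2.0) -> dict:
    """Compute Skehan-style rate, pause, and burst measures from
    timestamped production units.

    `units` is a list of (token, onset_seconds) pairs -- words in a timed
    speech transcript here, though the same gap-based logic applies to
    words completed in a keystroke log.
    """
    if len(units) < 2:
        return {'rate_per_min': 0.0, 'n_pauses': 0,
                'mean_burst_len': float(len(units))}

    total_time = units[-1][1] - units[0][1]
    gaps = [nxt[1] - cur[1] for cur, nxt in zip(units, units[1:])]

    # (i) Breakdown fluency: inter-unit gaps at or above the threshold.
    n_pauses = sum(gap >= pause_threshold for gap in gaps)

    # (iv) Burst length: units produced between consecutive pauses.
    bursts, current = [], 1
    for gap in gaps:
        if gap >= pause_threshold:
            bursts.append(current)
            current = 1
        else:
            current += 1
    bursts.append(current)

    return {
        # (iii) Production rate in units (words) per minute.
        'rate_per_min': 60.0 * len(units) / total_time if total_time else 0.0,
        'n_pauses': n_pauses,
        'mean_burst_len': sum(bursts) / len(bursts),
    }
```

Repair fluency (category ii) is not captured by timing alone, since reformulations require comparing versions of the content; the perception-based fifth category is likewise non-computational.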
As Table 1 shows, written fluency measures are clearly derived from oral fluency ones. While the writing fluency measures in the first four categories are identical to the speaking measures listed by Skehan (2003), those in the last category parallel speaking fluency measurement in focusing on readers' ratings of written text fluency as opposed to listeners' ratings of oral output fluency.
Writing fluency measures are of two types: product-based measures depending on written texts regardless of how they were produced, and process-based measures drawing upon the online observation of writers' composing processes. All the measures given in the table are product-based indicators of writing fluency with the exception of three (pausing, length of rehearsed text, and length of translating episodes), which are process-based indicators. Having clarified how writing fluency measures mirror speaking fluency measures, the following question remains unanswered: can we assess writing fluency validly by adapting speaking fluency measures? The next section attempts to answer this question.

ISSUES RELATED TO THE VALIDITY OF THE WRITING FLUENCY MEASURES
To identify whether the above-mentioned adapted measures assess writing fluency validly or not, we need to take into account the characteristics of real-time language processing in both speaking and writing, writing task performance variables, and the empirical evidence for the in/validity of the writing fluency measures used. In what follows, these issues are discussed.

Table 1: A comparison between speaking fluency measures and writing fluency measures

- Breakdown fluency → Writers' pausing (Spelman Miller 2000)
- Repair fluency → Changes made to the text (Knoch 2007)
- Speech rate → Composing rate (Sasaki 2000); text quantity (Baba 2009)
- Length of bursts occurring between pauses → Length of translating episodes written between pauses (Abdel Latif 2009); length of rehearsed text between pauses (Chenoweth and Hayes 2001)
- Listeners' perceptions of speakers' fluency → Linguistic features characterizing rhetorical functions (Reynolds 2005); number and length of t-units (Storch 2009); sentence length (Johnson et al. 2012); text structure, coherence, and cohesion (Storch 2009)

Real-time language processing differences in oral and written production
Speaking tasks require far less time than writing tasks. Since learners spend roughly equal amounts of time performing speaking tasks, in which they produce a single oral product version, pausing and speech rate are essential temporal elements in oral fluency. The situation differs in writing because writers usually spend much more varied amounts of time performing the same task and have much more varied amounts of pausing time. Besides, some writers may produce one draft, while others may write two or more drafts for the same task. When writers produce multiple drafts, their final draft, given that it is most likely to be a copied one, is expected to be written in less time than the earlier draft(s).

The composing rate, text quantity, and real-time measurement of writing fluency
Writing is the most cognitively demanding of all language skills. While composing texts, writers use many processes, including planning, monitoring, reviewing, retrieving, and transcribing. Normally, writers allocate much more time to planning, monitoring, reviewing, and retrieving than to transcribing texts. For example, Flower and Hayes (1981) cite research indicating that 'in some cases 70% of composing time is actually pause time' (p. 229). Previous studies did not use the composing rate or text quantity as real-time measures of writing fluency. Rather, they applied them to the final text produced without taking the real-time dimension into account. That is why the way writing fluency has been assessed using these two measures is problematic, as the sketch below illustrates.
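A minimal sketch of the two computations makes the problem visible. The function names and session figures are hypothetical; the point is that the conventional product-based rate buries pause time in its denominator, while excluding pause time requires online data that product-based studies do not have:

```python
def product_based_composing_rate(final_text: str, task_minutes: float) -> float:
    """Conventional product-based measure: words in the final text
    divided by total task time, pause time included."""
    return len(final_text.split()) / task_minutes


def transcription_time_rate(final_text: str, task_minutes: float,
                            pause_minutes: float) -> float:
    """Illustrative real-time alternative: exclude pause time from the
    denominator. `pause_minutes` can only come from online observation
    (e.g. keystroke logging), which product-based studies lack."""
    return len(final_text.split()) / (task_minutes - pause_minutes)


# With 70% of a 30-minute session spent pausing (cf. Flower and Hayes 1981),
# a 300-word text yields 10 words/min overall but ~33 words/min of actual
# transcription -- the product-based rate conflates the two.
```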

Writing task performance variables


Some task performance variables influence the quantity of text writers produce and their composing rates. First, producing longer or shorter texts may depend on factors such as writers' familiarity with the topic and/or their pre-task decisions to include a specific number of words or lines in the text. Moreover, judging writers' fluency by dividing the amount of text they produce by the time spent performing the task may be undermined by the possibility that some writers do not spend much time on a given task due to negative affect. These two assumptions concur with Wolfe-Quintero et al.'s (1998) view that product-based measures of writing fluency are problematic due to the nature of the task.

Writers’ pausing versus speakers’ pausing


The above discussion of language processing time differences in speaking and writing implies that pausing does not seem to be a valid indicator of writing fluency either. The findings of previous studies do not provide conclusive evidence for a positive or negative relationship between pausing and writing ability. In light of Bereiter and Scardamalia's (1987) writing models, it may be argued that expert writers with a reflective composing style are likely to have longer pauses than novice writers with an impulsive composing style. When writers pause, they use some composing process other than transcribing (i.e. planning, monitoring, or retrieving); some of these processes may facilitate written language production while others may hinder it. Contrary to speaking fluency research findings, Matsuhashi's (1981) study implies that 'when writing moves fluently ahead most decisions are made at the sentence boundary before the writer begins to write' (p. 130). Accordingly, writers' pausing may enhance or hinder their fluency depending on its location and the composing processes used during pauses, whereas speakers' pausing is consistently viewed as an indicator of dysfluency.
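Matsuhashi's point suggests that pause location, not just pause amount, would have to enter any pause-based measure of writing fluency. Here is a minimal sketch under that assumption; the keystroke-log format is hypothetical (real logging tools differ), and the 2-second threshold is an illustrative choice, as logging studies vary in the cut-off they adopt:

```python
import re
from typing import List, Tuple


def classify_pauses(keystrokes: List[Tuple[str, float]],
                    threshold: float = 2.0) -> List[Tuple[float, str]]:
    """Locate pauses in a keystroke log and tag each with its location.

    `keystrokes` is a list of (character, onset_seconds) pairs.
    Returns (pause_duration, location) pairs.
    """
    pauses = []
    text_so_far = ''
    for (char, onset), (_, next_onset) in zip(keystrokes, keystrokes[1:]):
        text_so_far += char
        gap = next_onset - onset
        if gap >= threshold:
            # Following Matsuhashi (1981): a pause at a sentence boundary
            # may reflect fluent forward planning, whereas a mid-sentence
            # pause is more likely to interrupt translation.
            if re.search(r'[.!?]\s*$', text_so_far):
                location = 'sentence boundary'
            else:
                location = 'within sentence'
            pauses.append((gap, location))
    return pauses
```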

Text-changing operations and linguistic features as measures of writing fluency
Repair aspects do not seem to be markers of speaking fluency, nor do the changes made to written text appear to indicate writing fluency. These are generally regarded as communication or composing strategies in speaking and writing research, respectively. As for the other product-based measures of writing fluency (e.g. linguistic features characterizing rhetorical functions, number and length of t-units), these mainly assess accuracy and/or complexity aspects of the text rather than the flow of its production. According to Hamp-Lyons (1991), there may be some writers who are fluent but inaccurate, and some who are accurate but non-fluent.

Empirical evidence for the in/validity of the writing fluency measures used
It is noteworthy that researchers have used product-based measures to assess writing fluency without questioning their validity or giving a convincing rationale for using them. For example, while Baba (2009) assessed her participants' fluency using text quantity, her statement that it 'is often used as an index of writing fluency' (p. 195) suggests some degree of uncertainty about its validity. The empirical evidence, assumptions, and conclusions given above all indicate that we cannot depend on product-based measures or pausing in assessing writing fluency. The fact that composing is far more cognitively demanding than speaking, together with the assumptions about task performance variables, supports the hypothesis that the composing rate and text quantity are not valid measures of writing fluency. Abdel Latif's (2009) study provides empirical evidence supporting this hypothesis. Accordingly, the studies using the composing rate and/or text quantity to assess writers' fluency seem to have depended on faulty assumptions.

Measuring fluency using the length of writers' rehearsed text versus their translating episodes
Given that the above discussion implies that the product-based measures and pausing may not assess writing fluency validly, what remains to be discussed is the validity of the length of writers' rehearsed text and of the translating episodes or production units occurring between pauses as measures of fluency. Chenoweth and Hayes (2001) used the length of writers' proposed text (i.e. rehearsed or written for the first time) occurring between pauses as a determinant of their fluency, though they measured it via composing rates. A main problem with Chenoweth and Hayes's conceptualization is that they did not differentiate between the contributions of newly rehearsed text and written text to fluency. Using writers' rehearsed text as a measure of fluency is also problematic because it mainly depends on their verbal ability; in some cases, L2 writers do not rehearse much of their transcribed text, or prefer to plan it in the L1, while in other cases they do not transcribe much of their planned text. Given this, the question raised is: can we regard the length of the newly written text as a valid measure of fluency?
The idea of using the length of writers' newly written text or translating episodes was first introduced by van Bruggen (1946), who examined the rate and regularity of the flow of written words using a time-recording kymograph that registered the movements of writers' pencils. His study indicates that fluent writers plan their compositions in word groups rather than a single word at a time. This fluency measure was also used in more recent studies (Spelman Miller 2000; Abdel Latif 2009) employing keystroke logging and think-aloud protocols, respectively.
Other studies also signal the possibility of measuring writers' fluency using the length of the sentence parts they produce, though some of these studies (e.g. Perl 1979; Chenoweth and Hayes 2001) used the composing rate in assessing it. Perl (1979) referred to her participants' fluency by contrasting fluent writing, observed when 'sentences are written in groups or "chunks"', with non-fluent writing occurring 'when each sentence is produced in isolation' (p. 322). The observations reported by Perl (1979) and Chenoweth and Hayes (2001), as well as the empirical evidence given by the studies of van Bruggen (1946), Spelman Miller (2000), and Abdel Latif (2009), all indicate that the length of writers' translating episodes may assess their fluency more validly. Adopting the mean length of translating episodes as a measure of writing fluency is congruent with viewing fluency as an observable characteristic of real-time behaviour (Segalowitz 2010).
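Computationally, the measure is straightforward once the episodes have been segmented at pauses (from a keystroke log or think-aloud protocol). A minimal sketch, with word-based length as an assumption of this illustration rather than a unit fixed by the studies above:

```python
from typing import List


def mean_translating_episode_length(episodes: List[str]) -> float:
    """Mean length, in words, of the stretches of newly written text
    produced between successive pauses. Segmentation into episodes is
    assumed to have been done upstream (e.g. with a pause-threshold
    routine like the one sketched earlier)."""
    if not episodes:
        return 0.0
    return sum(len(episode.split()) for episode in episodes) / len(episodes)


# A writer producing text in larger chunks scores higher:
# mean_translating_episode_length(
#     ["The results suggest that", "writing fluency", "should be measured online"]
# )  # -> 10 words / 3 episodes = ~3.3
```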

CONCLUSION
This article has shown the definitional confusion surrounding writing fluency as well as the varied range of measures used to assess it. After showing how the multiple measures of writing fluency were adapted from or influenced by measures of speaking fluency, and after highlighting the characteristics of real-time language processing in both speaking and writing and the relevant writing task performance variables, the article concludes that the product-based measures, along with pausing and length of rehearsed text, do not seem to be valid measures of writers' fluency. Accordingly, writing fluency can be operationally defined as writers' ability to produce texts in large chunks or spans, and it is optimally measured through the length of writers' translating episodes or production units. This process-based measure assesses real-time fluent written production and is compatible with the cognitive characteristics of writing performance.
A final point relates to the argument that it is good practice in research to use multiple measures for assessing the target construct. This is true, but only when those measures assess what they claim to assess; if they do not assess writing fluency validly, there is no point in using them for that purpose. Future discussion and further empirical studies on the validity issues highlighted in this article can lead to some consensus on how writing fluency should be defined and assessed.

ACKNOWLEDGEMENT
The ideas covered in the article are derived from my doctoral research, which was supported by the Sheikh Nahayan Doctoral Dissertation Fellowship granted by TIRF.

REFERENCES

Abdel Latif, M. M. 2009. 'Towards a new process-based indicator for measuring writing fluency: Evidence from L2 writers' think-aloud protocols,' Canadian Modern Language Review 65/4: 531–58.
Baba, K. 2009. 'Aspects of lexical proficiency in writing summaries in a foreign language,' Journal of Second Language Writing 18/3: 191–208.
Bereiter, C. and M. Scardamalia. 1987. The Psychology of Written Composition. Lawrence Erlbaum.
Bruton, D. L. and D. R. Kirby. 1987. 'Research in the classroom: written fluency: didn't we do that last year?,' The English Journal 76/7: 89–92.
Chenoweth, N. A. and J. R. Hayes. 2001. 'Fluency in writing: generating texts in L1 and L2,' Written Communication 18/1: 80–98.
Flower, L. S. and J. R. Hayes. 1981. 'The pregnant pause: an inquiry into the nature of planning,' Research in the Teaching of English 15/3: 229–43.
Hamp-Lyons, L. 1991. Assessing Second Language Writing in Academic Contexts. Ablex.
Johnson, M. D., L. Mercado, and A. Acevedo. 2012. 'The effect of planning sub-processes on L2 writing fluency, grammatical complexity, and lexical complexity,' Journal of Second Language Writing 21/3: 264–82.
Knoch, U. 2007. 'Diagnostic writing assessment: the development and validation of a rating scale,' Ph.D. thesis, The University of Auckland, New Zealand.
Matsuhashi, A. 1981. 'Pausing and planning: the tempo of written discourse production,' Research in the Teaching of English 15/2: 113–34.
Perl, S. 1979. 'The composing processes of unskilled college writers,' Research in the Teaching of English 13/4: 317–36.
Reynolds, D. W. 2005. 'Linguistic correlates of second language literacy development: evidence from middle-grade learner essays,' Journal of Second Language Writing 14/1: 19–45.
Sasaki, M. 2000. 'Toward an empirical model of EFL writing processes: an exploratory study,' Journal of Second Language Writing 9/3: 259–91.
Segalowitz, N. 2010. The Cognitive Bases of Second Language Fluency. Routledge.
Skehan, P. 2003. 'Task-based instruction,' Language Teaching 36/1: 1–14.
Snellings, P., A. van Gelderen, and A. de Glopper. 2004. 'Validating a test of second language written lexical retrieval: a new measure of fluency in written language production,' Language Testing 21/2: 174–201.
Spelman Miller, K. 2000. 'Academic writers on-line: investigating pausing in the production of text,' Language Teaching Research 4/2: 123–48.
Storch, N. 2009. 'The impact of studying in a second language (L2) medium university on the development of L2 writing,' Journal of Second Language Writing 18/2: 103–18.
Van Bruggen, J. A. 1946. 'Factors affecting regularity of the flow of words during written composition,' Journal of Experimental Education 15/2: 133–55.
Wolfe-Quintero, K., S. Inagaki, and H. Y. Kim. 1998. Second Language Development in Writing: Measures of Fluency, Accuracy and Complexity. University of Hawaii Press.
