
LINGUISTICS, EMPIRICAL RESEARCH,

AND EVALUATING COMPOSITION


Forrest Houlette
It seems that David Hume's mother once said "Little
Davie means well, but he's not very bright." "That's okay," said
the philosophy professor who told me the story, "because she
was wrong on both counts."1 But very often we hold Davie's
mother's sentiments toward empiricists because we feel that
they needlessly attack and destroy some of our favorite assump-
tions about writing, the universe, and everything. Michael Holz-
man's recent argument that composition research must avoid
scientism and use social scientific methodologies "in the service
of a humanistic pedagogy" is a case in point. Holzman accuses
empirical researchers, particularly those involved with sentence
combining, of pursuing mathematical models and statistical
analyses for the sake of securing a validity for their research
which it otherwise would not have had, of "wearing those gor-
geous cloaks over a poor reality." He is willing to admit that "a
social scientific methodology eventually may be valuable in liter-
acy research," but he jealously guards the role of nonempirical
study, stating that empirical, social scientific methodologies "will
not necessarily be superior to humanistic modes of research."2
The tone of the superiority of the nonempirical humanistic over
the empirical social scientific is evident, and it aligns Holzman
with little Davie's mother. The problem, though, is that the
empirical and the nonempirical do not conflict. What the
typical humanistic dismissal of scientism does not concede is
that we need the empirical point of view precisely because it
attacks our assumptions. Empiricists keep us honest in impor-
tant ways. We must therefore always keep in mind who they are
and what they can do for us.
JOURNAL OF ADVANCED COMPOSITION, Volume V (1984). Copyright 1988.
At first glance, empiricists may seem to be people who
trust only what they see, people like Bishop Berkeley, who
denied the existence of the physical world apart from the perceiving mind.3 Since Heisenberg and his uncertainty principle entered the scene, however, they do not even trust what they do see.4
They are more likely to believe in what they can measure,
with the caveat that how they measure influences the way they
understand the measurements. Empirical researchers go about
testing assumptions by predicting the measurements they ought
to get based on those assumptions and comparing those measure-
ments to the measurements they actually get. They use statistics
to tell how large a difference between what they wanted and
what they got means that the assumption is invalid.
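The procedure just described can be sketched, in contemporary terms, in a few lines of code. The predicted mean, the sample scores, and the rough cutoff below are invented for illustration; the point is only the shape of the test: predict a measurement from an assumption, take actual measurements, and let a statistic decide whether the gap is too large to blame on chance.

```python
import statistics

# An assumption predicts a mean score of 50 (hypothetical figure);
# the list holds the measurements actually obtained.
predicted_mean = 50.0
observed = [47.2, 51.1, 44.8, 49.0, 46.3, 45.9, 48.4, 47.7]

mean = statistics.mean(observed)
sem = statistics.stdev(observed) / len(observed) ** 0.5  # standard error

# How many standard errors separate prediction from observation.
t = (mean - predicted_mean) / sem

# By convention, a |t| well beyond roughly 2 (the exact critical value
# comes from the t distribution for this sample size) counts as
# evidence against the assumption.
print(f"observed mean = {mean:.2f}, t = {t:.2f}")
```

A large statistic here would lead the researcher to reject the assumption rather than the measurements.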
Empiricists are not by nature antihumanists. The empir-
ical spirit grew out of the Renaissance and Reformation along
with Humanism. The critical difference between the two was
that the Humanists favored classical texts as having the power to
explain the physical world while the empirical spirit favored
observation of nature.5
Both traditions continued in philosophy,
the Humanists most probably tracing their roots through Des-
cartes' cogito ergo sum, the empiricists through Berkeley's esse
est percipi. The conflict evident in Holzman's argument easily
reduces to the root conflict which spawned the differentiation of
empiricism from Humanism. The question is epistemological:
whether we shall go to learned texts and apply the techniques of
rationalism to deduce what is the case, or whether we shall
observe nature and induce what is the case. The answer to this
question would appear to be necessarily a choice between one
option or the other. But this conflict is really trivial. Any
empirical study must rely on the products of intuitive, intro-
spective, or rationalistic types of reasoning. Hypotheses come
from observation, critical observation, yet these observations are
not rigorously controlled. Hypotheses are guesses which reflect
all of the personal biases of the observer: biases based on intui-
tive, introspective, and rationalistic modes of reasoning. They
are the result of the observer projecting these biases on the
world. The controlled observation of the experiment is merely a
test to see whether these hunches can be considered correct. Any
humanistic study must rely to a certain extent on empirical obser-
vation of the world. Descartes, for instance, discusses several
metaphors based on his observation of the world, and these
metaphors shape and inform the premises of his deductive arguments.6
In rationalistic methods, however, there is no rigorous
control on the observation, the derived metaphor, or its inter-
pretation. The rationalistic and interpretive methodologies of
the humanities therefore complement the experimental method-
ologies of empiricism. Humanistic methodologies yield the stuff
of hypotheses; empirically proven hypotheses provide further
grounds for humanistic interpretation. The two sets of disci-
plines must rely on each other by nature.
Theoretical inquiry in the humanistic fields of composi-
tion study is a method which proceeds largely by introspection
and argument by example. What we believe about writing and
how to teach it depends on how we perceive ourselves as writers
and what we accept as examples of good writing. To teach the
composing process, we must first believe we understand how we
ourselves compose. We must also understand from observation
how we believe our process might differ from another writer's
process. We will teach only what coincides with our understand-
ing of process. To teach style, we must first understand what we
think is good style. We will rely on examples from literature for
this knowledge. To teach organization, coherence, or unity, we
must first understand these concepts. Again, we will rely on
examples to establish this knowledge.
Empiricism, when applied to composition research, must
inherently take a linguistic or behavioral approach to discourse
because of the emphasis on measurement. In order to measure
some feature of a text or its impact on a reader, we must be able
to convert aspects of the text or of the response to numbers. If
we wish to measure a text, we must count grammatical features,
coherence markers, propositions, features of context, or perhaps
features of suprasentential structures, for all of which we must
depend on linguistic descriptions of the discourse. To describe
rhetorical or cognitive features of the text, we must still count
them in terms of the linguistic features which evidence them. If
we wish to make measurable statements about the writing of the
text, we must observe, record, classify, and count the writer's
behaviors during the production of the text. If we intend to
measure a reader's response, we must depend on some sort of a
semantic differential scale. Linguistics and psychology are there-
fore the "pure" sciences which an empirical composition re-
searcher applies to texts in order to verify insights suggested by
the less empirical disciplines of rhetoric and literary study.
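The sort of counting the paragraph describes can be sketched as follows. The sample text and the features chosen (sentence count, words per sentence) are illustrative only; real studies count t-units, coherence markers, propositions, and the like from full linguistic descriptions.

```python
import re

# Reduce a text to measurable linguistic features: split it into
# sentences, then count words per sentence.
text = ("The empirical spirit favored observation. "
        "The Humanists favored classical texts. "
        "Both traditions continued in philosophy.")

sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
word_counts = [len(s.split()) for s in sentences]
words_per_sentence = sum(word_counts) / len(sentences)

print(f"{len(sentences)} sentences, "
      f"{words_per_sentence:.2f} words per sentence")
```

Once a feature is reduced to a number in this way, it can be correlated with grades, ages, ability groupings, or any other measurement.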
The role for the humanist within composition research is
to provide us with intuitive and introspective knowledge about
composing. The role for the empiricist is to teach us to test our
introspections in rigorous ways, to show us how to demonstrate
them functioning in the world of reference. Those tests tell us
where our introspections are in error for lack of measurement.
They check the fit of our assumptions to the physical world and
provide guidelines for further investigations. Empiricism is
therefore a line of inquiry complementary to the other lines in
composition research. It can provide answers that the others can-
not provide, just as they can provide answers that it cannot.
Finding examples of how the empirical interacts with the
humanistic is not difficult. Ironically, a good example comes out
of the sentence combining research Holzman discusses. We all
assume that writing tests, especially holistically graded writing
tests, actually measure writing ability and therefore possess the
property of validity. They have face validity; that is, they appear
to users and takers to measure writing skills. They also have
content validity: they are constructed in accordance with sound
theoretical descriptions of writing skills. We therefore assume
that writing tests do what they are supposed to do. But we do
not consider empirical validity, how well test scores correlate
with independent measures of writing which are also derived
from sound theoretical descriptions, and we do not for two reasons.7 First, inherent in the early research which tried to link
improvement on external criteria to improvement in grades
(primarily sentence combining research) was the a priori assump-
tion that graders made decisions about writing in response to the
same linguistic features measured by external criteria.8 We
knew, for instance, that t-unit scores grew larger both as students
got older and as students moved from groups characterized as
low in ability to groups characterized as high in ability. Why
wouldn't we assume that grades should reflect those differences?
Second, new schemes of grading were introduced, allowing us to
assign reliable grades, reliability being the ability to reproduce the
grades consistently over readministrations of the same or paral-
lel forms of the test. The discussion of the new reliability quite
simply outshined questions of validity, especially when two of
the three types of validity had been achieved.9
Lester Faigley was the first to question this assumption of
empirical validity, but he questioned primarily the ability of
Hunt's clause-to-sentence factors to explain the inherent and self-evident validity of grades.10 Faigley assumed that reliably graded
writing assignments were inherently more valid measures of
writing skill than any external criterion known. When he
noticed that Hunt's factors and a few of his own devising did not
correlate well with such grades, his reaction was to reject the
external criteria as valid measures of writing skill and to ques-
tion the validity of the theory which produced them. Faigley's
reaction, however, was a function of his faith in the assumption
that reliably assigned holistic scores were indeed valid measures
of writing skill, a humanistic assumption since it relies on an
epistemology based on the reading of texts and the judgments of
informed readers. There is a counter argument, though, another
way of posing the problem, which Faigley did not consider, prob-
ably because of his faith in his assumption. Instead of question-
ing the ability of the variables to correlate with grades, he could
have questioned the ability of grades to correlate with variables.
What if the external criteria were indeed the more valid mea-
sures? This is an empirical question since it relies on an episte-
mology which values external verification.
Recently I undertook to investigate this alternative way of
posing the question. What I found was not terribly surprising.
First, almost no one had considered the question. Of the few
studies available, only one found a strong correlation between
grades and independent measures.11 The outlook for success
with studies of the empirical validity of holistic scores was there-
fore bleak. Second, none of the research was overly concerned
with the statistical significance of the correlations, whether they
might have been produced by a chance interaction of actually
unrelated factors. Procedures were just lax enough to inflate
error levels unacceptably. Third, a growing body of psychological
literature indicated that human beings do not necessarily make
valid decisions, a fact which calls into question Faigley's assump-
tion of the validity of holistic scores.12 Grading is a process of
making decisions about which of several categories to consign a
series of writing samples to. If graders are not necessarily good
judges, their grades do not possess the inherent validity Faigley
believed them to have. Because of these three factors, I decided
to design an exploratory study to see whether independent
criteria might be found which would correlate well with grades.
The study employed three sets of reliably graded papers
and seven variables, Hunt's clause-to-sentence factors and two
variables derived from the theory of given and new informa-
tion, content words per t-unit and the percentage of content
words marked as given information.13 What it revealed is that
under some circumstances empirical validity is possible. For
two sets of papers, the correlations could not be distinguished
from zero. For the other set, however, a combination of two
variables, percentage of content words given and words per sen-
tence, explained twenty-three percent of the variance in the
grades.14 As studies like this one go, that is a fairly high percen-
tage. The three sets of papers were collected under a mixed bag
of conditions involving both basic and intermediate writers,
primary trait and general impression scoring, and contextual and
noncontextual assignments. The intent was to try to catch a
group of conditions that could produce validity. In this sense,
the study was successful. The results favored intermediate over
basic writers, primary trait scoring over general impression scor-
ing, and contextual assignments over noncontextual assign-
ments.
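The arithmetic behind "percent of variance explained" can be sketched in a few lines. The grades and predictor values below are invented; the point is only that squaring a correlation coefficient yields the proportion of variance accounted for, which is how an R on the order of the study's .492 converts to a figure near a quarter of the variance.

```python
# Hypothetical grades and a hypothetical predictor variable,
# invented for illustration.
grades    = [2.0, 3.0, 3.5, 2.5, 4.0, 3.0]
predictor = [8.0, 11.0, 12.5, 10.0, 14.0, 10.5]

def pearson_r(xs, ys):
    """Pearson product-moment correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(grades, predictor)
variance_explained = r ** 2  # proportion of variance accounted for

print(f"r = {r:.3f}, variance explained = {variance_explained:.1%}")
```

Statistical significance is a separate question from the size of r: with few papers, even a sizable correlation can arise from a chance interaction of unrelated factors.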
What this study tells researchers in composition who ad-
here to humanistic methodologies is that the assumption of the
validity of reliably assigned holistic scores is a dangerous assump-
tion. It would seem that such scores can at some times be more
valid than at other times. What this study tells researchers who
follow the empirical tradition is that they need more fully to
explore the relationships between holistic scores and external
criteria. It may be that such research could verify a set of condi-
tions under which the most valid grading could be done or that
such research could identify the theories most useful in con-
structing external measures of writing ability. (The two possibili-
ties coincide with the two ways of viewing the problem men-
tioned above.) The study implies research programs for research-
ers of both persuasions. The questions nonempirical theorists
must explore are when and how the validity discovered might
be produced or improved upon. The question that empirical
theorists must address is whether the nonempirical answers
work out well in practice.
If we are going to find the answers to such questions, we
need to rely on both types of researchers and we need to be cross-
training researchers in both lines of inquiry. We need to be able
to communicate with one another, and interdisciplinary train-
ing, since it promotes understanding, is the most effective
means of facilitating communication. These are the reasons
why the linguistic, the psychological, and the empirical have
been adopted by researchers in composition and teachers of composition theory. What empiricists do enables work by their nonempirical colleagues which in turn enables their own work. The
relationship is not the antithetical one Holzman implies.
Rather, it is very like mine with my checkbook. I had gone
along several months just believing that I had not made a mis-
take, a nonempirical stance in the extreme. But I got out my
very empirical calculator recently. I found two hundred dollars.
Ball State University
Muncie, Indiana
Notes
1 G. Stanley Kane told me the story while I was at Miami University. I
have not been able to track down its authenticity.
2 Michael Holzman, "Scientism and Sentence Combining," College Composition and Communication, vol. 34, no. 1 (Feb. 1983), pp. 79, 74, and 78, respectively.
3 For an introduction to Berkeley's ideas, see W. T. Jones, A History of Western Philosophy, 2nd ed., vol. III: Hobbes to Hume (New York: Harcourt, Brace, and World, 1969), pp. 280-95.
4 Werner Heisenberg provides a relatively understandable explanation
of his uncertainty principle and related issues in Physics and Beyond (New
York: Harper and Row, 1972), pp. 58-81.
5 See Jones, Hobbes to Hume, p. 68.
6 For an introduction to Descartes' method, see Jones, Hobbes to Hume, pp. 154-91.
7 For an excellent discussion of the concept of validity, see David P. Harris, Testing English as a Second Language (New York: McGraw-Hill, 1969), pp. 18-21.
8 See Frank O'Hare, Sentence Combining: Improving Student Writing Without Formal Grammar Instruction (Urbana, Ill.: NCTE, 1973), p. 67; Kellogg W. Hunt, Grammatical Structures Written at Three Grade Levels (Champaign, Ill.: NCTE, 1965); and Walter D. Loban, Language Development: Kindergarten Through Twelfth Grade (Urbana, Ill.: NCTE, 1976).
9 See Paul B. Diederich, Measuring Growth in English (Urbana, Ill.: NCTE, 1974).
10 Lester Faigley, "Names in Search of a Concept: Maturity, Fluency, Complexity, and Growth in Written Syntax," College Composition and Communication, 31, no. 3 (Oct. 1980), 291-300.
11 Mary Lou Howerton, The Relationship between Quantitative and Qualitative Measures of Writing Skills, ED 137 416. See also Linda Brodkey and Rodney W. Young, A Sensible, Interesting, Organized, Rhetorical Procedure for the Grading of Student Essays, ED 173 005, and Sean A. Walmsley and Peter Mosenthal, Psycholinguistic Bases for Holistic Judgments of Children's Written Discourse, ED 177.
12 See Leon Rappoport and David A. Summers, eds., Human Judgment and Social Interaction (New York: Holt, Rinehart, and Winston, 1973) for several review articles and bibliographies.
13 Content words per t-unit is a general measure of the number of proposi-
tions present in discourse. Percentage of content words marked as given informa-
tion is a measure of the information shared by writer and reader. Given infor-
mation is defined within the theory as information shared by writer and
reader in consciousness. New information is information not so shared, that is,
information which needs to be communicated from writer to reader in order to be
shared. For further explanation of the variables, see Forrest Houlette, A
Regression Model of the Grading Process Employing Variables Drawn from the Theory of Given and New Information, Doctoral Dissertation, University of Louisville. For further explanation of the theory, see Herbert H. Clark and Susan E. Haviland, "Comprehension and the Given-New Contract," in Discourse Processes: Advances in Research and Theory, ed. Roy O. Freedle, vol. I:
Discourse Production and Comprehension (Norwood, N. J.: Ablex Publishing,
1977), pp. 1-40.
14 R = .492.
