
English for Specific Purposes, Vol. 15, No. 3, pp. 233-241, 1996
Copyright © 1996 The American University
Printed in Great Britain. All rights reserved
0889-4906/96 $15.00 + 0.00
S0889-4906(96)00007-5

Research and Discussion Note

Process Assessment in ESP: Input, Throughput and Output
Julio César Giménez

Abstract: Assessment is an area of EFL/ESL in which a great deal of research
remains to be done. Much the same holds true for most English for Specific
Purposes (ESP) courses, in which students study English to meet academic,
professional or occupational needs. In most ESP programs, students' performance
has been evaluated following the well-established practice of testing only
students' end-products, ignoring any assessment of the learning process. This
practice, however, has some drawbacks, the most serious being that
end-product evaluation comes too late in the learning process for formative
feedback to take place. Process assessment, by contrast, is an alternative
procedure that evaluates not only students' "products" but also how
they have come to acquire the proficiency needed to produce them, giving ESP
instructors as well as students the opportunity to improve outcomes while there
is still time to do so. This paper discusses the importance of implementing
process assessment in ESP courses and the impact it may have on them,
illustrating the results that can be achieved through examples drawn from a
local program at the tertiary level.

Introduction
Students' performance in ESP courses has been assessed as the end-product
of a process rather than as the reflection of an on-going process which, if
carefully controlled, would assure a more reliable final product. This well-estab-
lished practice of evaluating students' final production has been transferred
from the field of ESL/EFL to the field of ESP without any adaptation. The
reason for this is simply that assessment has long been a neglected
area in ESP (Alderson 1988b). However, as a way of evaluating ESP students'
performance, this practice is quite limited in scope because it is applied at the
end of the process, when it is usually too late to improve the final product
or to eradicate deeply-rooted erroneous linguistic habits.
Process assessment, on the other hand, offers an alternative form of evaluating
students' performance that can be successfully implemented in ESP courses.
This alternative assessment procedure evaluates students' performance on a
continuous basis, allows performance evaluation at any stage in the learning
process, and lets students work on their areas of need before it is too late.

Address correspondence to: Julio César Giménez, 9 de Julio 635-3 "A", (5000) Córdoba, Argentina.


Why would ESP courses benefit from process assessment? The answer lies
in the very essence of ESP. First, ESP courses are usually of short duration
(often around six months), seriously limiting the time available for remedial
work. This time constraint seems even greater when we consider that, unlike
General English students, ESP students usually work within a rigid timetable.
Secondly, ESP students are instrumentally motivated and they "are likely to be
more goal oriented and to expect success" (Ellis & Johnson 1994: p. 11). Thus,
the perception of what the course should prepare them to do is different from
that of students in General English courses. Finally, ESP students, especially
those who need English at work, are more likely to put into practice what they
have learnt as soon as they leave their classrooms, reinforcing language errors
through frequent practice.

Some Theoretical Considerations


The following theoretical considerations are the result of reflecting upon the
outcomes of a project that was set up at Instituto de Estudios Superiores (henceforth
IES) in Córdoba, Argentina, to improve the assessment procedures in the
ESP courses run at this institution. Students at IES take ESP courses as a require-
ment of the different departments: Marketing, Management and Public Relations.
This project was based on the premise that assessing students' performance
in an ESP course should consist of a thorough analysis of the input, a contin-
uous assessment of the throughput variables and, finally, an evaluation of the
output. The term input, which should not be confused with Krashen & Terrell's
"linguistic input", refers to the students' and the institution's efforts and
resources: efforts and resources ESP teachers count on before starting the
course. Throughput variables involve the internal state and behavior of both
the students and the institution; output refers to the students' mid-term and
final production or outcomes (see Fig. 1).
Input assessment is, in general terms, the analysis of all the input variables
ESP instructors should consider at the beginning of a course. These variables
are of two kinds: individual variables and organizational variables. Individual
variables include students' attitudes, aptitude, experience, needs, purposes,
skills, etc. Organizational variables comprise the institution's structure, equipment,
budget, and the like. As can be seen, input assessment is a more
precise term than needs analysis in the sense that needs analysis, as it is
generally understood, describes the "true" needs of students only (Hutchinson
& Waters 1989), whereas input assessment involves the analysis and evaluation
of a wider range of variables.
Assessment of the individual and organizational variables results in a thorough
knowledge of the human and material resources ESP instructors will be working
with. In this way, instructors know what the students and the institution expect
from them and what they can expect from the institution and the students.

[Figure 1. Process assessment in ESP: individual and organizational input variables feed the throughput stage (motivation, perceptions, climate, cooperation), whose product is evaluated as output; a feedback process links the output evaluation back to the input variables.]

Throughput assessment is more difficult than input assessment because of
the nature of the variables involved. Not only are these variables less tangible
but they are also unstable. Throughput variables include the motivation and
perceptions of the students and the climate and cooperation of the institution.
The importance of throughput assessment lies in the fact that it is a window
to the production process, enabling the ESP instructor to make adjustments to
the methodology and the materials so as to make them match the results of the
input variables assessment. The focus on throughput assessment is allied with
the growing awareness that process and product variables are essential compo-
nents in the impact of any course (Prodromou 1995; Spada 1994).
Output evaluation consists in analyzing the end product of instruction on a
continuous basis and making changes in the process for the future if necessary.
There seems to be a close connection between output evaluation and input
assessment as the output cannot be changed if the input remains the same.
This output-input connection is established through the feedback process, by
means of which any improvement found to be needed in the output is made by
seeking to alter the relevant variables in the input. Feedback is now understood
to be an important component in any teaching and assessing situation (Ellis &
Johnson 1994; Goldstein & Liu 1994; Harris & McCann 1994; Rea-Dickins &
Germaine 1993). An ongoing assessment of the input and throughput variables
should help foster high output quality, as shown in Fig. 1.
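To make these relationships concrete, the sketch below models the cycle in Python. It is a minimal illustration only: the class names, fields and the scoring threshold are my own assumptions, not anything specified in the article or used at IES.

```python
# A minimal sketch (my assumption, not the article's notation) of the
# input-throughput-output model with its feedback process.
from dataclasses import dataclass, field

@dataclass
class InputVariables:
    individual: dict = field(default_factory=dict)      # attitudes, aptitude, needs, skills
    organizational: dict = field(default_factory=dict)  # structure, equipment, budget

@dataclass
class ThroughputVariables:
    individual: dict = field(default_factory=dict)      # motivation, perceptions
    organizational: dict = field(default_factory=dict)  # climate, cooperation

@dataclass
class OutputRecord:
    scores: list = field(default_factory=list)          # mid-term and final outcomes

def feedback(output: OutputRecord, inputs: InputVariables) -> InputVariables:
    """Improvements needed in the output are sought by altering the
    relevant input variables (the threshold here is illustrative only)."""
    if output.scores and min(output.scores) < 60:
        inputs.individual["needs_remedial_work"] = True
    return inputs
```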

Putting Theory into Practice


1. Collecting Data
The first task we, ESP instructors at IES, carried out was to collect infor-
mation on the variables for the input and throughput assessments.
We first identified input individual variables so as to be able to make
decisions concerning teaching methodology, materials, activities and the like.

The following is a list of procedures we used to gather the information we
needed:
a. Interviews
b. Aptitude and attitude tests
c. Questionnaires
d. Placement tests
e. Diagnostic tests
Next, input organizational variables were taken into account. We found out
what budget the institution had assigned to the ESP courses, what equipment
was available (VCRs, cassette players, etc.) and even what physical space
(classrooms, offices, etc.) we would be allowed to use for our
classes and activities. All of these are decisive factors in the decision-making
process.
Gathering input organizational variables makes ESP instructors more aware
of the resources and constraints that make up the dynamics of their work place
and that will directly contribute to an effective organization of the course and
its activities (Aydellot 1995). However, instructors should not feel discouraged
by their findings; on the contrary, "Constraints do not just prohibit or control;
they direct how resources can be used" (van Lier 1994: p. 9).
Throughput individual and organizational variables require constant monitor-
ing. This on-going process provides ESP instructors with the possibility of making
changes before the output is evaluated, as variables in the context change.
There are a number of instruments to collect throughput individual variables,
of which the following proved most useful to us:
a. Motivation graphs
b. Observations
c. Conferences
d. Students' comments
e. Students' diaries
f. Achievement tests
g. Progress charts
Nunan (1989) includes a detailed description of many techniques and instru-
ments used to collect in-classroom and out-of-classroom information. We
collected organizational variables of both kinds by informally interviewing
the institution's principal and the personnel in charge of allocating rooms
and equipment, arranging timetables, and so on.
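As a rough illustration of how one of these instruments, the motivation graph, might be kept as a running record, consider the following sketch. The 1-5 rating scale, the function names and the threshold are invented for the example; the article does not prescribe any particular format.

```python
# Hedged sketch of a motivation graph as a running record: students rate
# their motivation after each class, and downward trends are flagged early.
from collections import defaultdict
from statistics import mean

motivation_log = defaultdict(list)  # student -> self-ratings (1-5), one per class

def record(student: str, rating: int) -> None:
    """Log one post-class motivation rating."""
    motivation_log[student].append(rating)

def flag_declining(window: int = 3, threshold: float = 2.5) -> list:
    """Students whose recent average rating has fallen below the threshold,
    so adjustments can be made before the output is evaluated."""
    return [s for s, ratings in motivation_log.items()
            if len(ratings) >= window and mean(ratings[-window:]) < threshold]

record("student_a", 4); record("student_a", 2)
record("student_a", 2); record("student_a", 2)
print(flag_declining())  # ['student_a']
```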

2. Assessing Data
The next step was to assess all the data (input and throughput individual
and organizational variables) we had collected. This step is usually considered
the most troublesome (Berwick 1989; Nunan 1989; Seedhouse 1995).
An analysis of the information gathered through the instruments to collect
input individual variables revealed:

a. How well prepared, linguistically speaking, the students were
b. How they saw language learning, and especially how they viewed learning
in English
c. How useful they thought English was, and specifically in what circumstances
they would be using English
d. How well prepared they felt they were to start working
e. What purpose(s) they had for learning English
The results of this analysis were of great value because they enabled us to
decide on the teaching methodology, materials, and so on. By the same token,
we then had all these data available for reference when adjustments were
needed during the assessment process.
If, for example, an activity had not been motivating, it may have been
because there was little or no correlation between the activity or its materials
and the results of the input variables assessment, or because the activity was
beyond or below the group's linguistic competence. Reviewing the data on the input
variables enabled us to understand the output results.
Assessment of the throughput individual variables is an ongoing process that
reveals how the learning process is developing. Through the measuring instruments
(motivation charts, observations, students' comments, etc.), we were able
to reconsider the decisions made after the assessment of the input individual
variables. This ongoing process revealed:
a. How well students were doing
b. How they felt about the course and themselves as learners
c. How valid the decisions already made were
d. What adjustments were needed before the final evaluation of the output
Analyzing the input organizational variables exposes ESP instructors to the
tangible reality of their work places, making them aware of the constraints as
well as the potentialities of their physical setting. Analyzing the throughput
organizational variables serves instructors as the ongoing measure by which to
make adjustments to the input variables, by, for example, requesting the cooper-
ation of the institution over equipment, or reconsidering course goals.
Last but not least, this continuous assessment of all the variables in the
teaching context should reveal to ESP instructors not only how well students
are doing (process) but also whether their students' performance (product) can
help them get nearer the target speech community they will be part of once
they have finished their course. Although this ethnographic consideration also
applies to General English courses (Boswood & Marriot 1994; Seedhouse 1995),
it is more critical in ESP courses as professionals at work are expected to be
more accurate in their command of English.

3. Evaluating the Output


Evaluating the output does not simply involve a single final test or a single
battery of tests just before the course ends. On the contrary, it means evaluat-
ing students' performance by taking into account the assessment of the input

and throughput variables plus all the adjustments made to the input after the
throughput assessment.
At IES, in evaluating the output after the first week of class, we had to assess
the students' performance and consider adjusting the input variables. Some
instructors were unable to interpret the throughput data they had collected and
so needed to make new decisions to ensure that students' performance would
be judged on the basis of learning (Krashen & Terrell 1983), and not on
inappropriate variables.
This, in my view, is one of the main aspects that makes ESP assessment
different from assessment in other courses. That is, the learning process, choice
of materials, and assessment and evaluation processes become more the responsibility
of both parties, teachers and students, in ESP courses than they might be
in General English courses. Although there is a need for student
participation and responsibility in the learning and assessing process in any
kind of English course (Biria & Hassan Tahririan 1994; Harris & McCann
1994; Murphey 1994-1995; Palacios Martinez 1993; Smith 1994), this need is
much more concrete in ESP. More often than not, ESP students possess the
subject-matter knowledge their ESP instructors lack and thus can help them
make more appropriate choices.
To sum up, input assessment gives ESP instructors the foundations to start
building, while throughput assessment indicates how well the structure is
being erected and if some bricks have not been properly laid and need adjust-
ment. Only then can the evaluation of the building activity start to take place.
But this final evaluation should consider factors such as the proper selection
of bricks and mortar, a change of materials if they fail to meet workers' needs,
and so on. Regular meetings, interviews, observations and record keeping are
considered a fundamental part of this final evaluation (Allerson & Grabe
1986).

The Impact of Process Assessment on an ESP Course
To implement process assessment at IES some changes had to be made.
These changes involved teachers and students as well as the institution itself.
ESP instructors participated in a series of seminars centered around three
aspects: new trends in ESP, classroom management and process assessment
training. The seminar activities aimed to instruct the participants in new teach-
ing methodologies, data interpretation, record keeping procedures and process
assessment implementation.
ESP instructors then held regular meetings with subject-matter teachers to
decide on topics to discuss cooperatively in their classes, and to talk about new
ways of assessing the course content. These meetings proved beneficial for both
kinds of teachers. ESP instructors were helped with doubts they had about
content, and subject-matter teachers increased their knowledge of English by
helping ESP instructors choose the class materials. Cooperative teaching has
proved highly beneficial in any course of English; for ESP courses it has meant
an effective solution to the long-standing debate on ESP instructors' subject-matter

knowledge as "cooperative teaching provides a short cut to subject-knowledge for
ESP teachers" (Giménez 1995: p. 11).
In the same way, students were asked to participate in the learning process
by suggesting activities and topics to be discussed. They usually favored those
topics they also discussed in subject-matter classes, and activities they believed
simulated their future professional life. Students also discussed ways
of assessing their performance.
As a result, students felt deeply motivated and participated actively, as
they understood they played an important part in the process. They grew more
responsible for their learning, and instructors felt highly rewarded. Individual
and organizational throughput variables made a great contribution to process
assessment. Students were actively concerned with record keeping as a way of
accounting for those activities and materials that should be kept and those that
should be discarded.
Instructors felt so involved in the project that they set up regular meetings to
discuss the data they had collected, to compare results, and to help one another
interpret the data so as to reassess the input variables most efficiently.
Throughput assessment took most of the meeting time. Progress tests were
designed and administered at set times, every eight classes, to complement the
results produced by analyzing students' motivation charts, students' diaries and
teacher-student interviews.
The institution principals also cooperated extensively when they saw the first
signs of change. They participated in some of the meetings and proposed ideas
on how best to utilize the institution's human and material resources.
There were also positive effects on the assessment of students' performance.
Output assessment is now validated against all records kept by teachers and
students, as well as the considerations resulting from input and throughput
assessment. Now, after a student sits for the end-of-term exam, he/she "walks
away" with not only a score for his/her performance but also an assessment
portfolio that reflects all he/she has done and still needs to do. This portfolio
serves as a guide for the student when working in the self-access center and
when self-assessing his/her progress.
The assessment portfolio also means that the student's next-term ESP
instructor needs less time for input assessment, and more time can be spent on
the learning and assessment processes.
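A hypothetical sketch of what such an assessment portfolio might look like as a record is given below; the field names and sample entries are assumptions for illustration, not the actual IES format.

```python
# Hedged sketch of an assessment portfolio: the score plus a record of
# what the student has done and still needs to do, guiding self-access work.
from dataclasses import dataclass, field

@dataclass
class AssessmentPortfolio:
    student: str
    final_score: float
    completed_work: list = field(default_factory=list)  # activities, progress tests taken
    areas_of_need: list = field(default_factory=list)   # items still to work on

    def self_access_plan(self) -> list:
        """A guide for self-access center work: the outstanding areas of need."""
        return list(self.areas_of_need)

p = AssessmentPortfolio("A. Student", 78.0,
                        completed_work=["progress test 1", "oral interview"],
                        areas_of_need=["report writing", "telephone English"])
print(p.self_access_plan())
```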

Conclusion
Assessment in ESP courses should be a reflection of the very nature of
ESP. That is, assessment in ESP courses should aim to meet the specific
needs of a specific group of students. In other words, "If one teaches ESP
then one clearly has to test ESP..." (Alderson 1988a: p. 16). And, "Naturally,
theory shapes assessment" (Allerson & Grabe 1986: p. 163). Assessing ESP
students' performance as process rather than as product means a different
and, at times, more complex way of looking at testing. Nonetheless, it should
assure instructors of a more reliable final product, which would, in

due time, enable students to become interactive participants in the ESP
speech community.
Moreover, process assessment applied to ESP courses encourages students to
become involved in the learning process, making them more participatory and
responsible at the same time. By the same token, institutions also grow more
aware of the needs of both their ESP instructors and students.

(Revised version received February 1996)

Acknowledgements: I am grateful to Peter Master and Liz Hamp-Lyons for
their comments, and to the two anonymous reviewers who have helped me make
this a better article. All my gratitude to my colleagues Nora Sapag and Maria
I. Asis for providing me with their helpful comments on the revised version of
this article.

REFERENCES
Alderson, J. C. (1988a). Testing English for specific purposes: how specific can
we get? In A. Hughes (Ed.), Testing English for university study (pp. 16-28).
ELT Documents: 127. Oxford: Modern English Publications.
Alderson, J. C. (1988b). Testing and its administration in ESP. In D. Chamberlain
& R. J. Baumgardner (Eds.), ESP in the classroom: Practice and evaluation
(pp. 87-97). ELT Documents: 128. Oxford: Modern English Publications.
Allerson, S. & Grabe, W. (1986). Reading assessment. In F. Dubin, D. E. Eskey,
& W. Grabe (Eds.), Teaching second language reading for academic purposes
(pp. 161-181). Reading, MA: Addison-Wesley.
Aydellot, J. R. (1995). Foreign language curriculum organization. English
Teaching Forum, 33(1), 30-31.
Berwick, R. (1989). Needs assessment in language programming. In R. K.
Johnson (Ed.), The second language curriculum. Cambridge: Cambridge
University Press.
Biria, R. & Hassan Tahririan, M. (1994). The methodology factor in teaching
ESP. English for Specific Purposes, 13(1), 93-101.
Boswood, T. & Marriot, A. (1994). Ethnography for specific purposes: teaching
and training in parallel. English for Specific Purposes, 13(1), 3-21.
Ellis, M. & Johnson, C. (1994). Teaching business English. Oxford: Oxford
University Press.
Giménez, J. C. (1995). Cooperative teaching in ESP: a third view. TESOL
Matters, February/March 1995, 11.
Goldstein, L. & Liu, N-F. (1994). An integrated approach to the design of an
immersion program. TESOL Quarterly, 28(4), 705-725.
Harris, M. & McCann, P. (1994). Assessment. Oxford: Heinemann.
Hutchinson, T. & Waters, A. (1989). English for specific purposes. Cambridge:
Cambridge University Press.
Krashen, S. & Terrell, T. D. (1983). The natural approach: language acquisition
in the classroom. Oxford: Pergamon Press.
van Lier, L. (1994). Some features of a theory of practice. TESOL Journal, 4(1),
6-10.
Murphey, T. (1994-1995). Tests: learning through negotiated interaction.
TESOL Journal, 4(2), 12-16.
Nunan, D. (1989). Understanding language classrooms. Cambridge: Prentice Hall
International.
Palacios Martinez, I. M. (1993). Learning from the learner. English Teaching
Forum, 31(2), 44-47.
Prodromou, L. (1995). The backwash effect: from testing to teaching. ELT
Journal, 49(1), 13-25.
Rea-Dickins, P. & Germaine, K. (1993). Evaluation. Oxford: Oxford University
Press.
Seedhouse, P. (1995). Needs analysis and the general English classroom. ELT
Journal, 49(1), 59-65.
Smith, P. (1994). Learner self-assessment in reading comprehension: the case for
student-constructed tests. English Teaching Forum, 32(4), 41-43.
Spada, N. (1994). Classroom interaction analysis. TESOL Quarterly, 28(4),
685-688.

Julio C. Giménez is an ESP instructor at the Instituto de Estudios
Superiores and Universidad Empresarial Siglo XXI and an EFL teacher at the
National University of Córdoba in Córdoba, Argentina. He has published in
TESOL Matters and ELT News & Views from Buenos Aires, Argentina.
