A Practical Guide to Teacher Education Evaluation
edited by
Jerry B. Ayers
Mary F. Berney
Kluwer Academic Publishers
Boston/Dordrecht/London
Distributors for North America:
Kluwer Academic Publishers
101 Philip Drive
Assinippi Park
Norwell, Massachusetts 02061 USA
Contributing Authors
Acknowledgments
1. Introduction
   J. T. Sandefur
2. Background for Teacher Education Program Evaluation
   Jerry B. Ayers and Mary F. Berney
3. The Accreditation Plus Model
   Jerry B. Ayers, William J. Gephart, and Paul A. Clark
4. Selection and Evaluation of Knowledge Bases for Teacher Education Programs
   Roger S. Pankratz
5. Quality Controls in Teacher Education Programs
   William E. Loadman
6. Testing for Admissions
   Lawrence Rudner
7. Evaluating Field-Based Experiences in Teacher Education
   Nancy L. Zimpher
8. Assessing Student Performance Outcomes in Teacher Education Programs
   Joyce R. McLarty
9. Assessment of Faculty in Teacher Education Programs
   John A. Centra
10. Use of Mail Surveys to Collect Information for Program Improvement
    Jerry B. Ayers
11. Follow-Up Evaluation of Teacher Education Programs
    James R. Craig
12. Evaluating the Structure of the Education Unit
    Edell M. Hearn
13. Physical Facilities Evaluation in Teacher Education Programs
    Mary F. Berney
14. Evaluating Financial Resources for Teacher Education Programs
    Robert L. Saunders
15. Evaluation of Library Resources for a Teacher Education Program
    Edward D. Garten
16. Models and Modeling for Teacher Education Evaluation
    Mary F. Berney and William J. Gephart
17. Implementation of Evaluation Results
    William L. Rutherford
18. Elements of Law as They Relate to Teacher Education Evaluation
    Joan L. Curcio
19. We Can Get There from Here
    Mary F. Berney and Jerry B. Ayers
Appendix
with a major in experimental psychology. For the past three years he has been
involved with the development and field testing of assessment devices for use in
the evaluation of teachers in the state of Kentucky. He is the author of over 75
papers and three books including Methods of Psychological Research.
His current research interest is in the area of teacher evaluation.
The editors wish to thank a number of people for assisting us in the completion of
this project. Our staff, support personnel at Tennessee Technological University, the
authors, the publisher's staff, and our families contributed to the effort in various
ways. We were blessed with such an abundance of excellent material from the
authors that we had to make difficult choices about what had to be omitted from this
final version, but we want to take this space to make public our thanks to the people
whose assistance made it possible for us to complete this book.
Joni E. Johnson typed the greater part of both the draft and the final version.
She takes pride in having learned to trick the computer into producing what we
wanted rather than what it thought we needed and we are happy that she did. We also
appreciate her constant quest for perfection and her cheerful, professional attitude.
Graduate assistants Lori A. Birdwell, P. Christine Sibert, Boolie S. Stephens,
and Teresa A. Thompson served ably as proofreaders, typists, researchers, and
indexers. They represent the best of the new generation of professional educators and
we were fortunate indeed to have their assistance on this project.
John E. James, Joni E. Johnson, and Sandra K. Miles each provided some of
the graphics for this text; we gratefully acknowledge their expertise.
Patricia Eaves, Sharon A. Heard, and Edith A. Young, support staff in the
College of Education, also helped with numerous editing chores and their willingness
to take on the additional burden is appreciated.
Joel Seber and Carl W. Owens provided technical assistance. Dr. Owens was
most generous in sharing his office and his equipment as well as his time.
Linda Mulder, Jean Moore, and Roger Jones of the Tech library provided
assistance in checking references and compiling the Appendix to the text.
Special thanks go to Mark Gregory of Inacomp Computers in Nashville,
Tennessee. Mr. Gregory loaned a Macintosh computer and word processing package
to the staff in the Center for Teacher Education Evaluation for the production of the
final copy of this book.
While we assume responsibility for the appearance and content of the final
product, we gratefully acknowledge the painstaking proofreading done by Sharon
Heard and James Harper.
We thank the authors, not only for producing such excellent pieces initially, but
for their patience with our deadlines and our editing of their work as we first
expanded, then reduced the size of each chapter. The trends and issues described in the
papers are those which educators face daily; the proposed solutions or approaches are
practical and worthy of serious consideration. Working with these authors has been a
positive educational experience for us.
Zachary Rolnik and his staff were very patient with our constant questions and
requests. We are grateful for the professional assistance we received from them.
To Mary N. Ayers and James Harper, we can only say, "We hope the next one
will be easier." We do appreciate your support, and that of everyone else who was
involved in the project.
A Practical Guide to
Teacher Education Evaluation
1
INTRODUCTION
J. T. Sandefur
Western Kentucky University
These are not my words. They are a direct quote from the Executive
Summary of the Carnegie Forum Report on Education and the Economy entitled
A Nation Prepared: Teachers for the 21st Century (1986, p. 2).
This report was motivated by four purposes:
3. To reaffirm that the teaching profession is the best hope for establishing
new standards of excellence as the hallmark of American education; and
Although the Carnegie Report was published in 1986, 1984 may well be
remembered as the year of the "Reports on Education" and the year that initiated
what some are now beginning to call the educational reformation. Following
years of increasing public concern about the quality of education of America's
youth, the nation was galvanized to action by a series of reports, chief of which
was the report entitled A Nation at Risk: The Imperative for
Educational Reform (1983). That report confirmed the public's conviction
that education was in desperate need of reform. The report, which was brief,
succinct, and well written, made effective use of emotion-laden words and
phrases. For example, the title, "A Nation at Risk," brought even the most
complacent to attention. Repeated reference to "the rising tide of mediocrity" and
the statement, "If an unfriendly foreign power had attempted to impose on
America the mediocre educational performance that exists today, we might well
have viewed it as an act of war" (p. 5) brought the public's concern to a fervent
pitch. As a result, the report is seen to be the capstone of an educational reform
movement and the impetus for states to legislate and mandate all sorts of
educational reforms. The result in many states was legislation to test both
students and teachers, to increase the length of the school day, to cut out frills
and to stress basic skills, to develop career ladders for teachers, to limit athletics,
to develop beginning teacher programs, and to initiate or implement dozens of
other reforms.
As a result of the emphasis on evaluation by both accreditation agencies and
the so-called reform movement, universities preparing teachers have eagerly
sought assistance in developing and implementing evaluation programs of their
graduates. For years much of that assistance has come from Tennessee
Technological University under the able leadership of Dr. Jerry B. Ayers and his
staff. The leadership continues with the publication of the book, A Practical
Guide to Teacher Education Evaluation.
The content of the book is highly appropriate to the needs of universities.
For example, the knowledge base of teacher education is a primary concern of
institutions preparing teachers. Personnel want to know how it is identified,
explicated and implemented. Other primary concerns of teacher education covered
include evaluation issues of admissions, field experiences, student performance
outcomes, surveys, follow-up programs, faculty and structure of the governance
unit. These and other significant topics have been covered by recognized experts
in teacher education evaluation.
There can be no doubt but that the book will be warmly received by the
teacher education community. The editors and authors should be commended for
their contribution to the improvement of teacher education.
2
BACKGROUND FOR TEACHER EDUCATION PROGRAM EVALUATION
The need for improved evaluation of teacher education programs has been well
documented over the past two decades. In the past five years, most states have
mandated substantial reforms in teacher education programs. These reforms have
included:
Recent work by the Southern Regional Education Board indicated that too
little program evaluation was implemented to show if these changes in teacher
preparation were really making a difference (SREB, 1988). States such as
Florida, Virginia, Oklahoma, Louisiana, West Virginia, and Georgia are
examining ways to evaluate teacher education. However, there is a dearth of
practical methods to accomplish the needed evaluations in a systematic and
ongoing manner.
Daughdrill (1988) recently pointed out that assessment and evaluation are
doing more for higher education than any other development in recent history.
The Carnegie Foundation (1982) emphasized a need for institutions of higher
education to "reaffirm and strengthen self regulation." Other national
commissions and scholars echoed this stance. Evaluation is a key to the reform
process. This book is designed to meet the evaluation needs of institutions of
higher education relative to improving programs and the needs of society for
mandated accountability.
National Accreditation
Regional Accreditation
regional accrediting associations for both schools and colleges were in operation.
These six COPA-recognized regional accrediting associations include:
State Approval
Some type of approval process for teacher education programs exists in each
of the fifty states. The Center for Teacher Education Evaluation staff studied
guidelines for the evaluation and approval of teacher education programs from all
fifty (Ayers, 1988). Six states employed the NCATE standards, and seven states
used NASDTEC standards. Several states use a combination of standards. For
example, in Tennessee, there is a state approval process for teacher education
programs; however, those institutions that are NCATE approved are
automatically approved by the State of Tennessee. It is anticipated that more
states will adopt the NCATE standards (or some modification) for the approval
of teacher education programs in the near future.
The standards for approval of teacher education programs by states that did
not employ NCATE and/or NASDTEC guidelines were examined in depth to
determine the needs for formative and summative evaluation. No additional
evaluation needs were found in these guidelines. In some instances, a state
might require a particular test (i.e., California requires students being admitted to
formal study in teacher education to perform at a particular level on the
California Basic Skills Test).
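The combination-of-standards logic described above can be sketched as a small decision function. This is our illustrative simplification, not a rule taken from the Center's study; the state names and the `passed_state_review` parameter are hypothetical:

```python
def state_approved(state: str, ncate_accredited: bool, passed_state_review: bool) -> bool:
    """Hypothetical sketch of a state approval rule.

    Some states (here, Tennessee) automatically approve any
    NCATE-accredited program; others rely on their own review.
    """
    if state == "Tennessee" and ncate_accredited:
        return True  # NCATE accreditation implies state approval
    return passed_state_review  # otherwise the state's own review decides

# An NCATE-accredited Tennessee program needs no separate state review.
print(state_approved("Tennessee", ncate_accredited=True, passed_state_review=False))
```

A real implementation would look up each state's rule in a table rather than hard-coding one state, but the sketch captures the "automatic approval" shortcut the text describes.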
Knowledge Base
The selection of a knowledge base has become a primary issue in the past
several years. The 1987 NCATE standards emphasize the development of a
knowledge base to guide the operation of a teacher education program. To this
end, Chapter 4 provides a description of how to select a knowledge base that can
become the foundation for program change and improvement.
Evaluation of the knowledge base is described in Chapters 4 through 11.
Chapter 4 provides specific information and suggestions for evaluating the
knowledge base that was used in a particular teacher preparation program.
Laboratory and field experiences are among the most important aspects of
the preparation of future teachers. For this reason, emphasis was given in
Chapter 7 to the various methods for evaluating different types of laboratory
situations, ranging from the evaluation of students observing in the classroom to
student teaching, in which an individual assumes the role of the teacher for an
extended period.
Outcomes Assessment
Follow-up Evaluation
Mail follow-up studies are widely used to gather evaluation data on teacher
preparation programs. Chapter 10 includes techniques to use in the development
of questionnaires specific for a given program. True follow-up evaluation in
teacher education requires the use of observation instruments in the classrooms
of graduates. Chapter 11 includes a description of various techniques that can be
used to effect studies of the follow-up of teacher education graduates. Follow-up
is a key to improving teacher education programs and for providing the needed
feedback for program development.
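As a minimal illustration of the bookkeeping behind a mail follow-up study, the sketch below computes a simple response rate. The figures (250 mailed, 180 returned) are hypothetical and not drawn from the chapters:

```python
def response_rate(mailed: int, returned: int) -> float:
    """Fraction of mailed questionnaires that came back usable."""
    if mailed <= 0:
        raise ValueError("must mail at least one questionnaire")
    return returned / mailed

# Hypothetical follow-up of 250 program graduates with 180 usable returns.
rate = response_rate(mailed=250, returned=180)
print(f"Response rate: {rate:.0%}")  # prints "Response rate: 72%"
```

In practice a low rate would prompt follow-up mailings and a check for nonresponse bias before the data are used for program decisions.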
Program Resources
Part III of the book was designed to provide supplemental information about
a continuous system of formative and summative evaluation. The "Plus" part of
the Accreditation Plus Model was designed to provide additional information
about a teacher education program that was beyond the normal evaluations
conducted as a part of a continuous accreditation study. Chapter 16 provides a
description of classic models that can be used in evaluation and examples of the
use of selected models. The use of evaluation information for program
improvement is essential. Chapter 17 provides a guideline for using evaluation
information for program improvement. The chapter focuses on faculty
involvement and provides a checklist for using information for program
improvement. Chapter 18 includes a summary and discussion of some of the
major legal questions that an evaluator will be confronted with in conducting
various types of studies. The information is of particular use to those
individuals who do not have an extensive background in legal affairs. Chapter
19 provides an overview of the future. It includes a brief critique of Chapters 1
through 18 and examination of future developments in the field of teacher
education program evaluation.
3. In order to become familiar with (or review) the legal status and
requirements related to program evaluation, review Chapter 18.
4. Review the existing evaluation plans for the teacher education program, the
types of data available from existing sources, and additional data needed
(Ewell & Lisensky, 1988; Jones, 1988).
7. Determine if there are any additional areas of evaluation that have not been
met. Review Chapter 16 on models and modeling. This will serve as a
base for the development of additional evaluation tools.
Although the materials contained in this book were built around NCATE
standards, due emphasis was given to regional accreditation standards and to the
program approval requirements of the various states. If an institution follows
the evaluation procedures contained in this book, the program for the preparation
of teachers should meet the accreditation/approval standards of any agency.
evaluations of the evaluation, are conducted along the same lines as any other
evaluation. The standards were written as guiding principles and
SUMMARY
Each of the chapters described here can be used alone or in any combination
that will be of greatest benefit to the user. Someone who has experience with
accreditation may choose to begin with an overview of the changes in the
NCATE Standards. A person or committee with less experience may want to
read through the chapters in sequence. Individuals wishing to look at aspects of
a program which were not previously evaluated should begin with those, by
topic, or perhaps with the chapter on models to see what precedents exist.
Above all, the contributing authors and the editors want this to be a practical
guide to program evaluation for teacher educators.
REFERENCES
Ayers, J. B., Gephart, W. J., & Clark, P. A. (1988). The Accreditation Plus
Model. Journal of Personnel Evaluation in Education, 1, 335-
343.
Carnegie Foundation for the Advancement of Teaching. (1982). The control
of the campus. Princeton, NJ: Princeton University Press.
Daughdrill, J. H. (1988, January 27). Assessment is doing more for higher
education than any other development in recent history. The Chronicle
of Higher Education, 34(20), A52.
Ewell, P. T., & Lisensky, R. P. (1988). Assessing institutional
effectiveness. Washington: Consortium for the Advancement of Private
Higher Education.
Gollnick, D., & Kunkel, R. (1986). The reform of national accreditation. Phi
Delta Kappan, 68, 310-314.
Hord, S. M., Savage, T. M., & Bethel, L. J. (1982). Toward usable
strategies for teacher education program evaluation. Austin,
TX: The University of Texas, Research and Development Center for
Teacher Education.
Joint Committee on Standards for Educational Evaluation. (1981). Standards
for evaluations of educational programs, projects, and
materials. New York: McGraw-Hill.
Jones, D. W. (Ed.). (1988). Preparing for NCATE: Criteria for
compliance: external evaluation. Chicago: North Central
Association of Colleges and Schools.
National Council for Accreditation of Teacher Education. (1987). Standards,
procedures, and policies for the accreditation of professional
education units. Washington: NCATE.
Sandefur, J. T. (1982). Teacher education's evaluation of graduates: Where are
we going and how do we know when we get there? In S. M. Hord, T. V.
Savage & L. J. Bethel (Eds.). Toward Usable Strategies for
Teacher Education Program Evaluation. Austin, TX: The
University of Texas, Research and Development Center for Teacher
Education.
Southern Regional Education Board. (1988, November). State-level
evaluation of teacher education programs in the SREB states.
Atlanta: Author.
3
THE ACCREDITATION PLUS MODEL
Paul A. Clark
Milligan College
Much of the blame for the present condition of education in the nation has been
placed on teacher education. Teacher education is in need of revision and reform.
However, there is a paucity of knowledge about what the content of teacher
education programs should be and about the relationship between preparation
programs/knowledge base and effective teacher performance. Basic research and
evaluation data by institutions preparing teachers must be collected and analyzed
in order to overcome the problems associated with teacher education reforms.
The value of systematic evaluation in improving teacher education programs
cannot be overestimated. The standards of such groups as the National Council
for Accreditation of Teacher Education, regional accreditation associations, and
state departments of education require that teacher education programs be
accountable for their products, i.e., the graduates of the programs. To that end,
systematic formative and summative evaluations must be undertaken.
A variety of models can be used to evaluate various aspects of teacher
education curricula. There is, however, a dearth of comprehensive evaluation
models. The central mission of the Center for Teacher Education Evaluation is
the development, refinement, and field testing of evaluation models and
materials. To remediate that condition, the Center staff developed the
Accreditation Plus Model. This Model is a viable entity that can be used as a
vehicle for the evaluation of teacher education programs. The model has become
the basis for a practical approach to the evaluation and improvement of teacher
education programs that is outlined in this book. An article describing the
Model was published under the same title as this chapter in 1988 in the Journal
of Personnel Evaluation in Education, 1, 335-343. The article is
reprinted as the remainder of this chapter with an updated schematic of the model.
Evaluation of education has an extensive history. It dates to work done by
the University of Michigan in starting the North Central Association of
Colleges and Secondary Schools. The association evaluated secondary schools to
help the University make its matriculation decisions (circa 1895). From the turn
of the century to the mid-1950s, diverse factors gave rise to four general approaches
to educational evaluation. Those four are described by Madaus, Stufflebeam, and
Scriven in "A Historical Overview" (pp. 3-22) of their book, Evaluation
Models. Those evaluation forms were:
The mission of the Center for Teacher Education Evaluation (CTEE) is the
improvement of teacher education in the State of Tennessee and elsewhere. A
central objective in the accomplishment of that mission is the development of a
way or ways to improve teacher education evaluation. As the Center's activities
got under way, the Center staff began a literature search to find applicable
evaluation models or forms of evaluation.
Finding references to "evaluation models" or approaches was not difficult.
In a few months about 40 references to evaluation models were found. The
Center's problem changed. We no longer sought bibliographic references to still
another model. Rather, we sought ways of applying specific evaluation models
to improvements in the education of teachers.
In years past we have chided evaluation theorists about proliferation of
evaluation models. The Center for Teacher Education Evaluation realizes that if
the proliferation of evaluation models is counterproductive when others do it, it
is equally abhorrent when we do it. That set the stage. Which of these
evaluation approaches could (1) help us understand the evaluation process central
to our work, and (2) help us reduce the number of extant evaluation models?
The "dimensions of value" useful in assessing the quality of individual
evaluation models include: (a) complete versus incomplete models, (b) old
versus new models, (c) mature versus immature models, and (d) isomorphic
versus analogous models. At the same time we can conceive of types of models.
Here we focus on verbal, graphic, physical, and mathematical modeling. A
model (among other things) is a representation of the entity being modeled. It
stands in for some thing. My right hand is a representation of my left hand.
For every part in my left hand there is a corresponding part in my right hand.
Only one property, handedness, keeps my left hand from being an isomorphic
model of my right hand: its mirror-image character.
We sought a model that will help us understand the evaluation of teacher
education. We would like a complete, whole model, one that has some extended
history, and thus is aged and mature. The Center's charge is to improve teacher
education by improving our ability to evaluate teacher education programs. We
sought a model which has program evaluation at its core.
The recognition of "accreditation" as an evaluative approach helped delineate
and redirect the Center's task. Accreditation is a form of evaluation. It assists
people in making informed decisions in situations in which the relative worth of
competing options is difficult to measure. Madaus, Scriven, and Stufflebeam
speak to that point in their historical overview. Others who state or imply that
accreditation is a form or model of the evaluation process include R. Travers, W.
Webster, E. House, R. Stake, R. Floden, J. Sanders, and B. Worthen, whose
work on the subject is included in the Madaus, Stufflebeam, and Scriven text
cited previously.
The accreditation standards of NCATE (National Council for Accreditation of
Teacher Education) are impressive on careful examination. NCATE's
standards are clustered in five categories:
At this point the education unit has a decision to make. If the education
unit is satisfied with the information generated via the accreditation process, then
that documents compliance. The design and planning work is done. What is left
is implementation and monitoring. The Accreditation core of the model has
been accomplished.
If, however, additional evaluative questions exist and the education unit
wants those questions answered, "use-tailored evaluation" procedures will be planned
and implemented. This is the place for the Plus aspect of the Accreditation Plus
Model. And, this is the time to turn to the 40 or so extant evaluation models in
search of evaluation tools and techniques that will produce the desired evaluative
findings. The call here is for an informed eclecticism in the assembly of
evaluational procedures that will meet the additional evaluative needs. An
application of A-Plus thus supplies both the accreditation compliance information
and the evaluative information not produced by the accreditation process itself.
The subtle, unwritten policy that bigger programs are better is not the
position of the CTEE. The Accreditation Plus Model will be designed to be
helpful to serious small colleges or agencies that want to improve their
evaluation expertise and practices. The assumptions of this model are
summarized below.
3. The atmosphere of accreditation is, but should not be, that of a crisis.
Threat is detrimental to productive change. The individuals and agencies
who direct accreditation should take all the steps necessary to change that
climate. Accreditation can and should be a team affair, a force for the future,
not a spectre from the past.
The components of the A-Plus Model and their general relationships are
presented in Figure 1. Concurrent with this writing Center staffers are
developing flow charts that will show the work to be done in applying
Accreditation Plus. The sequence of presentation of the components of the
model has little consequence.

[Figure 1. The Accreditation Plus Model. The schematic relates the model's
components: (1) the Ultimate Criteria of pupil growth outcomes (academic
growth; physical, social, and emotional growth; mastery of language arts
skills; mathematical literacy; learning skills; productive citizenship);
(2) what completion of the program guarantees the candidate will have
(earned a major in a recognized field of study, e.g., physics, history,
language arts; attained a liberal arts education; mastered educational
theory, principles, methods, and practices; performed according to standards
in a set of monitored field experiences; obtained a provisional license to
teach); (3) the program elements (candidate selection, program, staff,
candidate outcomes, educational unit, follow-up, pupil outcomes); (4) the
accreditation process, whose existence a teacher education program
necessitates, at the national, state, and regional levels (handbook on
standards, self-studies, teams of experts, site visits, reports on the unit,
reviews by panel, reports on decision); (5) formative and summative
evaluation drawing on clusters of extant evaluation models (systems models,
goal-based models, naturalistic models, formative/summative models, and
others) and proven evaluation tools; and (6) evaluation of the evaluation
against the Joint Standards Committee standards: I. select the level(s) of
accreditation sought; II. list the evaluative questions that structure
accreditation at the selected level(s); III. establish data generation and
reporting procedures; IV. plan for improvement and resources.]

The Accreditation Plus Model has been presented
to numerous audiences. Each started with a different component. There is some
logic for starting with the Ultimate Criteria (#1) and the Second Ultimate
Criteria (#2), Pupil Growth Outcomes and Teacher Candidate Outcomes. The
components of The Accreditation Plus Model are as follows:
Summary
In adopting A-Plus as the evaluation approach for the project, we capitalize on a long and
effective history supplemented by the best of the current crop.
Figure 1 represents the evaluation of teacher education using the
accreditation approach. It describes the model's components and in some places,
the relationships of those elements. The Center's staff is working to detail
further the elements of the accreditation approach to evaluation. Questions and
comments are welcomed as we move further with this work.
4
SELECTION AND EVALUATION OF KNOWLEDGE BASES FOR TEACHER EDUCATION PROGRAMS
Roger S. Pankratz
Western Kentucky University
the unit ensures that its professional education programs are based on
essential knowledge, established and current research findings, and
sound professional practice. Each program in the unit reflects a
systematic design with an explicitly stated philosophy and objectives.
Coherence exists between (1) courses and experiences, and (2) purposes
and outcomes (NCATE, 1987).
the unit ensures that its professional education programs have adopted a
model(s) that explicates the purposes, processes, outcomes, and
evaluation of the program. The rationales for the model(s) and the
knowledge bases that undergird them are clearly cited along with goals,
philosophy, and objectives (NCATE, 1987).
These excerpts from the Knowledge Base standards, and the remaining four
standards and 22 criteria, have prompted many teacher educators who are
responsible for developing institutional reports for accreditation to ask:
4. What resources are available for addressing the knowledge base standards?
5. How can faculty know if the knowledge bases that have been selected and/or
developed are adequate?
Philosophical Definitions
Clearly there is no such entity as "the one knowledge base for professional
education." Rather, there are many knowledge bases that exist in many different
forms. Furthermore, knowledge bases are not static but rather, they expand as
ongoing research, scholarship, and experience constantly contribute to our
understandings. This is the philosophical side of the knowledge base definition.
Those faculty or staff members selected to produce an institutional report for
accreditation will, however, be interested in a much more practical definition.
Furthermore, anyone who has been briefed on the requirements and processes of
the accreditation process knows that the knowledge base for professional
education at each institution must be a well-defined and documented entity that
can be evaluated by members of a Board of Examiners relative to a set of
published standards and criteria. Thus, for those whose lot or honor it is to
provide leadership in selecting, evaluating, and describing program knowledge
bases in an institution, the following operational definition is offered.
An Operational Definition
The text of the NCATE Standard I. A. and the first two criteria include the
following items as expectations for programs in a teacher education unit:
o Essential knowledge
o Established and current research findings
o Sound professional practice
o Systematic design
o Explicitly stated philosophy
o Goals and objectives
o Coherence between experiences and outcomes
o Adopted models
o Purposes
o Processes
o Evaluation
o Rationales for models
o Knowledge bases that undergird
o Scholarly inquiry
o Theory development
Many of these items suggest criteria for the knowledge base source
documents that will contain the essential knowledge base for professional
education programs (i.e., essential knowledge, established and current research,
sound professional practice, scholarly inquiry, and theory development).
However, these source documents cannot exist in a vacuum nor can they be
selected at random. They are based on an "explicitly stated philosophy" and they
are selected according to a "systematic program design." Goals, objectives,
purposes, processes, and evaluations flow from a philosophy, and knowledge
base source documents support program objectives and evaluation processes.
Adopted models show the relationships between the elements of the program and
how the knowledge base sources "undergird" program elements.
Thus, a minimum of four essential elements is recommended for inclusion
in a program knowledge base document that is designed to address Standard I. A.
in an institutional report: (a) Program Philosophy and Assumptions, (b)
Program Outcomes and Evaluation Processes, (c) a Program Model Based on an
Organizing Theme, and (d) Knowledge Base Source Documents. Each of these
four elements can be developed through collaborative processes and should be
treated as four separate developmental tasks by a program faculty. Each of the
four will be further delineated below by (a) stating the task to be achieved, (b)
providing a rationale, (c) suggesting a process for achieving the task, and (d)
describing the resulting product that could become part of a program portfolio
or an institutional report in addressing Standard I. A.
1. What should we assume are the most important purposes of schools and
schooling for the children our graduates will teach (e.g., the development of
cognitive knowledge and skills, thinking skills, social skills, cultural
values, self concept, etc.)?
2. What do we believe are the most important role(s) our graduates should be
prepared to perform in their professional work place (e.g., technical expert,
organizer/manager, counselor model/leader, decision maker, etc.)?
4. What do we believe about the importance and function of clinical and field
experiences in our preparation program?
1. What are the limitations of the program with respect to length in years,
credit hours of professional education, and resources?
2. What state policies and mandates will place significant constraints on the
program (i.e., state guidelines, exit testing, statewide internship, etc.)?
3. What are the guidelines of learned societies that have been adopted that
require specific processes and/or outcomes?
Answers to each of these questions should affect the curriculum design, and
all have implications for selecting the appropriate knowledge bases. "Issue"
questions (1) and (2) regarding the assumed purpose of schooling and key roles
for which graduates will be trained are of special significance in shaping the
philosophy of the program. For example, if a program faculty assumes that the
primary purpose of schooling is to develop self-concept and that the most
important role of the teacher is that of a counselor, the program and the
knowledge base that supports it will be very different from one for a program
based on the assumption that cognitive knowledge and skills should be the
central purpose of schools and the role of the beginning teacher is that of a
technical expert. In considering questions (1) and (2) above, ample time should
be allowed for faculty input, debate, and discussion to develop ownership in key
elements of the program philosophy. Where time is not a factor and where
faculty like to be original and creative, planners might allow faculty to generate
their own responses to these questions. But, if planners generate a number of
responses to each important issue, design, or constraint question and then
facilitate discussion and agreement on the most acceptable response(s),
ownership can usually be achieved in a relatively short time frame.
This statement implies that program objectives and outcomes are influenced
by the philosophy and must show a direct relationship, a "coherence," with the
curriculum. Program outcomes also provide an organizer for selecting and
developing the program knowledge base. Suggesting an evaluation process
and/or the instrumentation for each outcome helps to further define the outcome in
the minds of planners and to remind them that each outcome must be framed in a
manner that can be subjected to assessment and an accountability system.
o Analysis of content
o Analysis of student needs
o Diagnosis of learning problems
o Planning curriculum
o Planning instructional strategies
o Implementing instruction
o Managing student behavior
o Managing materials and resources
o Evaluating student progress and providing feedback
o Evaluating instruction
o Communicating with students
o Communicating with parents
o Communicating with peers
Final Project. The process described above should produce a list of 10-
30 program outcomes statements, each with a suggested evaluation process that
embraces the entire realm of knowledge, skills, dispositions, and/or
competencies that a graduate of the program needs to demonstrate in order to
function as a fully prepared professional. The numbers 10-30 represent an
arbitrary recommendation. Experience has shown that too few outcomes
statements fail to provide the delineation of areas of knowledge, skills,
dispositions, etc. needed for program design. On the other hand, with too many
outcomes statements there is the danger of losing sight of the key performance
areas and the major performance foci for which the graduate is being prepared.
Following are examples of four possible program outcomes statements and
accompanying suggested evaluation processes developed for a middle school
teacher professional preparation program.
The four examples listed above are performance oriented. The specific types
of behaviors or performance criteria that would be acceptable are dependent to a
degree on the program philosophy. The more developed the evaluation processes
and instrumentation, the more clearly program performance outcomes will be
defined and specified.
Table 1
A Taxonomy of the Knowledge Base for Professional Studies
[Table entries not recoverable; surviving fragments mention research, evaluation, assessment techniques, and tests and measurements.]
Table 2
Identification of Knowledge Base Source Documents
[Column headings: Planning, Instruction, Evaluation, Management of Student Behavior.]
If program faculty are serious about identifying and explicating the essential
knowledge base for their students and about communicating this knowledge base,
then it must be in a form that can be accessed and communicated with ease.
The new NCATE standards in the Knowledge Base Category require faculty
collaboration in the design of curriculum. Furthermore, the intent of the
knowledge base standards is to induce all teacher education units to develop new
structures, processes, and behaviors that require total and ongoing involvement
of program faculty. It is a task that must be achieved if each program in the unit
is to reflect a "systematic design" with all the requirements listed in Standard I.
A. The pattern many institutions used under the old standards, appointing a
series of faculty committees to independently write parts of a self-study, is
simply not practical under the new accreditation guidelines. Teacher education
unit administrators must find ways to involve all program faculty in the
knowledge base development effort. While every program faculty has unique
characteristics and while some strategies work better in some situations than
others, there are some principles of operation and some leadership strategies that
appear to facilitate faculty involvement and participation in most curriculum
development efforts.
As a staff dean who has worked with a significant number of program
faculties to develop knowledge bases and who is presently orchestrating the
knowledge base development effort of ten program faculties on his own campus,
the author makes the following suggestions from the wisdom of practice. First, the
initiative, leadership, and authority for the knowledge base evaluation, selection,
and/or development must come from the dean and/or department chair's office. It
must be made clear to program faculty from the outset that the leadership of the
unit expects all faculty to participate in this developmental effort and that the
effort has a high priority. Also, it must be understood by all that those faculty
who have been given leadership responsibilities in the development process
assume these under the authority and full support of the unit administration.
Second, it is the dean's and/or department chair's responsibility to locate and
provide support and incentives for faculty involvement in the evaluation,
selection, and development of program knowledge bases. Even though unit
administrators may argue that budgets are tight and have been pared to the bone,
this author, with more than 23 years in teacher education, has never known an
education dean or department chair who did not have the ingenuity to locate
resources for what he or she regarded as highest priority. If the unit
administrators believe in the new NCATE standards that have knowledge bases
as the centerpiece, they should show evidence of their commitment to these
professional standards through support and incentives for faculty participation.
In practical terms, this may mean providing food services for faculty work
sessions. It may mean finding resources for an overnight faculty retreat. It may
mean locating and providing technical support services and outside expertise to
provide faculty development activities. It also may mean rewarding faculty who
make special contributions to the development process by paying them as
consultants or providing release time or some other professional incentive that is
valued. The key is to demonstrate to faculty that their professional contribution
and involvement are prized by the administration. The amount of faculty
participation and commitment to a development effort a dean can buy with
$10,000 is amazing if used judiciously to show faculty that their efforts are
appreciated.
Third, use overnight retreats and away-from-campus concentrated work
sessions to conduct faculty training or to obtain faculty input on key issues
related to knowledge base development. The completion of the four tasks
described earlier in this chapter and their products requires a concentrated effort by
the total faculty that is difficult to achieve near telephones, offices, and students.
It is critical that an initial workshop where the four elements of knowledge bases
described earlier are introduced, where key issues are discussed, and where the
foundation of a knowledge base is determined be done in a setting away from
normal routines. While full-day work sessions away from campus are
successful, planners at Western Kentucky University prefer two half-day
overnight retreats that include opportunity for interaction among faculty as well
as structured workshop activities.
Fourth, use small work groups to process faculty input from total faculty
involvement sessions (i.e., workshops and retreats) and to organizationally
transform these into products that can be studied, discussed, and refined. At
Western Kentucky University ten program faculty work groups of four or five,
each with a designated leader, have the responsibility to organize data,
information, and ideas from knowledge base development workshops and produce
drafts of program knowledge base documents with four elements:
1. Philosophy,
2. Outcomes and evaluation processes,
3. Program theme and model, and
4. Knowledge base sources.
The first draft document will be worked through, refined, and modified with
full faculty participation at the second faculty retreat session spaced four to six
months from the first faculty workshop. Leaders of each of the ten small work
groups will be paid a modest consulting fee for their professional contributions
and will be held accountable for the production of the drafts that will become the
program knowledge base document containing the four elements described.
[Additional suggestions relative to coordinating the involvement of everyone are
provided by Rutherford].
How Adequate Is Our Program Knowledge Base? Three Sets of
Criteria
Even though some performance areas or knowledge domains may not have key
documents in all three of the above categories, whether knowledge documents do
or do not exist in all categories should be addressed in the evaluation process.
Multiple Criteria. Valli and Tom (1988) propose that a knowledge base
framework must embody five characteristics for it to adequately inform the
practice of teaching and teacher education. Use of these five characteristics will
prompt a program faculty to go beyond what is essential and practical for the
beginning graduate. These characteristics are for those program faculty who are
reaching for a knowledge base that fully addresses the needs of the master teacher
and seasoned practitioner at all levels. According to Valli and Tom (1988), this
higher level knowledge base framework must meet the following requirements:
The authors of these requirements have labeled these five characteristics as: (a)
the scholarly criterion, (b) the multiplicity criterion, (c) the relatedness criterion,
(d) the usefulness criterion, and (e) the reflectivity criterion.
Using the description of these five characteristics suggested by Valli and
Tom (1988), a program faculty might ask themselves the following questions
regarding the adequacy of their knowledge base:
SUMMARY
REFERENCES
Bernal, E., Cleary, M., Connelly, M. J., Gerard, M. L., Kryspin, J., &
Nicodemus, E. (1988). A taxonomy of the knowledge base for
professional studies. In D. W. Jones (Ed.), Knowledge base for
teacher education. Muncie, IN: Ball State University.
Brouillet, F. B., Marshall, C. R., & Andrews, T. E. (1987). Teaching and
learning in the affective domain: A review of the literature.
Olympia, WA: Professional Education Section, Office of the Department of
Public Instruction.
Brouillet, F. B., Marshall, C. R., & Andrews, T. E. (1987). Teaching and
learning in the cognitive domain: A review of the literature.
Olympia, WA: Professional Education Section, Office of the Department of
Public Instruction.
Gideonse, H. D. (1989). Relating knowledge to teacher education:
Responding to NCATE's knowledge base and related standard.
Washington, DC: AACTE Publications.
Howey, K. R., & Zimpher, N. W. (1988, February). A workshop on
program change and assessment in teacher education. Presented
at the meeting of the American Association of Colleges for Teacher
Education, New Orleans, LA.
Knowledge base for the beginning teacher internship program.
(1989). Frankfort, KY: The Kentucky Department of Education, Capitol
Plaza Towers.
Mitzel, H. (Ed.). (1982). Encyclopedia of educational research (5th ed.).
New York: Collier Macmillan.
National Council for Accreditation of Teacher Education. (1987).
Standards, procedures, and policies for the accreditation of
professional education units. Washington, DC: Author.
Pankratz, R. S., & Galluzzo, G. R. (1988). Designing a knowledge base for
teacher education programs: A developmental workshop for program
faculty. Bowling Green, KY: Western Kentucky University.
Reynolds, M. C. (Ed.). (1989). Knowledge base for the beginning
teacher. New York: Pergamon Press.
Short, E. C. (1987, July/August). Curriculum decision making in teacher
education: Policies, program development, and design. Journal of
Teacher Education, 2-12.
Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new
reform. Harvard Educational Review, 57(1), 1-22.
Smith, D. C. (Ed.). (1983). Essential knowledge for beginning
educators. Washington, DC: American Association of Colleges for
Teacher Education, Clearinghouse on Teacher Education.
Valli, L., & Tom, A. R. (1988). How adequate are the knowledge base
frameworks in teacher education? Journal of Teacher Education,
39(5),5-12.
Wisniewski, R. (1988). Illinois Smith and the secret of the knowledge base.
Journal of Teacher Education, 39(5), 2-4.
Wittrock, M. C. (Ed.). (1986). Handbook of research on teaching
(3rd ed.). New York: Macmillan.
5
William E. Loadman
The Ohio State University
4. Identify and delineate criteria and standards for various program aspects.
5. Examine internal (e.g., local program standards) and external standards (e.g.,
national associations) to consolidate and identify uniqueness to assist in
reduction of duplication of efforts.
Purpose
A system that attempts to be all things to all people for all situations
sometimes results from poorly stated purposes.
Locus of Control
Who is responsible for the quality control system? Control can reside with
State Departments of Education, legislatures, accrediting agencies,
universities/colleges, and practicing teachers. If care is not taken, this can lead
to duplication of effort, frustration of those under the mandates, and a lack of
cohesion and forward movement. Frequently the impetus for quality control is
exerted from an external source and is improved on at the local level.
Focus
On what does the quality control system focus? Several potential foci exist.
The focus could be on the quality of the preservice program. If that is the case,
major considerations are the definition of "program" and consensus on the
operational definition of "quality." For such topics as program improvement and
judgments about the program, program graduates are one focal point and their
performance on the job one measure of quality. Other possible foci include
faculty capabilities and performance, and student knowledge and skills at the
conclusion of the program. Some systems operate on multiple foci and it is not
uncommon to find systems where the foci are in conflict (i.e., formative
evaluations versus program judgment or summative evaluations).
When and where does one focus the quality control system? There are
several options: pre-enrollment, during the program, at exit from the program,
and after graduation. The ideal system is part of an on-going monitoring and
improvement effort. [See chapters by Ayers, Craig, and Rutherford for more
information.]
Personnel
Are those persons most directly affected by the quality control effort
involved in deliberation about the system? If they are not, there is likely to be
little ownership and hence little support for the system or for using the data
which result from its application. [See chapters by Craig and Rutherford.]
Who conducts the quality control process? Many efforts are conducted by
persons within the organization under scrutiny. Their findings may be subjected
to validation by an external team. The findings of an external team frequently
carry sanctions for non-compliance with a set of standards and a seal of approval
for compliance. Positive efforts are more likely when those involved in the
enterprise are initiating and developing the effort.
Standards
Are there standards upon which the quality control system is based? There
are several sets of standards, including those promulgated by accrediting agencies
Criteria
Are there clearly specified criteria for each of the standards? In general, the
answer is "No." Some criteria can be deduced easily from standards while other
standards yield only grudgingly what are speculative criteria at best. The level of
specificity differs within as well as across sets of standards.
Is there agreement among professionals about the operationalization of the
criteria? No. There is, for example, a great debate over what constitutes
pedagogical knowledge. The limited specificity and lack of agreement on
standards compounds the problem of developing criteria.
Are the criteria which have been used frequently comprehensive in scope?
No. For example, few existing systems for assessing teacher quality assess oral
communications skills, whether the person is comfortable with children, or the
person's philosophy about teaching, yet these variables are integral to the
teacher's performance.
Assessment
What are the most common means of collecting information for quality
control in preservice professional education? These are presented in full detail in
the other chapters of this book. Rudner discusses the evaluation of candidates
wishing to enter teacher education programs. Zimpher describes the evaluation
of field and laboratory experiences. McLarty provides a detailed listing and
description of instruments which are commonly used to measure outcomes.
Ayers and Craig discuss mail follow-up and follow-up evaluations, respectively.
Centra discusses faculty evaluation. Hearn describes governance and Pankratz
discusses the selection and evaluation of the knowledge base. Checklists and
guidelines accompany many of the chapters.
Implementation
o Delineate purpose
o Establish program goals
o Determine program standards
o Determine appropriate criteria for standards
o Obtain necessary instrumentation
o Develop mechanism and procedures for implementation
o Establish timelines for implementation
o Select focus for pilot effort
o Select and train team members
o Pilot test the system
o Revise system as appropriate
o Implement other aspects of system over time
o Compare performance to standards and make judgments about
compliance/non-compliance
o Obtain external validation of findings
o Evaluate quality control system
Several options for panel composition exist; they are dependent on the nature of
the task to be accomplished.
o measurement/assessment professional(s)
o local education agency administrator(s)
o practicing classroom teacher(s)
o quality control staff (if available)
o teacher educator(s)
o university/college administrator(s)
Professional Directions
SUMMARY
Lawrence M. Rudner
LMP Associates and American Institutes for Research
The chapter also provides the following practical tools for those involved in
admissions testing:
The most visible of the calls for recruiting talented and committed people
into the profession have stemmed from the reform reports of the 1980s. Reports
such as A Nation at Risk, (U. S. Department of Education, 1983);
Conditions of Teaching, (Feistritzer, 1983); Excellence in Our
Schools, (National Education Association, 1982); Tomorrow's Teachers,
(Holmes Group, 1986); and A Call for Change in Teacher Education,
(National Commission for Excellence in Teacher Education, 1985) have re-issued
the call for improved teacher education. The number and range of concerned
organizations appears to be at an all time high.
These reports postulate that if better candidates were recruited into teacher
education programs, better qualified teachers would graduate and pre-college
students would learn more. While recognizing the host of social realities,
ranging from lack of parental involvement in education to the decline in
enrollments in Teacher Education, these reports present a compelling case to
state, community and college policy makers. These policy makers have, in turn,
placed new pressures on SCDE.
NCATE
39. Incentives and affirmative procedures are used to attract candidates with
potential for success in schools.
40. Applicants from diverse economic, racial, and cultural backgrounds are
recruited.
41. A comprehensive system, which includes more than one measure, is used
to assess the personal characteristics, communications, and basic skills
proficiency of candidates preparing to teach (p. 44).
Not all SCDE are bound by the NCATE standards; non-NCATE institutions
appear to be equally concerned with recruitment and equity. A survey of 161
institutions showed similar responses between the two types of institutions on
items concerning admissions testing (Kapel, Gerber, & Reiff, 1988).
Issues regarding admissions test selection and use include supply and
demand, test content, basic skills and teacher effectiveness, cut scores, and types
of tests. Each is discussed in the following section.
Teacher testing grew during the late 1970s and early 1980s--an era of open
college admissions, surplus graduates in education, and declining student
enrollments in professional education programs. During this era there was
approximately one vacancy for every two teacher education graduates.
Educational organizations and many SCDE could afford tougher standards.
Testing as a means of selecting the most capable made a great deal of sense. In
many parts of the country, however, the era of surplus has ended. School
districts are beginning to experience increasing numbers of vacancies and SCDE
are experiencing declines in enrollments. The 1985 annual survey of college
freshmen (Astin, 1985) reported that only 5% of college freshmen were interested
in becoming teachers. Further, students with the greatest academic talent were
the least likely to choose teaching as a career. Yet it was during this era that
teacher testing programs were first implemented. In order to meet the demand for
teachers, very low standards were set. Rudner and Eissenberg (1988) pointed out
that the average passing scores on the National Teacher Examinations Basic
Skills examinations were set by the states to be 3 to 5 standard errors of
measurement below the scores recommended by standard setting panels. The
standard setting panels were comprised of testing and measurement experts who
indicated the score marginally qualified individuals would be expected to receive.
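To make the size of that gap concrete, the following sketch computes where a passing score set 3 to 5 standard errors of measurement (SEM) below a panel's recommendation would fall. The recommended score and SEM used here are invented for illustration, not actual National Teacher Examinations values.

```python
# Hypothetical illustration of the standard-setting gap described above.
# The recommended score and SEM are invented, not actual NTE values.

def adjusted_cut_score(recommended, sem, n_sems):
    """Passing score lowered by n_sems standard errors of measurement."""
    return recommended - n_sems * sem

recommended = 650.0  # hypothetical panel-recommended passing score
sem = 8.0            # hypothetical standard error of measurement

for n in (3, 4, 5):
    print(f"{n} SEMs below recommendation: {adjusted_cut_score(recommended, sem, n):.0f}")
```

With these invented figures, the operational cut scores fall 24 to 40 scale points below what the panels judged a marginally qualified candidate should receive.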
Traditionally, enrollment in teacher education programs has reflected the trends of
supply and demand. Shortly after periods of teacher surplus, enrollment declined,
Table 1
Summary for Testing for Admissions to Teacher Education Programs
Test Content
While basic skills tests are the most prevalent form of admissions tests,
interest in measuring other cognitive skills is also increasing. Kapel, Gerber, and
Reiff (1988) noted that most institutions consider social/emotional fitness in
teacher education admissions and that several, although not many, use formal
standardized instruments. Other forms of testing such as evaluation forms,
observation checklists, personality inventories, and biographical information
forms are also used. Marshall, Sears, and Otis-Wilborn (1988) provided an
excellent evaluation of the literature pertaining to admissions test content.
Selection of test content must be based on self-examination of a program
and what is expected of its students:
3. Does it matter if students enroll who are unsure about their commitment to
the profession? If so, then a career maturity inventory may be a necessary
prerequisite.
Cut Scores
[Advantages/disadvantages tables not recoverable.]
Practical Tools
The "practical tools" provided in this section include: (a) test evaluation
criteria, (b) a listing of sources of information about standardized tests, (c) a
description of several tests used by SCDE; and (d) a summary of some current
admissions requirements.
Test Evaluation
o Reliability
o Predictive validity
o Content validity
o Construct validity
o Test administration
o Test reporting
1. How were the samples used in pilot-testing, validation and norming chosen?
2. Are they representative of the population for which the test is intended?
6. Was the number of test-takers large enough to develop stable estimates with
minimal fluctuation due to sampling errors?
8. Do the difficulty levels of the test and criterion measures (if any) provide an
adequate basis for validating and norming the instrument?
3. What are the reliabilities of the test for different groups of test-takers?
5. Is the reliability sufficiently high to warrant the use of the test as a basis for
making decisions concerning individual students?
5. What is the basis for the statistics used to demonstrate predictive validity?
7. How accurate are predictions for individuals whose scores are close to cut-
points of interest?
3. What research was conducted to determine desired test content and/or evaluate
it once selected?
4. Were the procedures used to generate test content and items consistent with
the test specifications?
1. Is the conceptual framework for each tested construct clear and well-founded?
2. What is the basis for concluding that the construct is related to the purposes
of the test?
3. Does the framework provide a basis for testable hypotheses concerning the
construct?
Test reporting. The methods used to report test results, including scaled
scores, subtest results and combined test results, must be described fully along
with the rationale for each method. Test results should be presented in a manner
that will help schools, teachers, and students make decisions that are consistent
with appropriate uses of the test. Help should be available for interpreting and
using the test results.
Questions to ask are:
2. Are they clear and consistent with the intended use of the test?
3. Are the scales used in reporting results conducive to proper test use?
4. What materials and resources are available to aid in interpreting test results?
Test and item bias. The test must not be biased or offensive relative to
race, sex, native language, ethnic origin, geographic region or other factors.
Test developers are expected to exhibit a sensitivity to the ethnographic and
demographic characteristics of test-takers, and steps should be taken during test
development, validation, standardization, and documentation to minimize the
influence of cultural factors on individual test scores. Tests do not yield
equivalent mean scores across population groups; to expect them to do so would
inappropriately assume that all groups have had the same educational and cultural
experiences. Rather, tests should yield the same scores and predict the same
likelihood of success for individual test-takers of the same ability, regardless of
group membership.
Questions to ask are:
1. Were reviews conducted during the test development and validation process
to minimize possible bias and offensiveness?
3. What criteria were used to evaluate the test specifications and/or test items?
7. How were items selected for inclusion in the final version of the test?
10. Does the test predict the same likelihood of success for individuals of the
same ability, regardless of group membership?
11. Was the test analyzed to determine the English language proficiency required
of test-takers?
13. Should the test be used with individuals who are not native speakers of
English?
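Question 10 above can be examined empirically. One rough sketch of such a differential-prediction check, using invented scores and a hypothetical success criterion (e.g., later GPA), is to fit a separate least-squares line per group and compare the predicted criterion at a common test score; a large gap between predictions suggests the test does not predict the same likelihood of success across groups.

```python
# Minimal differential-prediction check (invented data, illustrative only).
# Fit one least-squares regression line per group, then compare predicted
# success at the same test score across groups.

def fit_line(scores, outcomes):
    """Slope and intercept of the least-squares regression line."""
    n = len(scores)
    mx = sum(scores) / n
    my = sum(outcomes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(scores, outcomes))
    sxx = sum((x - mx) ** 2 for x in scores)
    slope = sxy / sxx
    return slope, my - slope * mx

groups = {
    "Group A": ([50, 60, 70, 80], [2.0, 2.5, 3.0, 3.5]),  # hypothetical GPAs
    "Group B": ([50, 60, 70, 80], [2.1, 2.6, 3.1, 3.6]),
}

for name, (scores, outcomes) in groups.items():
    m, b = fit_line(scores, outcomes)
    print(f"{name}: predicted criterion at score 65 = {m * 65 + b:.2f}")
```

In this invented example the two predictions differ by only 0.10 criterion points at the same score, which would be weak evidence of differential prediction; real analyses would of course use much larger samples and formal significance tests.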
Finding the right test for a particular purpose can be quite difficult. The
evaluator must identify a variety of potentially useful tests, collect and review
technical materials, and identify and evaluate the practical considerations. This
section is designed to help with the first step--identifying useful instruments.
Books which contain lists of available instruments, reviews, and online
information retrieval systems are described below. These descriptions are taken
from the ERIC Digest Finding Information on Standardized Tests
(Rudner & Dorko, 1988).
Available Tests
Mitchell, James V. Jr. (ed.), Tests in Print III (TIP III): An Index to
Tests, Test Reviews, and the Literature on Specific Tests.
Buros Institute of Mental Measurements, University of Nebraska Press, 901
North 17th Street, Lincoln, Nebraska 68588-0520, (402) 472-3581, 1983,
714 pages.
Pletcher, Barbara P., Locks, Nancy A., Reynolds, Dorothy F., and Sisson,
Bonnie G. A Guide to Assessment Instruments for Limited
English Speaking Students. Santillana Publishing Company, New
York. Out-of-print. Available through ERIC Document Reproduction
Service, 3900 Wheeler Avenue, Alexandria, Virginia, (800) 227-3742, 1977,
223 pages.
Test Reviews
Several books provide in-depth, candid reviews of available tests. The best-
known are:
Keyser, Daniel J., and Sweetland, Richard C. (eds.), Test Critiques. Test
Corporation of America, Volume I, 1985, 800 pages; Volume II, Test
Corporation of America, Westport Publishers, Inc., 330 W. 47th Street,
Kansas City, Missouri 64112, (816) 756-1490, 1985, 872 pages; Volume
III, 1985, 784 pages; Volume IV, 1986, 768 pages; Volume V, 608 pages.
Identifying and searching test information can be quite simple for people
who have access to the online database system managed by Bibliographic
Retrieval Services (BRS), 1200 Route 7, Latham, New York, 12110, (800)
345-4277.
Test Descriptions
Commitment To Teaching
Psychological Measures
Interview Forms
Teacher education can benefit from a good match between student learning
style characteristics and program attributes. Not surprisingly, measures of
learning style characteristics are being considered in teacher education (Van Cleaf
and Schade, 1987). Caution is urged as these measures are often plagued with
low test-retest reliabilities and classification instability.
Myers-Briggs Type Indicator, by Consulting Psychologists Press,
Inc., 577 College Avenue, Palo Alto, CA 94306. Frequently used in business
settings, the MBTI examines preferences for extroversion or introversion,
sensing or intuitive perception, thinking or feeling judgment, and judgment or
perception.
Kolb Learning Style Inventory, by McBer and Company, 137
Newbury Street, Boston, MA 02116.
SUMMARY
This report outlined some of the issues surrounding admissions testing and
provided practical information. Sources of information about tests were
identified, several tests were described, criteria for evaluating tests were discussed,
and some current testing practices were identified.
Neither the author nor the staff at the Center for Teacher Education
Evaluation advocates the indiscriminate use of any test. Likewise, no specific
instrument or criterion is recommended for use in candidate selection. The
development of a defensible set of admissions criteria is a necessary step in the
overall process of improving teacher education programs.
Admissions testing can prove to be useful for screening applicants. Before
adopting a testing program, however, testing needs and goals should be carefully
identified. Tests should then be evaluated against those needs and goals. Once
the test is selected, meaningful cut-scores should be established.
Before the needs and goals of an admissions testing program can be
identified, the needs and goals of the teacher education program itself must be
identified and prioritized, and a plan developed for addressing them. The
Accreditation Plus Model for teacher education evaluation was developed in
response to the understanding that the design of an educational program cannot
be a linear or sequential task. Rather, it is possible to begin the process at any
one of a number of points and move through all phases of the design process.
REFERENCES
Nancy L. Zimpher
The Ohio State University
This review, however brief, of the many issues related to field experiences
can be viewed in the context of the synthesis pieces prepared on early field
be built up" (p. 15). This conceptualization differentiates among the habits that
help the teacher become thoughtful and alert as a student of teaching as opposed
to those which make the teacher immediately proficient in terms of technique,
but not necessarily reflective about teaching. As a consequence, Dewey
distinguished the outcomes in field experiences related to a conception of field
experience as an apprenticeship (having a utilitarian character) or field
experiences as laboratory (fostering personal inquiry and reflection). Zeichner
(1983) drew on this set of distinctions to present a continuum of alternative
paradigms of field experience in teacher education. He contrasted an
instrumental perspective of learning to teach with designing field experiences for
purposes of reflective thinking as epitomized by Dewey's distinction of the
laboratory approach. Zeichner referred to this approach as "the inquiry-oriented"
laboratory paradigm. This conceptual approach to field experiences provides a
background for an analysis of field experiences as programmatic experiences,
which follows.
From these three different aspects of teacher education, Griffin derived a set
of "program features" that could define the nature of field experience programs.
the distinctive elementary teacher education programs which they studied. Drawn
from these conceptualizations, four critical features in the design, conduct, and
study of programs of field experiences are presented below.
To understand and review these four clusters of program attributes, one must
understand why it is proposed that field experiences be examined more
programmatically. In part, the need for a more programmatic approach to field
experience stems from deficiencies in teacher education programs as identified in
follow-up studies. In a number of these studies (Drummond, 1978; deVoss,
1978, 1979, 1980; Loadman, 1983), students noted deficiencies in
their own teaching ability which they attributed to a lack of treatment of those
competencies in their teacher education programs. Particularly noted were those
technical problems related to classroom discipline, classroom management, the
ability to effectively motivate students, and interactions with parents and other
members of the school community. In an analysis of research on teacher
education, Koehler (1985) discussed a series of problems associated with
beginning teaching which were identified by students who had recently graduated
from teacher education programs. Students attributed their deficiencies to a lack
of effective treatment of these concerns in their teacher education programs. As
Koehler observed, these are issues which are clearly placed in the curricula
typical of teacher education programs, but which appear to present content in the
form of answers to questions not yet asked or posed by students. This led
Koehler to observe that in teacher education we experience a "feed-forward
problem": we give students information that, out of context, they have little
motivation to learn; then, when in context (that is, in the classroom), they
cannot recall it or, in some instances, claim it was never covered in their program.
Carter and Koehler (1987) posited that what would be more helpful to
students in the acquisition of knowledge about teaching might be "event
structured knowledge" wherein knowledge about teaching and learning could
become integrated by prospective teachers who acquire knowledge and skills in
the context of classroom events and then process the effects or problems
encountered in the utilization of these skills to approximate utilization of the
skill in the classroom setting. Educators must begin to develop a rationale for
thinking more coherently about field experiences. Field experiences must be
integrated with total programs of teacher education, and it is to this integration
that the four program attribute clusters are directed in the next section.
Barnes (1987) made the case for designing programs and goals for programs
against conceptualizations to achieve desired program outcomes. In this sense
the conceptual framework presents the "program's assumptions, philosophy and
research base and outlines the implications of that knowledge for teaching" (p.
14), and then builds throughout the program repeated themes which extend,
illustrate and elaborate the conceptual framework. Thus, a critical feature of field
experiences is that these field experiences link in a purposive way to the central
conception of the teacher education program. This informs students, cooperating
teachers, and university supervisors of the critical and over-riding
conceptualization of teaching and learning being adhered to in the program.
Such a conceptualization could be, as Griffin proposes, an analytic and reflective
orientation. A host of other conceptions about teaching and learning could be
fostered either simultaneously or with one having priority over another.
Illustrations of these conceptions could be, metaphorically, the notion of teacher
as: (a) executive manager, (b) artisan, (c) decision maker or problem solver, or
(d) reflective practitioner.
The second feature critical to the development of field experience programs
relates to contextual issues. Griffin (1987) and Zeichner (1987) posit that field
experiences must be attended to within the context of the setting, or be what
Griffin calls "context-sensitive." The selection of a field site, the nature of
teaching and learning in that environment, the climate of the classroom, the
nature of the student population, the perspectives of the staff, the philosophy of
leadership embodied by the principal and other teachers, and the conditions of
schooling in a particular school district, city or culture, impact on the nature of
the field experience. Thus, in an analysis or assessment of field experiences, the
evaluator must take into account dimensions which describe the nature of the
field experience from a contextual or ecological perspective.
The third critical feature is that programs should be organized around clear
goals and objectives. The underlying assumption must be that one goal of the
program is to create within students dispositions toward the acquisition of
knowledge about teaching and the subsequent utilization of that knowledge.
This is accomplished, not exclusively from a skill orientation, but through a
disposition to use what we know about teaching in order to be productive in the
classroom. Griffin (1986) acknowledges that programs should be built around
the knowledge base on teaching. The Howey and Zimpher (1989) treatment of
this concept focuses on the extent to which programs of teacher education,
including field experiences, are designed with knowledge of the empirical bases
for understanding teaching and learning. These include the research on teaching,
the research on effective schooling, and the research on a host of mediating
variables that impact on the nature of teaching and learning in schools. The
degree to which field experiences focus on a clear empirical or knowledge base
for teaching should be a part of the analysis, evaluation, and assessment of the
nature of these field experiences in teacher education programs.
Finally, and importantly, programs are (a) developmental and ongoing
(Griffin, 1987) and (b) articulated and reiterative (Howey and Zimpher, 1989).
Throughout the construction of the articulated and integrated programs (in terms
of individual courses, blocked experiences, and laboratory and clinical
This is a planned repetition rather than program redundancy and would allow an
evaluator to look at the nature of field experiences and look for the reiterative
and reinforcing concepts that are ongoing and developmental throughout the
program.
These four program attributes or features provide a backdrop for a further
analysis of the nature of field experiences, with an eye toward how best to
evaluate these experiences.
The design for the evaluation of field experiences proposed in this section of
the chapter draws heavily on two sources:
formal judgments are made about teacher candidates after they graduate from
teacher education programs and become practicing teachers. Each of these
systems has limitations, primarily with regard to the absence of more formative
data that would inform how students experience programs and how programs and
students are changed as programs progress. The system proposed herein is
multifaceted, and requires cumulative data gathering and analysis throughout the
student's matriculation in the program and takes a practice-oriented
developmental form. In this instance the system is organized against a set of
expectations, as follows:
o System findings must be the result of multiple and triangulated data inputs
and analyses.
o The system must provide for sequential and longitudinal data collection,
analysis, and usage.
o The system must be legally responsible (Zimpher & Loadman, 1985, pp. 11-12).
strategies to the selection and organization of the curriculum and structure of the
classroom. The knowledge base which undergirds the field and clinical
experiences from this perspective is a set of technical guidelines derived from
empirical knowledge about teaching and from a host of studies about effective
teaching. [See Pankratz's chapter for additional information on this topic.]
Clinical competence refers to problem solving and inquiry in the
classroom and provides a perspective for students in field experiences to think
about classroom activity through a problem solver or clinician framework. This
perspective of reflective action relies in part on personal theories and the
integration of theoretical and practical knowledge.
Personal competence emerges from:
1. Program evaluation designs for field experience allow for the collection of
data incrementally throughout the field experience program, from earliest
experiences through the completion of student teaching and internship.
2. Data collection moves from data which are largely documentary and
descriptive to data which are analytical and competency oriented, and
ultimately to narratives which reflect the rich ecology of classrooms in
which field experiences occur and acknowledge the context-sensitive nature
of field experiences.
This section of the chapter is organized against the conceptual framework for
teacher competence explicated above including technical, clinical, personal, and
critical competence. Provided here are lists of representative texts and materials
that could be used by students in self assessment of growth and development
relative to teacher competence and also used as vehicles for formative and
summative program evaluation of field experiences. Included are:
o Inventories,
o Questionnaires,
o Cases useful for individual analysis and assessment, and
o Self-inventories.
These materials can be viewed as a data base for assessing student progress in
field experiences and a basis for an inquiry-oriented and reflective disposition
toward teaching among teacher candidates in field experiences. The letters
following each citation refer to the competence(s) the reference addresses (i.e.,
TC refers to Technical Competence, CLC refers to Clinical Competence, CRC
refers to Critical Competence, and PC refers to Personal Competence).
Sources of Self-Inventories
Shulman, J. H., & Colbert, J. A. (1988). The mentor teacher case book.
San Francisco: Far West Laboratory for Educational Research and
Development, ERIC Clearinghouse on Educational Management and
ERIC Clearinghouse on Teacher Education. (CLC)
Shulman, J. H., & Colbert, J. A. (1988). The intern teacher case book.
San Francisco: Far West Laboratory for Educational Research and
Development, ERIC Clearinghouse on Educational Management and
ERIC Clearinghouse on Teacher Education. (CLC)
The two Shulman and Colbert casebooks present a series of cases about
individual practice and experience as well as guided analyses of the cases,
inventories, and discussion questions relative to the cases. They provide
descriptions of practice which can be used for charting progress of field-
experience students.
REFERENCES
Zimpher, N., deVoss, G., & Nott, D. (1980). A closer look at university
student teacher supervision. Journal of Teacher Education, 31(4),
11-51.
Zimpher, N., & Howey, K. (1986). Adapting supervisory practices to different
orientations of teaching competence. Journal of Curriculum and
Supervision, 2(2), 101-127.
Zimpher, N., & Loadman, W. (1985). A documentation and assessment
system for student and program development (Teacher Education
Monograph No.3). Washington, DC: ERIC Clearinghouse on Teacher
Education.
8
Joyce R. McLarty
American College Testing Program
The performance of graduates is the single most important thing one can assess
in a teacher education program. It is good to have well-qualified faculty. It is
good to have a low student-to-faculty ratio. It is good to have well-equipped
facilities, a well-stocked library, and access to good student-teaching situations.
It is good to have well-qualified incoming students and a well-designed, carefully-
articulated instructional program. But none of that matters if the graduates
produced do not have the skills and abilities to become good teachers.
All the attention to input and process in the world cannot guarantee good
outcomes. All it can do is improve the likelihood of obtaining them.
Therefore, it is essential to attend to student outcomes, to observe them and
document them, and to use the information to improve the performance of future
students. No assessment is more critical to the success of a teacher education
program.
This chapter offers an approach to developing a student performance
assessment which is tailored to the individual institution's circumstances and
goals. The approach begins with identification of the goals the institution has
for its graduates, continues through selection of assessment strategies focused on
these desired outcomes, and moves to selection or development of assessment
instruments. The closing discussion addresses the question of risks and the
attribution of performance outcomes. Throughout the chapter, the implications
of both theoretical and practical concerns are noted.
GOAL IDENTIFICATION
Harris (1986, p. 13) suggests that goal development begins with three key
questions:
o What profiles of your alumni do you have, or can you develop, in terms of
achievements as career accomplishments, life-styles, citizenship activities,
and aesthetic and intellectual involvements?
o are knowledgeable about their content and the strategies for teaching it,
1. Knowledge of duties
a. those specified elsewhere in this listing
b. applicable school laws and regulations
c. school expectations
2. Knowledge of school and community
a. community expectations
b. community context and environment
3. Knowledge of subject matter
a. subject specialization
b. literacy skills
4. Ability to provide instructional design
a. course design based on curriculum requirements and student
characteristics
b. selection and creation of instructional materials
c. competent use of available resources
d. evaluation of course, materials, and curriculum
e. knowledge of the needs of special students
f. ability to use human resources
5. Ability to gather information about student achievement
a. testing skills
b. grading knowledge including grading process and grade allocation
6. Providing information about student achievement
a. to students
b. to administrators
c. to parents, guardians, and other appropriate authorities
7. Classroom skills
a. communication skills
b. management skills
i. discipline (control of classroom behavior)
ii. achievement (coverage of required content)
iii. emergencies (e.g. fire, first aid)
8. Personal characteristics
a. professional attitude
b. professional development
27, Precondition 6.2). Once the preconditions are met, NCATE has additional
standards and criteria which apply to program completion. These are reproduced
below. Process requirements such as these should be considered in the design of
the outcome assessment process, but are not themselves outcome requirements.
The unit ensures that the academic and professional competence of education
students is assessed prior to granting recommendations for certification and/or
graduation.
51. Evaluation systems that assess the academic and professional competence
of students include multiple sources of data (such as standardized tests,
course grades, and performance in classroom or school settings).
52. The application of a published set of criteria that specify acceptable levels
of performance for exit from all professional education programs is
monitored (1987, p. 46).
Not all program goals are amenable to formal assessment. Many legitimate
and critically important goals are simply beyond the scope of current assessment
technology. While it is important to identify a broad spectrum of desired
outcomes for the program, not all of these will be appropriate assessment
targets. Consider, for example, the list of goals in Table 1. This list could have
been produced by many teacher preparation programs. It is a mixture of
informational (knowledge), attitudinal, and skill-oriented goals. Some are stated
broadly; others are somewhat more narrow. Some are clearly covered within the
span of the instructional program while others may better be thought of as
entrance requirements or as skills to be developed after program exit (e.g., during
the teaching internship). As currently worded, many of the goals do not appear
to be measurable.
Table 1
Sample Performance Goals for Teacher Education Graduates
Once individual goals have been selected, the set of goals should be
reexamined to ensure that the balance of coverage of goal areas is appropriate. It
is not uncommon to unintentionally focus on measuring those aspects of
program outcomes which are most easily measured. It is important to determine
what is not being assessed as well as what is being assessed, and to ensure that
the emphasis portrayed in the set of assessment goals is an acceptable
representation of the program's desired outcomes.
Prioritize Goals
Since the program's goals are likely to exceed the program's resources for
assessing them, it may be necessary to establish priorities among the goals
selected for assessment. Focusing on the characteristics of the individual goals
and the probable benefits of the assessment data to the program can be helpful in
determining priorities among the performance outcomes. The following
questions may help to focus priority issues:
o How unique will the data be? Will they duplicate or reinforce existing
information, or will they make a unique contribution?
o What decisions will be informed by the data? When? How critical are they?
o Are there any requirements that dictate this type of assessment? What are
they?
Types of Instrumentation
may be considered. It is beyond the scope of this chapter to describe and provide
directions for constructing the many types of instrumentation a teacher
preparation program may wish to consider. Fortunately, there are many
references and experts available to provide assistance.
2. assessment as learning,
Risks
So far, the focus of this chapter has been on the use of information from
performance outcomes assessment to improve the performance of future students.
A common approach is to provide faculty with feedback so that they can
o On what scale will the data be reported, and how will that be determined?
o If data must be aggregated across different scales, how will the conversion be
handled (equipercentile, z-score)?
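Of the two conversions mentioned above, the z-score approach is the simpler to sketch: each scale is standardized against its own mean and standard deviation so that scores from differently scaled instruments land on a common metric. The data below are hypothetical:

```python
from statistics import mean, pstdev

def to_z(scores):
    """Standardize a list of raw scores to z-scores (mean 0, sd 1)."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical raw scores on two differently scaled measures.
essay = [3, 4, 2, 5, 4]        # 1-5 rubric
exam = [68, 75, 60, 88, 79]    # percentage scale

# Once standardized, the two can be aggregated per student.
combined = [(a + b) / 2 for a, b in zip(to_z(essay), to_z(exam))]
```

Equipercentile equating is more involved (it matches percentile ranks across the two distributions) and is usually worth the extra effort only with large samples.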
Some instruments that are appealing from a content and credibility standpoint
may nonetheless prove intractable under some implementation requirements. Table 2
provides a sample performance outcome assessment plan which incorporates
standardized testing where a minimum competency criterion is required.
Table 2
Sample Performance Outcome Assessment Plan
Some types of data which may be useful in interpreting and utilizing the
primary outcome information are presented below.
Student Data
Age (returning student?)
Sex
Race/ethnic background
Is English the primary language?
Previous educational experience (transfer student?)
Previous work experience (any teaching?)
Subject area/grade level specialization
Specific field experiences (supervising teacher, student grade level(s), subject
areas)
Courses taken, grades received
Current work, if any (hours per week) or other outside obligation
Program Data
Faculty (level, training, experience in course taught)
Texts used
Field experiences offered (type, duration, adequacy of supervision and
feedback)
Table 3
Possible Quality Assurance Efforts
Goal: Subject-area knowledge
  Measure (source): Course grades (faculty)
  Quality assurance: Ask student to verify; compare with transcript.

Goal: English speaking and writing
  Measure (source): Oral/written test (student)
  Quality assurance: Retest student and compare scores; ask faculty from the
  English or ESL department to score; compare with observation. If scored by
  multiple observers, calculate inter-rater consistency.

Goal: Direct-instructional skills
  Measure (source): Classroom observation (student)
  Quality assurance: Use trained observers; use two at the same time and
  compare scores. Relate scores to pupil gains in learning.
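The inter-rater consistency suggested in Table 3 can be estimated in several ways; a simple one is percent agreement between two observers, optionally corrected for chance (Cohen's kappa). A sketch with hypothetical ratings:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of cases on which two raters gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters."""
    n = len(r1)
    po = percent_agreement(r1, r2)          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement if both raters assigned categories at random
    pe = sum(c1[c] / n * c2[c] / n for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

# Two hypothetical observers rating the same eight lessons
# as satisfactory (S) or unsatisfactory (U).
obs1 = ["S", "S", "U", "S", "U", "S", "S", "U"]
obs2 = ["S", "S", "U", "U", "U", "S", "S", "S"]
```

Kappa is the more defensible figure to report, since raw agreement can look high merely because one category dominates.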
CONCLUSION
Many reasons are given for avoiding the assessment of program outcomes.
The following arguments are among the most frequently cited:
REFERENCES
John A. Centra
Syracuse University
What is Evaluated?
Teaching, research, and service are the major functions of higher education,
but the emphasis assigned to each of these in evaluating the performance of
individual faculty members varies across institutions. Expectations and the
relative value accorded each function also vary situationally. At some
institutions, scholarly productivity is clearly a faculty member's primary
responsibility; at others excellence in teaching is weighted more heavily. A fair
evaluation system begins with communicating what will be evaluated and what
criteria will be used to make judgments. In this chapter, the evaluation of a
faculty member's teaching will be considered as a function of: (a) student
learning, (b) student evaluations of faculty, (c) self-evaluations, (d) colleague or
peer ratings, and (e) evaluations by committees and administrators.
The National Council for Accreditation of Teacher Education (NCATE)
Standard IV. D. reads, "The unit implements a faculty evaluation system to
improve faculty teaching, scholarly and creative activities, and service" (1987, p.
48). For this reason, research and service will be discussed, although in less
detail than teaching.
Evaluation is used to direct the improvement of performance and to guide
personnel decisions. Evaluations which are conducted to improve performance
are called formative; evaluations which are made for personnel decisions are
called summative. Planners of teacher education programs must be concerned
with formative evaluation. That is not to say that evaluation for personnel
decisions plays no role in teacher education programs, but program improvement
is the cornerstone of the Accreditation Plus Model and thus is the focal point for
this chapter.
Student Evaluation
completing their degrees (several years after completing the courses) were quite
stable. Marsh (1977) found agreement between recent graduates' and current
students' evaluations of the same instructors teaching the same courses.
1. course organization/planning,
5. examinations/grading,
6. student involvement,
7. assignments/readings, and
8. instructor enthusiasm/dynamism.
o Instructor rank,
o Instructor or student personality,
o Instructor or student gender,
o Student college year,
o Class size,
o Expected (or actual) grade,
o Reason for taking a course,
o Purpose of ratings, and
o Academic discipline.
Self Evaluations
Colleague Evaluations
Committee Evaluations
of student learning is necessary, but is difficult to apply since one needs the right
situation and the proper controls as described previously. Formative evaluation
of student ratings can be useful for some teachers, but the changes may not be
overwhelming and it is not always evident what to do about poor ratings. With
summative evaluation of student ratings there is the possibility of some bias, so
one needs proper controls for collecting and interpreting data. Summative
evaluation also uses accumulated ratings across courses and years. Self-analysis
can be useful and video/audio feedback helpful with formative self-evaluation,
but with summative self-evaluation, self-ratings are not very useful, and an
activities report is essential, as are materials submitted by the teacher.
Formative colleague ratings can be helpful since they provide informal feedback,
whether based on classroom visits or not, but they depend largely on the skill
and knowledge of the colleagues. Summative colleague ratings of classroom
practices tend to be biased and unreliable; however, peer evaluations of
knowledge of subject, course outlines, texts, and student performance, could be
useful periodically. Formative alumni ratings about the curriculum and other
college experiences can be useful in program and institutional evaluation. On
the other hand, summative alumni ratings are difficult to obtain in a systematic
and reliable manner. They correlate highly with ratings by current students, so
in most instances they would not add much new information.
Assessment of Research
tenure or upper level promotions are being considered. Cole and Cole (1967) and
Braxton and Bayer (1986) identified three shortcomings of citations as
performance indicators:
3. The significance of the work may not have been recognized by the author's
colleagues.
A tally of publications and presentations, alone, should not serve as the sum
total of a person's scholarly endeavors. Boyer (1987) remarks that measures of
scholarly activities could include a faculty member being asked to author or
review a text book or react to a recent development in his or her field.
Assessment of Service
The third area of faculty evaluation is service, and, as with research, the
weight service carries in an evaluation will vary with the institution. Miller
(1987) defines professional service as activities "such as participating or holding
office in professional associations and societies and to professional status as
viewed by oneself and by others" (p. 65). Public service, he notes, includes
"applied research, consultation and technical assistance, instruction, products, and
clinical work or performance" (p. 66).
In a recently released book, the Joint Committee on Standards for
Educational Evaluation describes the Propriety Standard for service orientation as
the promotion of "sound education principles, fulfillment of institutional
missions, and effective performance of job responsibilities, so that the
educational needs of students, community, and society are met" (1988, p. 21).
The book addresses personnel evaluation in terms of utility, feasibility,
propriety, and accuracy.
CONCLUSION
The basic principles of faculty evaluation are no different from those which
guide assessment in other fields. This chapter has focused on the need to base
assessments on multiple sources of information whose validity is supported by research.
Assessment information can and should also be used to help faculty overcome
weaknesses and build on strengths. For personnel decisions it is also necessary
to review the evidence at several levels and to follow legal and ethical procedures.
Additional information about legal issues is found in Curcio's chapter at the end
of this book and readers will certainly want to read Rutherford's chapter on
utilizing evaluations.
APPENDIX
REFERENCES
Jerry B. Ayers
Tennessee Technological University
The survey is one of the oldest research techniques used in the social sciences and
education. Survey data are generally gathered by use of one of three techniques
(or some combination of the three). These three methods are mail surveys,
personal-interview surveys, and telephone surveys (Kerlinger, 1967). The mail
survey is widely used to gather follow-up data for improvement of teacher
education programs (Adams & Craig, 1981; Ayers, 1979; Ewell & Lisensky,
1988; Isaac & Michael, 1981; Villeme & Hall, 1981). Craig addresses the
broader spectrum of follow-up evaluation in the following chapter; therefore, the
major focus of this chapter will be on the design, development, and use of mail
follow-up surveys.
Follow-up Surveys
Design Constraints
National Database
pilot tests, the instrument was revised and long and short versions were
generated.
The instrument requests graduates to provide information in six broad areas
including: ratings of preservice program quality, knowledge and understanding of
program content, adequacy and source of development of teaching, employment
history, background information, and perceptions of the goals and
responsibilities of teachers.
Mailing Process
and can also provide a preliminary indicator of where problems might lie in the
completion of the questionnaire. In the future, the questionnaire could be
modified in such a way that returns can be increased.
Number of Mailings
The issue of increasing the rate of return for questionnaires and how to deal
with non-respondents has been frequently discussed in the literature. McKillip
(1984) advocated the application of attitude theories to the return of mailed
questionnaires. He has explored four theories of attitude measurement for
increasing the rate of return of mail follow-up questionnaires. Altschuld and
Lower (1984) described some factors that have increased the return of mail
follow-up questionnaires to 96%. Hogan (1985) found little difference in the
results of follow-up questionnaires that had high and low response rates. Boser
(1988) explored the problem of whether respondents are representative of the
total group. The results of her work indicated that there were no differences
between respondents and non-respondents.
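A check like Boser's can be approximated locally: compare respondents and non-respondents on characteristics that are known for every graduate, regardless of whether they returned the questionnaire. A minimal sketch with hypothetical records:

```python
from statistics import mean

# Hypothetical records: (graduating GPA, returned the survey?)
graduates = [(3.2, True), (2.8, False), (3.6, True), (3.0, True),
             (2.5, False), (3.4, True), (2.9, False), (3.1, True)]

respondents = [gpa for gpa, returned in graduates if returned]
nonrespondents = [gpa for gpa, returned in graduates if not returned]

# If the two groups look alike on characteristics known for
# everyone (here, GPA), respondents are more plausibly
# representative of the whole graduating group.
gap = mean(respondents) - mean(nonrespondents)
```

With real data, the same comparison would be run on several known characteristics (grade level taught, certification area, employment status), and a formal significance test applied where sample sizes allow.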
Data processing and analysis are dependent upon the use that will be made of
the results of the survey. Therefore, the original objectives of the survey need to
be examined to determine the specific type of analysis needed. The Appendix of
this chapter includes a suggested checklist for preparing a final report.
The most widely used statistic derived from surveys is the frequency count
(counting the replies and presenting them in tabular form). Data are tabulated
either by hand or by computer. Frequency counts serve as the base for all other
data analysis.
A complete set of descriptive statistics should be calculated for the data.
Correlations across various data sets may also be of value. If data have been
collected from previous years or from other programs or institutions, inferential
statistical techniques may be used. A variety of standard research texts designed
for the social sciences and education can provide assistance in this area
(Babbie, 1973; Erdos, 1983; Isaac & Michael, 1981; Kerlinger, 1967).
Data analysis by computer is common. Microcomputers provide easy
access to computing capabilities for constructing frequency
distributions and other descriptive statistics. A number of programs are
available for this purpose.
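As an illustration of these steps, the sketch below tabulates a single survey item; the ratings and the 1-5 scale are assumed for the example, not drawn from any actual survey.

```python
from collections import Counter
from statistics import mean, median, stdev

# Hypothetical responses to one survey item rated on a 1-5 scale
# (1 = poor ... 5 = excellent); the data are illustrative only.
ratings = [4, 5, 3, 4, 4, 2, 5, 3, 4, 5, 4, 3]

# Frequency count: the base for all other data analysis.
freq = Counter(ratings)
for value in sorted(freq):
    print(f"rating {value}: {freq[value]} responses")

# Basic descriptive statistics for the same item.
print(f"mean      = {mean(ratings):.2f}")
print(f"median    = {median(ratings)}")
print(f"std. dev. = {stdev(ratings):.2f}")
```

The same tabulation, repeated for each item and each graduating class, provides the base tables that correlational or inferential analyses would build upon.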
Reporting of Results
Reporting the results of mail follow-up studies is one of the most important
aspects of follow-up evaluation, yet frequently it is not given sufficient
attention. In order for the information to be useful in program improvement and
in redesign efforts, it must be put into the hands of the faculty and administrators
in a concise form. The typical report should contain a description of the purpose
of the survey, the method employed, a summary of the findings, conclusions,
recommendations, a summary of the tabulations and calculations, and a copy of
the survey instrument.
Other information can be included at the discretion of the researcher. Copies
of the report and all of the original data sheets should be kept in order to develop
a longitudinal study of the graduates of the institution. Longitudinal studies are
of importance in improving teacher education programs.
Face-to-Face Surveys
Telephone Surveys
home, rules out the advantages of the face-to-face interview, and the logistics of
the situation can be difficult.
SUMMARY
Surveys are among the most widely used techniques in education and the
social sciences for gathering data for program change and improvement. The mail
survey is probably the most common technique for gathering information from
graduates in order to obtain self-reports of their perceptions of the teacher
education programs in which they were trained. The mail survey allows the
collection of a set of data which ranges from frequency counts of occurrences of
events to attitudes and opinions about a particular program. This information,
in turn, can be used in program improvement and redesign efforts.
This chapter provided suggestions for the development and use of various
types of instruments, a bibliography of references, information on commercially
available instruments, and a checklist to be used in survey work. The
information in this chapter will be of use in the development and
implementation of mail follow-up surveys.
APPENDIX
This section presents a set of criteria which can be used in the evaluation of
mail follow-up surveys. The criteria were extracted from a variety of sources and
are divided into several sections including survey design, survey instrument, data
gathering, data processing, and reporting. Each section can be used as a separate
checklist of items that should be carried out in order to conduct a mail survey
that will provide useful data for program improvement and redesign.
Survey Design
Survey Instrument
2. Does the instrument include the appropriate demographic questions that will
allow for continuing contact with the subjects?
4. Are all of the questions written such that the respondent can be expected to
answer without guessing?
11. Are the directions clear and the instructions helpful in getting precise
answers?
13. Did the teacher education faculty have input into the questions' construction?
18. Was the total instrument of such a length that it could be completed in a
reasonable time by the respondents?
19. Do the questions reflect the objectives of the teacher education programs?
Data Gathering
Data Processing
6. How were the calculations (e.g., percentages, means, and medians) made?
Reporting
3. Does the report list the date of publication, title of the project, and the
sponsoring agency?
5. Does the report describe clearly the objectives and limitations of the study?
6. Does the report describe clearly the methodology used in the study?
10. Are the tabulations for all questions included in the report?
11. If the answer to Question 10 is no, why were some results omitted?
13. Are all tables clear and readable, with appropriate titles?
16. Is the report useful to the faculty responsible for revising teacher education
programs?
REFERENCES
James R. Craig
Western Kentucky University
When these factors are taken into account, and only then, can truly effective
follow-up evaluation systems be designed and implemented. To ignore these
factors is to squander resources and produce data that will be of minimal use, if it
is used at all. To incorporate these factors is to enhance the likelihood that
follow-up evaluation will provide meaningful and timely feedback that will be
used to create more effective and efficient teacher education programs.
studies in teacher education in the last two decades can perhaps be traced to the
1968 revision of the standards by the National Council for Accreditation of
Teacher Education (NCATE). The new standards emphasized follow-up
evaluation. Sandefur's (1970) monograph on follow-up evaluation has been
widely used for teacher education evaluation. His model was based on an
outcome-oriented and competency-based approach. The continued emphasis on
follow-up evaluation has been reinforced recently by the NCATE; its revised
standards (1987) call for follow-up evaluation as a criterion for compliance. The
widespread use of follow-up evaluation in teacher education programs was
documented by Adams and Craig (1981) in a survey of 445 respondent
institutions affiliated with the American Association of Colleges for Teacher
Education (AACTE). Adams and Craig reported that 86% of the sample
indicated that they conducted follow-up evaluations of their programs using
questionnaires mailed to their graduates. [Refer to the previous chapter for
specific information on mail follow-up studies.] Interviews and direct
observations were also reported as being used, but much less frequently. Despite
widespread practice, however, the conduct of follow-up evaluation studies in
teacher education has been criticized.
After reviewing 26 evaluation studies, Katz, Raths, Mohanty, Kurachi, and
Irving (1981) raised issues about the validity and the usefulness of follow-up data
based on surveys of graduates. Katz et al. (1981) questioned the selection of
survey participants, the representativeness of the sample, the conclusions drawn
from the follow-up data, and the timing of follow-up data collection. They
reported what they believed to be sampling bias in response rates, as well as
recommendations for change so obvious and global that program faculty probably
would not, and given their vague and general nature could not, act upon
them. Thus, Katz et al. concluded that under current conditions,
there is little reason for conducting follow-up studies, especially when using
questionnaires. They suggested that enough rival hypotheses and explanations
can be generated from an evaluation report to render it virtually useless for
program development. Adams, Craig, Hord, and Hall (1981) agreed with Katz et
al. that practice in follow-up evaluation is narrowly conceived. However,
Adams et al. went on to argue that by focusing on follow-up questionnaires
only, Katz and her colleagues presented a distorted view of the variety of methods
actually employed in the evaluation of teacher education programs: other
procedures (e.g., employer interviews, direct observation) have been used, albeit
much less frequently. Perhaps the most important lesson to be learned from the
many years of practicing follow-up studies in teacher education is that the social
context in which the follow-up is being conducted is one of the primary
determiners of its form and substance.
The social context within which follow-up evaluation is conducted sets the
parameters that frame decisions relative to resources, operational procedures, and
other evaluation issues. In particular, four aspects of the social context associated
with follow-up evaluations must be considered and understood in order to design,
implement, and operate effective follow-up evaluation systems. These are
described in the following section.
Personal/Professional Relationships
Teacher education programs change over time and, therefore, what was
appropriate and acceptable for follow-up evaluation at one time may not be at
another. Laws are passed, new research is published, faculty retire, etc. Any or
all of these may necessitate a revision of the program of study and/or the
development of new courses. Such changes typically necessitate changes in the
design and/or content of the follow-up evaluation procedures. Unlike research
designs which attempt to maintain constancy of conditions throughout the
course of the data collection process, evaluation designs must be flexible and
adaptable to meet ever changing program conditions and information needs.
Perhaps the most important aspect of the social context which influences
follow-up evaluation is the values held by the various audiences involved in the
evaluation process. Briefly, values are positive and negative feelings that people
have and project onto other people, objects, and situations (Williamson,
Swingle, & Sargent, 1982). As such, values are relatively permanent cognitive
structures that have a pervasive effect on an individual's behavior. In the context
of follow-up evaluations, values are a major force in the formation, operation,
and evaluation of programs (Wortman, 1975) and in the interpretation of
evaluation and program outcomes (Gorry & Goodrich, 1978). Because values are
relatively resistant to change, they tend to determine the direction and operation
of a program regardless of the nature and extent of the evaluation information
made available. This means that while both administrators and faculties aspire
to follow the Dogma of the Immaculate Perception (Kaplan, 1964) and try to
conduct value-free evaluation, in reality it is not possible. Recognition of the
prominent role that values play in the evaluation process provides a basis for
understanding how non-evaluation factors influence the purpose, use, and
design of follow-up evaluations.
PURPOSES
1. Accountability.
2. Improvement.
3. Understanding.
4. Knowledge production.
Accountability
Improvement
Understanding
Knowledge Production
Use Defined
The definition of use typically advanced has been that use occurs when
follow-up data are directly employed in objective, observable ways in program
modification and operation (Cohen, 1977; Mathis, 1980; Patton, 1978).
However, such a definition does not take into account the fact that decision
making often is based more on the values held by the decision makers than on
the information available to them (Harper & Babigan, 1958). The use of follow-up
data in program decision making is almost always diffuse in nature and is not
always directly observable (Patton, 1978). Use of follow-up data is an iterative
process which focuses on the assimilation of evaluation information into the
decision making process (Craig & Adams, 1981).
Follow-up evaluation systems can be structured to use the data they provide
by building in mechanisms to increase the systematic inclusion of follow-up
data in program decision making (e.g., Akpom, 1986; Covert, 1987; Craig &
Adams, 1981; Nowakowski, 1985). First and foremost, policy makers,
administrators, and program personnel at all levels must be actively involved in
the organization and implementation of the follow-up evaluation. Involvement
begins with the chief administrative officer responsible for the program but
extends to all individuals who are involved in the program and who have a stake
in its operation. The particular procedures by which this involvement is
accomplished can vary (see Havelock & Lindquist, 1978) but the outcome should
be that everyone has the opportunity to conduct a preliminary overview of the
evaluation and an analysis of its various ramifications. One way this may be
accomplished is detailed below. In addition, the individuals involved should have
regular and consistent contact with the follow-up data through such means as
reports, presentations at faculty meetings, and problem solving activities based
upon the follow-up (Freeman, 1987). A follow-up evaluation in which useful
and usable data are collected and which meets the criterion of internal logic is
doomed to failure unless a knowledge utilization strategy is built into the system
during the planning and implementation processes.
3. The follow-up evaluation system must operate within, and not apart from,
the organizational framework that currently exists.
Planning
Implementation
follow-up evaluation process may operate. This process will allow the realistic
setting of the parameters within which the follow-up evaluation must function
(e.g., the budget), develop a commitment to the follow-up evaluation process,
and identify possible program decisions that could result.
If, at that point, the decision is still to institute a follow-up evaluation
system, then the preliminary review should be repeated with those individuals
within the organization who serve as the primary decision makers for the
programs to be evaluated. The decision making team should be identified by the
chief administrative officer and should be composed of individuals who
collectively facilitate the establishment of specific program goals, functions, and
operational procedures as well as those who are responsible for decisions
regarding program modification and continuance (e.g., an assistant dean, a
department chair). The same considerations addressed by the chief administrative
officer should be addressed by the decision making team. This group should also
determine the procedures for the creation of the planning and evaluation team.
Planning
The planning and evaluation team is the primary initiation and operational
force in a follow-up evaluation effort. The team determines the details of the
follow-up evaluation (e.g., the types of data to be collected) and is responsible
for communicating to all concerned the form and substance of the various
activities. So that the team can function effectively, it should be composed of
both formal and informal program leaders and be limited to no more than ten
members. The exact composition of the team will be specific to each situation
and is determined by such things as the number of program staff, the leadership
style of the chief administrative officer, budget restrictions, etc. The planning
and evaluation team should be charged with accomplishing four tasks:
The planning and evaluation team should create a viable, flexible, workable
follow-up evaluation plan that:
o specifies the evaluation data required to make decisions regarding the present;
o rates the importance of the various data consistent with the teacher education
program goals and objectives and the current knowledge regarding effective
teaching;
o evaluates the possible data sources and data collection procedures in light of
access, cost, developmental time, staff development needs, time delays, etc.;
o prioritizes the possible data sources and collection procedures in terms of the
follow-up evaluation data required and related resource restrictions; and
o describes the available data collection procedures selected and/or adapted, new
procedures developed, and the data collection training required, if any.
The planning and evaluation team should consider the following types of
evaluation data:
Knowledge
Teaching Behavior/Skill
Teacher Attitudes
Perception of Preparation
Implementation and data collection reflect the activities associated with the
operation of the follow-up evaluation plan. Briefly, the follow-up evaluation
plan established by the planning and evaluation team should be specified to the
extent that procedures for data collection are established. These include selecting
instrumentation and data collectors, establishing procedures for selecting
participants, identifying data management systems, etc. The follow-up plan
should be implemented and operated by the evaluation manager under the
direction of the planning and evaluation team.
CONCLUSION
APPENDIX
The third instrument involves Interpersonal Skills (IS) and ten behaviors.
This instrument assesses the teacher's characteristics as they are related to climate
in, and the teacher's ability to manage, a classroom. The instrument is divided
into three areas as follows:
The fourth instrument involves Professional Standards (PS) and assesses the
teacher's relationship with colleagues, the acceptance of professional
responsibilities and efforts to improve professional skills. The instrument is
divided into two areas as described below:
REFERENCES
Edell M. Hearn
Tennessee Technological University
to analyze and determine whether a given system meets the intended standards.
NCATE has defined governance as follows:
The governance system for the professional education unit ensures that
all professional education programs are organized, unified, and
coordinated to allow the fulfillment of its mission (NCATE, 1987, p. 49).
commission, or committee, it is assumed that these could form all or part of the
administrative body if they were officially recognized by the institution.
As in the past, although governance or its synonym may not be used
directly in the other standards, the likelihood of many other standards being met
or not met depends to a great degree upon the governance system. When
governance is weak it is likely that one or more other weaknesses in the total
teacher education program may be in evidence. Following are some hypothetical
examples of problems that could occur:
2. The knowledge base controlling the teacher education program may have
gaps, or there may be a lack of coordination of the program.
7. Students may not be differentiated by major field of study. For example, the
courses of study within a program for the preparation of an industrial
chemist are essentially the same as those for an individual who wishes to
teach chemistry in grades 7-12. There is, however, a need for the individual
who aspires to be a teacher to take specialized courses to meet state licensure
requirements and to pursue work in pedagogy and other areas of professional
education.
9. Faculty who teach in general education and also in the specialized major
areas of a teacher education program are reluctant to become involved in the
public schools. They do not see it as a part of their job and are sometimes
not rewarded for it when an effort is made to become involved.
11. Separate autonomous programs may exist in which students receive little
preparation in professional education. These programs are most often
offered through a College of Arts and Sciences in such areas as Music, Art,
Health and Physical Education, and Psychology.
12. Advisement may be frustrating and complex for both faculty and students.
A strong system of governance must exist for the other NCATE standards to
be met. The governance system must be in place, along with a knowledge base
and a system for quality controls, before such activities as follow-up evaluations
of graduates can be effectively implemented. It is important therefore, that the
unit give special attention to this fact. One must not forget that a system for
evaluation is a necessity. It is a rather simple process of accountability that
calls for the institution, based upon the best research available, to plan a
program(s), explain the program(s), implement the program(s), and prove that
the goals and purposes of the program(s) have been met. The crucial element of
any program is how well the graduates have performed. Without a proper
governance structure this information is not likely to be obtained.
Historically, teacher education has been faced with the responsibility of
effecting a balanced program between disparate elements. On the one hand it has
been important to have the cooperation of the arts and sciences faculty for both
general education and subject matter preparation while at the same time having
the support of the practitioners in the field to provide clinical experiences and
supervision. It has been necessary for the professional faculty to work
effectively with both groups in trying to blend both elements (general and
clinical education) into the teacher education programs. This program-making
responsibility and authority is central to the policy decisions governing teacher
education in the institution. Program development is no easy task and the
burden sometimes resides with teacher education faculty.
There is no one system of governance that will meet the needs of all
institutions. It should be kept in mind that governance must be tied to the
Professional Education Unit, which is described as
within the realm of professional judgment. The checklist and questions can be
applied to all types of institutions irrespective of size or mission.
[Figures: organizational charts illustrating alternative governance structures
for the teacher education unit. One shows a President, an Appeals Committee, a
College Curriculum Committee, and the Teacher Education Faculty (the Unit);
another shows a President, Academic Dean, and Director of Teacher Education
over programs in Early Childhood, Elementary, and Secondary Education, with a
Teacher Education Committee (the Unit) and a Graduate Executive Committee
coordinating with the Dean of the College of Education; a third shows the Deans
of the Colleges of Business Administration and Education coordinating their
departments through a Director of Support Services and Laboratory Experiences
and an Associate Dean for Administration. Dashed lines indicate coordinating
relationships.]
Summary on Governance
It is clear that if governance is not in place many of the other standards will
be weak or not met. All of the standards are important but the two most critical
standards in the minds of most educators are those of governance (control) and
evaluation (program and product [student] at entrance and after entering the
profession).
In examining the governance of a teacher education program, special
attention should be given to the NCA TE criteria for compliance with the
Standard V. B: Resources. Particular attention should be given to the criteria
since they are closely related to governance and control of the teacher education
program and have been sources of problems in meeting accreditation standards at
some institutions. The criteria are presented following the Figure.
[Figure: governance chart showing a Curriculum Council (Faculty Senate),
College of Education departments, subject matter departments of other colleges,
and an Inter-University Committee on Teacher Education (the Unit); dashed lines
indicate coordinating relationships.]
92. An identifiable and relevant media and materials collection for use by
students and faculty (NCATE, 1987).
The role the Unit and governance play in regard to the purely quantitative
standards of accreditation is also critical. For example, NCATE standards require
that students achieve a 2.5 grade point average for admission to the study of
professional education, that students spend a minimum of ten weeks full-time in
student teaching, and that maximum teaching loads not exceed 12 semester hours
for full-time faculty at the undergraduate level and 9 semester hours for faculty
teaching at the graduate level.
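Purely quantitative criteria such as these lend themselves to a mechanical check. The sketch below restates the thresholds just cited; the values passed in to the checks are hypothetical.

```python
# Quantitative NCATE criteria cited above; the records checked are hypothetical.
MIN_GPA = 2.5              # for admission to professional education
MIN_STUDENT_TEACHING = 10  # weeks, full time
MAX_UNDERGRAD_LOAD = 12    # semester hours, full-time undergraduate faculty
MAX_GRAD_LOAD = 9          # semester hours, graduate-level faculty

def admission_ok(gpa: float) -> bool:
    return gpa >= MIN_GPA

def student_teaching_ok(weeks: int) -> bool:
    return weeks >= MIN_STUDENT_TEACHING

def faculty_load_ok(hours: int, graduate: bool) -> bool:
    limit = MAX_GRAD_LOAD if graduate else MAX_UNDERGRAD_LOAD
    return hours <= limit

print(admission_ok(2.7))                # a 2.7 GPA meets the 2.5 minimum
print(student_teaching_ok(8))           # eight weeks falls short of ten
print(faculty_load_ok(12, graduate=False))
print(faculty_load_ok(12, graduate=True))
```

Such checks settle only the arithmetic; as the text goes on to note, professional judgment remains in deciding whether the standards are met.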
Irrespective of the specificity that is contained in the standards, there is
much professional judgment to be made in regard to whether they are met. This
is as it should be! Application of the questions to the compliance criteria in the
Appendix of this chapter should be helpful in establishing an excellent system of
governance. In turn, the likelihood of an institution being successful in meeting
accreditation standards is greatly enhanced.
APPENDIX
The governance system for the professional education unit ensures that
all professional education programs are organized, unified, and
coordinated to allow the fulfillment of its mission (NCATE, p. 49).
Criteria 68. The goals of the professional education unit are congruent with
the institution's mission.
2. What evidence exists to indicate that the mission of the institution is in
harmony with the goals of the professional education unit?
3. If the mission of the institution and the goals of professional education are
not in harmony, what is the major rationale?
4. How did the institution determine that the goals of the professional
education unit and the missions of the institution were congruent?
Criteria 69. The unit effectively carries out its responsibility and discharges
its authority in establishing and implementing appropriate policies for
governance, programs, admission and retention of education students, and faculty
selection and development in professional education.
2. Are there indications (in writing) that the control of programs lies with the
unit of teacher education?
3. Has the institution adopted admission and retention standards that reflect the
carrying out of other NCATE criteria for compliance?
4. What types of activity have been provided in the past three years for faculty
development in professional education?
7. What are the procedures and processes that are followed in the
employment of new faculty?
Criteria 70. The unit effectively carries out its authority in making decisions
affecting professional education programs.
Criteria 71. The unit effectively carries out its responsibility and discharges
its authority for identifying and using appropriate resources for professional
education.
3. How do the resources available for teacher education compare with those
available to other divisions of the institution?
4. What evidence is there that the students actually make effective use of the
resources provided?
Criteria 72. The unit effectively carries out its responsibility and discharges
its authority in developing and maintaining appropriate linkages with other
units, operations, groups, and offices within the institution and with schools,
organizations, companies and agencies outside the institution.
1. By what means does the teacher education unit assure working relationships
with schools, organizations, companies and agencies outside the institution?
2. What has been the result of the activities conducted with schools,
organizations, companies and agencies outside of the institution within the
past three years?
4. Are there problems that exist between teacher education and other units at
the institution?
5. If Question 4 was answered "Yes," what problems, if any, have been dealt
with in the last three years and how?
7. Is there respect for each of the varying roles of faculty in the institution to
the extent that there is a high degree of cooperation and a spirit of learning
from one another?
Criteria 73. The unit has, and regularly monitors, a long-range plan.
1. Has the teacher education unit had adequate input into the long-range plans
of the institution?
5. How, by whom and how often are the long-range plans of the institution
monitored?
6. How, by whom and how often are the long-range plans of the teacher
education program monitored?
2. By whom was the person in Question 1 appointed and how was it made
official?
1. What evidence is there that assures that the plan is systematic for ensuring
the involvement of teachers, education students and other education
professionals in those policy-making and/or advisory bodies recommending
requirements and objectives for the professional education programs?
2. By whom were the individuals selected and what was the process?
4. Have any of the above bodies, on their own initiative, made specific
recommendations that have been considered, approved, or rejected by the
governing unit?
Criteria 76. Policies in the unit guarantee due process to faculty and students.
2. Who is the person responsible for monitoring the affirmative action goals
of the institution?
3. Is there equity regarding salary, working conditions, and other factors among
faculty in teacher education and other faculty in the institution?
4. What is the appeals procedure used by the institution if a student feels that
due process has been violated or he/she has been treated unfairly?
6. Have there been instances of the due process procedures being utilized during
the past three years?
REFERENCES
Mary F. Berney
Tennessee Technological University
Educational facilities have come a long way from the days when the primary
demand was for some structure to protect the students and teachers from the
elements (Castaldi, 1987). The facility is now considered an integral part of the
instructional process, and evaluation of physical facilities is a component in the
accreditation processes established by most national, state, regional, and local
accrediting agencies. In its 1986 revision of standards, the National Council for
Accreditation of Teacher Education (NCATE) placed added emphasis on the
evaluation of resources. Standard V. B: Resources includes criteria for
compliance under Personnel Resources; Funding Resources; Physical Facilities;
and Library, Equipment, Materials, and Supplies. Similar standards can be found
in state policy guidelines for teacher education program approval as well as in
the guidelines for other agencies at the national, regional, and local levels.
This chapter is not intended to make physical facilities experts of the reader,
or to replace more detailed facilities evaluation guidelines that might exist for
individual institutions. It calls attention to steps that can be taken to incorporate
facilities evaluation into the routine planning and evaluation processes of an
educational unit. The reader is asked to keep in mind that facilities planning can
refer both to the construction, or anticipated construction, of a new facility and
to the continual upkeep of an existing one. Much of what is written on the
topic of facilities assumes new construction, but the principles can be applied to
the renovation of existing facilities as well. As with any form of evaluation, the
evaluation of physical facilities should not be relegated to crisis status, or
become a task which is conducted by the unit only in its self-study for an
accreditation visit. If evaluation is to be used to direct improvement, evaluation
must occur routinely.
This chapter will address the evaluation of physical resources with an
emphasis on existing facilities. Financial and Library Resources are covered in
separate chapters, and although specific information about those topics will not
be repeated here.
the data base for the facility planner must include valid, reliable,
objective information concerning the level of adequacy of existing
facilities to provide for (1) meeting current ... program requirements,
(2) effective and efficient utilization of space, (3) a structurally and
mechanically sound physical plant, and (4) a healthy and safe
environment for building occupants (p. 8).
Those four points resemble the criteria set forth by the various state and regional
agencies and approval boards which address teacher education programs.
Educational Specifications
"The design of the educational facility should facilitate and promote the
freedom of movement of all handicapped students and provide for their
participation in as many regular activities" as possible (Castaldi, 1987, p. 141).
We cannot discriminate against the physically handicapped by the design of our
facilities. Access to buildings, and within them, to restrooms, offices,
classrooms, and other instructional spaces must be provided. Beyond that,
however, constant monitoring is required to ensure that facilities remain safe and
accessible to everyone.
In addition to providing access to the facilities, teacher educators must
provide instruction designed to address the needs of handicapped learners. The
educational program must offer students who major in Special Education training
and research activities which are designed to develop and enhance their skills in
working with handicapped learners. Human development, psychology, and
motor skills laboratories are among the specialized facilities that should be
provided and evaluated.
These guidelines are by no means all-inclusive, but they can serve as the
starting point for an evaluation. Many states include guidelines for facilities
evaluation in their standards for program approval, and faculty at other
institutions might also be good sources of additional points for a checklist.
Garten provides a detailed set of evaluation guidelines which covers print
materials in libraries. In the interest of conserving space, that information will
not be repeated in this chapter.
Site
1. Is the facility easily accessible to all students, faculty, staff, and visitors in
terms of:
Location
Parking
Handicapped Access
2. Is the site free from traffic hazards?
Energy Conservation
1. Are heating and cooling systems inspected regularly and kept in good repair?
2. Is the heating and cooling system controlled by an automatic timer?
3. Are windows properly sealed?
4. Are there curtains and blinds or shades for all windows?
5. Do the exterior doors fit properly?
6. Are there leaks or drips in the plumbing?
Instructional Spaces
Audio-Visual/Microcomputer Center
Handicapped Access
1. Are all buildings barrier-free? This includes libraries, the student center,
classrooms, and administrative offices.
2. Are water fountains accessible to wheelchair-bound persons?
3. Are access ramps and elevators provided in parking areas as well as
buildings?
4. Are furniture and equipment arranged so as not to create barriers?
5. Is parking made and kept available for handicapped persons?
6. Are restrooms designed to provide access?
Administrative Areas
Ease of Maintenance/Housekeeping
Restrooms
Public Areas
1. Are directories within each building prominently displayed and kept current?
2. Are public areas well-lighted, well-ventilated, and pleasant in appearance?
3. Do distinct non-smoking areas exist and are the regulations enforced?
Robert L. Saunders
Memphis State University
Summarizing the reform efforts in teacher education for the past half
century, Bush (1987) asserts that, historically, teacher education has been
economically impoverished, receiving much less funding than parent institutions
allocate for other professional fields. Bush blames this funding inadequacy for
the failure of past reform efforts to produce substantial change.
Lanier and Little (1986) cite the findings of several researchers which
support the conclusion that "the record of financial support for teacher education
is low" (p. 556). Institutional analyses conducted by Clark and Marker (1975)
support the earlier point that the low funding of teacher education is attributable
in part to the prestige factor, noting that "teacher training is a low prestige, low
cost venture in almost all institutions of higher education" (p. 57).
A consistent pattern of apparent underfunding of teacher education was
found by Peseau and Orr (1980). The title of their research findings, "The
Outrageous Underfunding of Teacher Education," has been repeated many times
in discussions about teacher education. Virtually all the findings from a
longitudinal study of teacher education funding in 63 leading institutions in 37
states support the premise that teacher education programs are indeed poorly
supported. For the year 1979-80, for example, Peseau (1982) reported that
schools, colleges, and departments of education (SCDEs) received "only about
65% as much as for a public school student and only 50% as much as the average
cost per undergraduate student in all university disciplines" (p. 14).
One reason, among others, for the underfunding of teacher education,
according to Peseau and Orr (1980), is found partially in state funding formulas
which typically place teacher education with undergraduate programs of low
complexity. Exacerbating this problem are two related conditions: (a) In many
states, funds are allocated on historical patterns which were built on traditional
assumptions that were unfavorable to teacher education (Temple & Riggs,
1978); and (b) When university administrators reallocate funds derived from state
formulas they give less to teacher education and more to programs that, in their
judgment, deserve or need a higher level of support (Peseau & Orr, 1980). The
lack of leverage which SCDEs have in influencing the institutional reallocation
of state-derived funds was noted by an External Review Committee for the
Improvement of Teacher Education in Georgia (1986).
Nutter (1986) described three negative funding situations found in teacher
education which contribute to the general evidence concerning the depth and
scope of current underfunding. The first situation is Starving Institutions.
Some, perhaps many, of the country's approximately 1200 institutions with
teacher education programs have declined from historically mediocre institutions
into fiscal (and intellectual) poverty, unable to support adequately any of their
programs. The second situation is seen at Research-Oriented Institutions. In
some large, well-supported, research-oriented institutions, teacher education
programs tend to be neglected in favor of more "academically respectable"
endeavors. Finally, the third situation is Inhospitable Institutions. In some
institutions, the attitudes of central administration range from lukewarm to hostile. Some
administrators regard teacher education as an embarrassment, a peripheral
activity, perhaps a necessary nuisance and certainly not something on which to
spend much money. As pointed out by Monahan, Denemark, Egbert, Giles and
McCarty (1984), inhospitable institutions can be found even among those with
strong funding bases and with well-developed and highly respected programs in
other fields. While citing the need for stronger financial support from state and
national governmental agencies, and from philanthropic foundations, Howsam,
Corrigan, Denemark and Nash (1985) use strong language in identifying funding
inadequacies at the institutional level.
Current program approval procedures and standards at the state level as well
as regional and national accreditation standards are of insufficient help to teacher
education administrators and others trying to evaluate the adequacy of financial
resources given teacher education programs. This shortcoming lends credence to
the importance of the Accreditation Plus Model being developed by the Center
for Teacher Education Evaluation at Tennessee Technological University.
In this section, a summary analysis is given of how state, regional and
national approval and accreditation procedures address the matter of financial
resources.
It was beyond the scope of this chapter to identify and analyze the program
approval policies and procedures for the various states. Some generalizations are
possible, however, based on the generally available knowledge about such
approval systems.
The six regional accreditation agencies (Middle States, New England, North
Central, Northwest, Southern and Western Associations of Colleges and
Schools) have clearly stated standards which address the financial resources of
institutions; however, the standards are institutionally focused and thus of
limited use to those seeking to evaluate the financial resources for teacher
education programs.
It is true, of course, that teacher education programs benefit when parent
institutions are in compliance with the standards for resources. Like other
programs within a given institution, a teacher education program benefits from
being in an institution that has "a history of financial stability" and where
"financial planning and budgeting are ongoing, realistic and based upon
institutional educational objectives" (Western Association of Colleges and
Schools, 1988, p. 82). Moreover, in addition to being institutionally oriented,
the standards emphasize processes and procedures (e.g., budget control,
purchasing and inventory control, refund policies, cashiering, etc.).
It is safe to say, then, that standards used by the regional accrediting
associations were not designed and are not intended to be used as instruments for
evaluating the financial resources of teacher education programs--or any other
specific program area. An exception to this generalization might be when a
program within an institution is so poorly funded that this condition prevents
the institution as a whole from meeting the standards of overall financial
sufficiency (generally found in the preamble).
National Accreditation
resources are limited and when a long-standing historical pattern of inequity
exists.
One way, perhaps the best way, to approach the problem would be to
reconceptualize teacher education as a clinically based instructional mode (like
medicine, veterinary medicine, nursing, and clinical psychology) and remove it
from the lecture-discussion mode (like history and English). Smith (1980)
makes this point forthrightly: "Without it [a new rationale and formula for
financial support], a clinical program is impossible, for clinical work
accompanying courses in pedagogy requires at most a ratio of 10 students per
instructor" (p. 111).
A reconceptualization of teacher education is unlikely if attempted in a
vacuum, however. For it to occur, as Clark and Marker (1975) wrote, teacher
education must in fact remove itself from the "classic mold of undergraduate
lecture courses where students end up being taught to teach by being told how to
teach" (p. 57). Kerr (1983) extends this notion by suggesting that this
fundamental change is not likely to occur so long as teacher education remains
an undergraduate program.
Encouraging in the reconceptualization of teacher education along the line
of a clinically-based instructional mode is the cost model for clinical teacher
education developed by a legislative task force in Florida (Peseau, Backman &
Fry, 1987). The unique activities of clinical preparation were identified and
applied to actual cost data gathered from nine teacher education programs.
The task force recommended that teacher education programs be about two-thirds
clinical (translated as one-third clinical instruction and two-thirds clinical
practice) and made a budget comparison on the basis of 33% classroom
instruction, 22% clinical instruction, and 44% clinical practice. After applying
weights of 1.5, 2.0, and 2.5, these calculations resulted in "an increase in
indirect program costs of 106% for the nine undergraduate programs and an
overall (undergraduate and graduate) forecast budget increase of 47%" (p. 33).
Support for the reconceptualization of teacher education as a clinically oriented
professional model was provided by the report of the National Commission for
Excellence in Teacher Education (1985) referred to earlier. "At least three
factors," the report states,
Another way to improve the rationale by which funding decisions are made
would be the use of the concept of peer institution comparisons (Peseau, 1988).
Dependent upon the availability and analysis of quantitative data on resources and
productivity variables, this concept can help teacher education administrators by
providing less biased justification for needed additional resources, that is,
comparative quantitative information on resources and productivity of similar
institutions. Peseau (1988) notes that the peer identification process does not
preclude comparisons with other institutions of perceived better quality. The
concept of peer institution comparisons is not unlike the concept of peer
program comparisons, suggested several places throughout this chapter.
Correcting the funding rationale would enable SCDEs to engage in budgeting
processes with more confidence, optimism, and success than is currently the
case.
Personnel Resources
SCDEs must have faculty with sufficient expertise and in sufficient
numbers to enable teacher education units to achieve their tripartite mission of
instruction, research and service. This three-fold mission is
fundamental, the raison d'etre of a professional school as contrasted with an
academic school, a research unit, or a service agency as separate and disjointed
entities.
Funding should be in accordance with the gestalt of this three-fold mission.
Funding most often is based on the instructional function alone, resulting in
either severe overloading of faculty or diminished roles in research and service
(Orr and Peseau, 1979). Central administrators tend to require teacher education
units to generate their own research dollars through externally-funded grants.
This practice often leads faculty into research activities of questionable relevance
and utility to K-12 schools, creating a public relations problem. Research that is
highly relevant to schools, as noted by Howsam, et al. (1985), is not unlike
the complementary research and development functions of agriculture schools and
extension programs. Faculty need time to engage in research and development
activities in collaboration with local education agencies. They also need
financial support, along with assigned work time, for the research activities,
even when external funds are unavailable.
The professional service role is endemic to teacher education units. Failure
to provide professional service to schools and other appropriate agencies is a sure
way to earn the labels of aloofness, unresponsiveness, and indifference, the
consequence of which can have a disastrous impact on recruiting programs,
settings for field-based experiences for students, and other collaborative efforts.
Howsam, et al. (1985) provide an instructive and useful way to
conceptualize both the research and professional service roles - and their
interdependence - of teacher education programs. Noting that the most
fundamental of the purposes of a university is "the pursuit of valid knowledge,"
and that "all the basic activities of a university are directly concerned with the
search for valid knowledge, with its presentation and dissemination, and with its
use (emphasis added)" (p. 57), the authors suggest three continua as ways of
conceptualizing the search for new knowledge and the development and use of
that knowledge. The three continua are shown in Figure 1.
other professional program areas within the institutions. SCDEs are often
penalized in this regard. Central administrators are quick to apply the concept of
"market sensitivity" in negotiating salaries for faculty in such areas as business,
engineering, law, and medicine. In applying the concept to teacher education,
however, they tend to think of classroom teachers as the basis of comparison,
and often even that comparison is misread. It is not uncommon to find teacher
education faculty in curriculum and instruction departments, for example, with
salaries less than what they would have if they transferred to K-12 classroom
positions. Faculty in departments of administration and supervision, as a further
example, often receive salaries lower than those in positions being held by their
students (principals, supervisors, superintendents, etc.). If the concept of market
sensitivity is valid in program areas such as law, medicine, engineering and
business, it is also valid in teacher education.
Salaries of teacher educators should be commensurate and equitable with
salaries in other program areas. To set them lower is to give credence to the
widespread belief that state funding authorities and central administrators see less
value in teacher education programs and are willing to relegate these programs to
second class, low prestige status. Faculty should be sufficient in number and
expertise to staff the clinical and laboratory experience programs of the teacher
education unit. As noted earlier, Smith (1980, p. 111) recommends a ratio of 10
students per instructor in the clinical program. The same ratio is required in the
clinical supervision of nursing interns (Tennessee Board of Nursing, 1988).
NCATE (1987) requires a ratio of one full-time faculty member for 18 full-time
equivalent students in practicum experiences. A ratio of five students as the
equivalent of a three-hour course is recommended by the Council for the
Accreditation of Counseling and Related Educational Programs (1988).
Resources should be sufficient to preclude faculty being assigned these
responsibilities because their scheduled courses failed to make minimum enrollment.
Sufficient faculty qualified and available for supervising clinical and laboratory
experience would obviate the need for graduate assistants and part-time faculty
(often employed at the eleventh hour) to provide this instruction in a program
component which students consistently say is the most important of all.
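The ratio standards cited above imply quite different staffing levels for the same clinical enrollment. A rough sketch, using a hypothetical cohort of 180 students:

```python
# Required supervising faculty under two of the ratio standards cited above
# (illustrative only; the enrollment figure is hypothetical).
import math

students_in_clinical = 180  # hypothetical cohort size

smith_ratio = 10  # Smith (1980): 10 students per instructor in clinical work
ncate_ratio = 18  # NCATE (1987): 18 FTE students per full-time faculty member

faculty_smith = math.ceil(students_in_clinical / smith_ratio)
faculty_ncate = math.ceil(students_in_clinical / ncate_ratio)

print(faculty_smith, faculty_ncate)  # 18 10
```

Even this simple arithmetic shows that adopting the stricter clinical standard nearly doubles the supervising faculty an SCDE must fund.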
Programs for faculty development should be operated with sufficient funds
to permit them to be ongoing, viable and effective. It is not enough to say that
such programs should be "at least at the level of other units in the institution"
(Standard V. B: 80, NCATE, 1987). Meeting this standard would be
meaningless if the institution had a weak or non-existent program for faculty
development, by no means a far-fetched possibility given the state of the art in
this matter in higher education generally. More meaningful standards for faculty
development programs would include the presence of official plans, procedures,
and policies regarding faculty development leaves, budgeted funds for the program,
the dissemination of information regarding eligibility, application procedures,
kinds of developmental leaves possible and preferred, and clearly stated
expectations for accountability.
Funds for faculty travel beyond the travel involved specifically in faculty
development leave programs should be available and accessible. Funding should
be sufficient to permit each faculty member to attend professional meetings at
the state and national level, annually and biannually respectively, as a minimum.
Cost sharing between the institution and the faculty member should be
encouraged. The judicious use of discretionary funds or cost-sharing
arrangements for exceptionally productive faculty would be advisable.
SCDEs should be funded sufficiently to enable them to employ
practitioners from the field as part-time, adjunct faculty. NCATE properly
prohibits the overuse of part-time faculty and graduate assistants teaching in the
professional program (Standard V. B: 81). The other side of that coin, however,
is that SCDEs increasingly need clinical, field-based, practitioners to play
essential roles in the education of teachers (External Advisory Committee on
Teacher Education and Certification in Mississippi, 1989). Greater use of
practitioners in establishing practicum and internship programs and supervising
students in such experiences seems to be gaining force. To expect such
programs to function with the meager (and professionally insulting) stipends
characteristic of current student teaching programs is to be naive and short-
sighted. Needed are bolder, more imaginative (and more expensive) approaches
such as the one proposed by Goodlad (1983):
Goodlad predicts that without efforts of this type and the substantial funds
necessary to mount such initiatives,
Reports of the Carnegie Forum on Education and the Economy (1986) and
the Holmes Group (1986) recommended significantly different approaches to the
way students gain clinical and internship experience and the increased use of
"proven teachers" or mentors. These recommendations, as Goodlad's, would
require substantial increases in funds allocated teacher education programs to
cover, among other costs, the employment of mentoring teachers and increased
time allocation of university faculty.
Some institutions and some state agencies are moving toward the
employment of K-12 educators who are assigned responsibility for specialized
instructional methodology, incorporating this highly criticized instructional
component into extended and intensified internships. Should this movement run
full course, significant amounts of new funds would be necessary, at least until
reductions in faculty size occur as a result of the shifted responsibility.
No aspect of teacher education is more seriously underfunded than the area
of support personnel. SCDEs with adequate numbers of secretarial and clerical
personnel, technicians, artists, computer programmers, and media specialists are
clearly exceptions. It is not uncommon to find as many as 12 or 15 faculty
Physical Facilities
Again, the criteria used by NCATE do not go far enough and thus require
examiners to furnish the specifics, to flesh out the nuances, and tease out
dysfunctional and restricting circumstances and conditions. An Accreditation
Plus Model should contain standards that cover these nuances, dysfunctions and
restrictions. No longer can teacher education be regarded as an inexpensive
program, not requiring complex and expensive technology, laboratory
equipment, and abundant supplies. The truly effective and sophisticated
This section sets forth a set of criteria which can be used in the evaluation
of financial resources for teacher education programs. The criteria were extracted
from citations in the above sections. In some instances the criteria were
developed from the author's 32 years of experience in evaluating financial
resources as an administrator of teacher education programs.
The criteria go beyond those found in state approval systems and in
regional and national accrediting agencies. In some instances, however, the
criteria have embedded in them the standards used in state, regional and national
accrediting/approval systems. For each statement the user can indicate the degree
to which each statement is characteristic of the teacher education program being
evaluated, by circling the appropriate number from "1" (the statement is not at all
true) to "4" (the statement is always true).
1 2 3 4 Funds allocated to the teacher education unit are equitable with
those allocated to other program areas within the institution,
based on whatever unit measure (size, scope and depth of
program, etc.) is used in the allocation process.
Personnel
1 2 3 4 There are sufficient budgeted line positions for faculty to enable
the unit to achieve the threefold mission of instruction,
research, and professional service.
1 2 3 4 Employment policies permit and funds are available for hiring
qualified adjunct, part-time faculty to augment the regular
faculty, exceeding the traditional arrangements for evening and
weekend courses by using such persons as clinical professors,
supervisors of interns and as mentors.
1 2 3 4 Funds are available and institutional policies permit the unit to
operate a structured and on-going program of faculty
development. Included are provisions for professional leave and
sabbaticals sufficient to enable all faculty to be eligible for such
a program at least once during each seven-year period. Funds
are sufficient to preclude colleagues having to take on overloads
while a faculty member is on professional development leave.
1 2 3 4 Discretionary funds are available and can be used to fund faculty
travel beyond the above level to accommodate faculty of
exceptional productivity and prestige.
Physical Facilities
1 2 3 4 Faculty and staff in the teacher education unit have office space,
instructional space and other space that enables the unit to
achieve its goals and objectives at a high level of attainment and
are, also, comparable in size, maintenance and functionality to
1 2 3 4 Both the branch library and the learning resource center are
staffed sufficiently to enable students involved in late-evening
and weekend courses to have full access.
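Since the instrument asks the user to circle a value from 1 to 4 for each statement, the circled values can be tallied simply. A minimal sketch with hypothetical ratings (the section names follow the chapter; the item scores are invented):

```python
# A minimal tally for the four-point rating instrument sketched above; the
# circled values here are hypothetical, but the scale is the chapter's own
# (1 = not at all true ... 4 = always true).
ratings = {
    "Personnel": [3, 2, 4, 3],      # circled ratings for the personnel items
    "Physical Facilities": [2, 3],  # circled ratings for the facilities items
}

# Mean rating per section -- a simple way to see where the unit stands on the scale.
means = {section: sum(vals) / len(vals) for section, vals in ratings.items()}

for section, mean in means.items():
    print(f"{section}: mean {mean:.2f} of 4")
```

Section means of this kind can flag which resource areas (personnel, facilities, library) fall furthest below the "always true" end of the scale and so deserve attention first.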
REFERENCES
Ayers, J. B., Gephart, W. J., & Clark, P. A. (1988). The accreditation plus
model. Journal of Personnel Evaluation in Education, 1, 335-
348.
Bush, R. N. (1987). Education reform: Lessons from the past half century.
Journal of Teacher Education, 38 (3), 13-19.
Carnegie Forum on Education and the Economy. (1986). A nation
prepared: Teachers for the 21st century. New York: The
Carnegie Foundation.
Clark, D. L. & Marker, G. (1975). The institutionalization of teacher
education. In K. Ryan (Ed.), Teacher Education. (74th Yearbook of the
National Society for the Study of Education, Part 2). Chicago, IL:
University of Chicago Press, 53-86.
Council for the Accreditation of Counseling and Related Educational Programs.
(1988). Accreditation procedure manual and application.
Washington, DC: American Association for Counseling and Development.
External Review Committee for the Improvement of Teacher Education in
Georgia. (1986). Improving undergraduate teacher education in
Georgia. Atlanta, GA: The University System of Georgia Board of
Regents, 14-15.
External Review Committee on Teacher Education and Certification in
Mississippi. (1989). Recommendations for the improvement of
teacher education and certification in Mississippi. Jackson, MS:
State Board of Education and Board of Trustees of Institutions of Higher
Learning, 14-15.
Goodlad, J. I. (1983). A Place Called School. New York: McGraw-Hill
Book Company.
Holmes Group (1986). Tomorrow's teachers: A report of the
Holmes group. East Lansing, MI: The Holmes Group.
Howsam, R., Corrigan, D., Denemark, G., & Nash, R. (1985). Educating a
profession, (2nd ed.). Washington, DC: American Association of
Colleges for Teacher Education.
Kerr, D. H. (1983). Teaching competence and teacher education in the U. S. In
L. S. Shulman and G. Sykes (Eds.), Handbook of teaching and
policy, New York: Longman., 126-149.
Edward D. Garten
The University of Dayton
Central to this chapter is the assumption that each college or university library
system--and the education library or collection within those systems--is unique
and therefore should determine its own criteria for performance and evaluation.
Any evaluation should be undertaken within the framework of the college or
university's mission and goals. While recognizing that all academic libraries are
unique and individual, this chapter is, occasionally, prescriptive and will refer to
specific standards developed by the Association of College and Research Libraries
(ACRL), a division of the American Library Association (ALA). All
suggestions within this chapter are intended to assist non-librarians who are
responsible for determining priorities and evaluating the performance of a library
which supports a teacher preparation program. While the standards and
guidelines noted in this chapter cannot be stated as absolutes applicable to all
college and university libraries, such standards and guidelines do set forth a
process by which expectations may be established. Further, they suggest the
topics that must be addressed in some fashion during any evaluation of a library
which supports a teacher education program.
Underlying Assumptions
This chapter and any evaluation of library facilities as they support teacher
education programs are based on three assumptions which are described below.
and success of scholarship and research afforded both students and faculty. It is
assumed that the academic library which supports a teacher education program
has a mission statement which has been reviewed by appropriate senior
university officials and that this mission statement is congruent with the
mission statement of the School, College, or Department of Education (SCDE).
Library evaluators should be certain that a library mission statement exists.
That statement should be current and widely distributed on campus. If teacher
education is an important mission of the institution, then that should be reflected
at some point in the library's mission statement. In a larger university, which
has many colleges and schools, and in particular a school or college of education,
it is important that the university library have a specific undergraduate mission
statement. In October 1987, the ACRL published guidelines for the
development of such a mission statement. Teacher education program evaluators
must obtain their academic library's statement, if it exists, and make judgements
on that statement's adequacy.
3. The library should be directly related to the institution's mission and its
programs of instruction.
4. The institution should have its own library or collection of resources and
while cooperative relationships and arrangements with other institutions are
encouraged, the institution's own library should have the resources to
support its programs.
7. The chief administrative officer of the library and the professional staff are
responsible for administering the total program of library services within the
college or university. An advisory committee of faculty and students should
assist them in the planning, utilization, and evaluation of library resources.
8. The library budget should include all expenditures related to the operation
and maintenance of the library. Sufficient funds should be provided to
support a sound program of operation and development, with planning for
subsequent years and a systematic program for removing obsolete resources.
The majority of standards and guidelines which have been published by the
ALA have been developed and promulgated by ALA's ACRL. The guidelines
and standards have been developed and tested through extensive participation by
librarians, administrators, and others involved with higher education. These
documents are impressive because they were prepared by professionals who are
dedicated to the ideal of superior libraries which are related to the college or
university's mission, are adequately staffed, and are continuously evaluated.
Perhaps more importantly, each of the statements has been developed and refined
through a meticulous process that includes development of an initial draft by a
committee; circulation of the draft for review and revision; development of a
final document that is approved by the Board of Directors of the ACRL; and
finally, publication and wide circulation of the standards. The documents are
well-written and can be understood and appreciated by non-librarians. They
include introductions, definitions, explicit standards with supporting
commentaries, and supplementary checklists or quantitative analytical techniques
to assist in the application of standards.
The guidelines and standards published by ACRL (this also can be said of
the regional accreditation bodies' documents) are standards which focus mainly on
inputs or process criteria. Until recently the guidelines rarely addressed outcomes
or evidence of effectiveness. To the credit of both, however, ACRL and the
regional accreditation bodies have begun to recognize the need for more tangible
outcomes information and are developing better ways to obtain concrete evidence
of effectiveness.
Perhaps the most valuable and comprehensive standards document for use by
the evaluator is ACRL's "Standards for College Libraries, 1986," a revised
version of the 1975 standards. The 1986 version reflects the integration of new
technologies into libraries. Because all of the standards which are found in this
document are germane to the specialized education library or the education
resources which are part of the larger college or university library, evaluators
especially are urged to read these standards with care. Addressing the questions
raised by each standard is critical to a fair and comprehensive assessment of
library support.
In May 1986, ACRL released the "Guidelines for Audiovisual Services in
Academic Libraries: A Draft." Evaluators of libraries and learning resource
centers which support teacher education programs will want to familiarize
themselves with this document as it offers excellent guidance relative to media.
Microcomputer use has increased within both general academic libraries and
specialized education libraries. While no guidelines for evaluating
microcomputer and software support for teacher education programs have been
written by the ALA or the ACRL, guidelines and standards do exist. See, for
example, Cohen's (1983) list of evaluative criteria for software. The ACRL's
"Administrator's Checklist of Microcomputer Concerns in Education Libraries"
is another suggested resource.
In "Guidelines for Extended Campus Library Services," ACRL recognized
the tremendous growth in off-campus instruction. This document reflects the
concern that librarians and accreditation agencies share about the quality of off-
campus education: Are library facilities at off-campus sites equivalent to those
on-campus? Numerous other issues are raised in the "Guidelines" and evaluators
are urged to review this paper with some care.
Budget. Most libraries are heavily dependent on yearly base funding from
the parent institution for operating expenses, including personnel and
"one time," though it must also fit into any process established by the
institution. The goal for performance evaluation, of course, is to arrive at a
clearly stated set of expectations which can be matched against the resources
needed.
There is no single way of measuring achievement, thus a variety of
procedures and methods should be explored. All activities for performance
review should, within economic and political constraints, provide a setting
within which an open, supportive assessment can occur. Inevitably,
comparisons will be made with libraries in other colleges and universities.
Although such comparisons are difficult to avoid, they should be made carefully.
Any performance study should be aided by appropriate quantitative measures and
should never be based solely on subjective measures. Finally, any performance
evaluation requires that the responsibility for the evaluation be clearly assigned;
that the procedures to be followed are understood fully by the participants; and
that the goals are stated and defined clearly. Resources exist for securing data and
other performance appraisal information and should be consulted for useful
instrumentation and methodology.
Finally, the Evaluative Criteria which follow suggest a range of questions
that explore ways libraries have gathered data for performance evaluation. The
evaluator will want to assess what the library is currently doing to obtain
performance data and perhaps recommend studies or methodologies which will
obtain useful data for future review.
APPENDIX
Dictionaries
Encyclopedias
Thesauri
Indices to Education
Directories
Handbooks
Bibliographies
This resource is intended to serve as the first tool in the mutual department
or school of education/librarian assessment of library services supportive of
teacher education programs. It is not meant to be used alone; rather, it or the
alternative list of questions in the document which follows is intended to
be used in a process of discussion and services description.
The questions which follow are suggested as one means of reaching a proper
description and assessment of the library which supports the teacher education
program. This guide and the more procedural and methodological one which
follows include many of the measures for evaluation now commonly accepted
and practiced within academic libraries.
Budget Adequacy
Collection Adequacy
1. Are the policies governing access and use of the collections clearly stated
and are they readily available?
2. Are the bibliographic records adequate and accurate?
3. Are the collections properly housed?
4. How readily can the library provide materials now owned?
5. Is the staff provided for technical services and other collection-related
services sufficient for the task?
6. Does the library have adequate safeguards against loss, mutilation, and theft?
Adequacy of Services
The evaluator should use these questions as a guide for discussion with
those librarians and curriculum materials specialists responsible for providing the
services and resources which support the college or university's teacher
preparation programs.
As a caution, it should be noted that not every academic library does all of
the following evaluative activities. Libraries may find some evaluative practices
more relevant than others. Additionally, and somewhat unfortunately, many
libraries simply may not have adequate staffing to afford the time to conduct
many of the evaluative activities noted here. However, several of the activities
are considered more important than others.
Collection Evaluation
Catalog Evaluation
7. Does the library test document delivery success rate by use of any accepted
documents delivery test?
8. Does the library analyze the proportion of interlibrary loan requests which
are satisfied?
9. Does the library assess the time required to satisfy interlibrary loan requests?
10. Are education students and faculty members satisfied with the quality of
interlibrary loan services?
1. Does the library periodically study its facilities and, in particular, if there is
a separate education library, are these facilities periodically studied? If such a
study is done, are physical arrangements of materials, service points,
furniture, and equipment evaluated?
2. Does the library analyze the use of space for stacks and seating by
comparison with accepted standards?
3. If there is a separate education library, do seating and stack space compare
favorably with facilities provided in other areas of the library?
4. Does the library periodically survey patrons on their evaluation of
surroundings, e.g., environmental conditions, attractiveness, etc.?
1. Does the library maintain statistics on the number of patrons entering the
library?
2. Does the library measure average time patrons spend in the library?
3. Does the library measure patron use by hourly counts in individual areas of
the library?
4. If there is a separate education library, is this done here as well?
5. Does the library compare hours of service with those of similar libraries?
6. Does the library periodically survey patrons on types of materials used in
the library including personal materials?
1. Does the library maintain use statistics? Are they divided by discipline
including education?
2. Has the library ever requested users to indicate which retrieved
citations/items were relevant?
3. Has the library studied searcher performance by comparing a search against
standard searches conducted solely for the purpose of evaluation?
4. Does the library survey patrons on their use of the on-line search services to
obtain information?
REFERENCES
The year 1973 marked the beginning of what Madaus, Stufflebeam, and Scriven
refer to as the "age of professionalization" (1983, p. 15) in the field of
evaluation. In the 1970s, eight journals devoted partially or entirely to
evaluation were established, universities began offering graduate programs in
evaluation, and centers for the study of evaluation were established. Educators
and legislators called for accountability and more evaluation models were
developed. A recent count yielded over fifty extant evaluation models. How
does this proliferation of models and theories about modeling affect the routine
operation of teacher education programs? Most decisions can be traced to the use
of some model. Understanding the models and the role of empirical evidence in
decision-making will enhance the ability to make good decisions. This chapter
is written for use by individuals who have addressed the questions relative to
accreditation for their units but who face additional questions about program
evaluation and improvement. The emphasis is on models and how they can be
applied to improve teacher education programs. Readers who find this chapter
too elementary will want to go directly to the cited sources for more
sophisticated treatments of the content. Background information, definitions,
and suggestions for choosing the appropriate model(s) for a given situation are
provided.
Definitions of Terms
The following terms will guide the discussions in the remainder of the
chapter. The first of the definitions has already been seen in the second chapter,
but bears repeating here. "A model ... is a representation of the entity being
represented" (Ayers, Gephart, & Clark, 1988, p. 336). Stufflebeam and Webster
define educational evaluation as a study "that is designed and conducted to
assist some audience to judge and improve the worth of some educational object"
(1983, p. 24). That definition, as the authors observe, is common, despite the
fact that it is not often adhered to in practice. It is also important to remember
The Accreditation Plus Model was developed by Ayers, Gephart, & Clark
(1988) and is based in part on the professional judgment model of evaluation and
includes the use of other extant models as necessary to answer evaluation
questions which are not addressed by the accreditation process. The latter part is
the "Plus." In the process of developing the model, its creators searched the
literature and found reference to some fifty extant evaluation models. Not all of
them are relevant to educational evaluation or, more specifically, to the evaluation
of teacher education programs, but some are, and those will be
discussed later in this chapter.
The Accreditation Plus Model was designed for use by those seeking
approval or accreditation from some national, state, or regional agency. A
survey of states revealed that in most, if a teacher preparation program meets the
standards set forth by the National Council for the Accreditation of Teacher
Education (NCATE), it will also meet or exceed the standards for other national,
state, or regional approval processes. So its creators chose the accreditation
process, the professional judgment approach to evaluation, as the basis of the
model.
Professional Judgment
Of systems analysis, House writes, "In this approach one assumes a few
quantitative output measures, usually test scores, and tries to relate differences in
programs to variations in test scores" (p. 46). Patton (1986) further describes
systems analysis as looking "at the relationship between program inputs and
outputs" (p. 68). Efficiency is the desired outcome and the model is used to
answer the question, "What combination of inputs and implementation strategies
will most efficiently produce desired outcomes?" (Patton, 1986, p. 68). Other
questions include, "Are the expected effects achieved? What are the most efficient
programs?" (House, 1983, p. 46). Cost-benefit analysis, linear programming, and
computer-based information systems are used to generate answers.
Another model or approach is objectives-based. Ralph Tyler originated
this approach, which is described by its name. Program objectives are written in
terms of "specific student performances that can be reduced to specific student
behaviors" which are "measured by tests, either norm-referenced or criterion-
referenced" (House, 1983, p. 46). The obvious question addressed by this model
is, "Are the students meeting the objectives?" Test scores and measures of
discrepancy are sources of answers to that question. It is possible, but somewhat
dangerous, to use this approach to evaluate teacher productivity. It has been
stressed throughout this text that no single approach should be used to evaluate
anything and as Centra points out elsewhere in this text, there are too many
intervening variables associated with any measure of student learning to use it as
a single, fair indicator of teacher performance.
Question One
Question Two
Question Three
SUMMARY
were presented. The discerning reader will have noticed that there is rarely one
best approach or answer where evaluation is concerned. Some authors, it is true,
will advocate rigid adherence to a single approach, but that perspective is not
shared by the authors of this chapter or the editors of the book. The informed
eclecticism described earlier is more than just a catchy phrase. As evaluation is
integrated into a program and is utilized for decision-making, the questions that
arise will dictate the use of a variety of approaches or models.
REFERENCES
Ayers, J. B., Gephart, W. J., & Clark, P. A. (1988). The accreditation plus
model. Journal of Personnel Evaluation in Education, 1, 335-343.
Herman, J. L., Morris, L. L., & Fitz-Gibbon, C. T. (1987). Evaluator's
handbook. Newbury Park, CA: Sage Publications, Inc.
House, E. R. (1983). Assumptions underlying evaluation models. In G. F.
Madaus, M. S. Scriven, & D. L. Stufflebeam (Eds.), Evaluation
models: Viewpoints on educational and human services
evaluation (pp. 45-64). Hingham, MA: Kluwer Academic Publishers.
Joint Committee on Standards for Educational Evaluation. (1981). Standards
for evaluations of educational programs, projects, and
materials. New York: McGraw-Hill Book Co.
Madaus, G. F., Stufflebeam, D. L., & Scriven, M. S. (1983). Program
evaluation: A historical overview. In G. F. Madaus, M. S. Scriven, & D. L.
Stufflebeam (Eds.), Evaluation models: Viewpoints on educational
and human services evaluation (pp. 3-22). Hingham, MA: Kluwer
Academic Publishers.
National Council for Accreditation of Teacher Education. (1987). Standards,
procedures, and policies for the accreditation of professional
education units. Washington, DC: Author.
Patton, M. Q. (1986). Utilization-focused evaluation (2nd ed.). Beverly
Hills, CA: Sage Publications, Inc.
Stake, R. (1983). Program evaluation, particularly responsive evaluation. In
G. F. Madaus, M. S. Scriven, & D. L. Stufflebeam (Eds.), Evaluation
models: Viewpoints on educational and human services
evaluation (pp. 287-310). Hingham, MA: Kluwer Academic
Publishers.
Stufflebeam, D. L., & Webster, W. J. (1983). An analysis of alternative
approaches to evaluation. In G. F. Madaus, M. S. Scriven, & D. L.
Stufflebeam (Eds.), Evaluation models: Viewpoints on educational
and human services evaluation (pp. 23-43). Hingham, MA:
Kluwer Academic Publishers.
Worthen, B. R., & Sanders, J. R. (1973). Educational evaluation:
Theory and practice. Worthington, OH: Charles A. Jones Publishing
Company.
17
Implementation of Evaluation Results
William L. Rutherford
The University of Texas at Austin
1. The same information may be interpreted and used quite differently by two
parties.
3. The movement from evaluation to acceptance and use of the information can
easily result in conflict rather than change if steps are not taken to prevent
it.
Too often teacher education evaluation activities are conducted by one person
or group and when the work is completed they hand the findings over to another
person or group. Most institutions would deny that the division between
evaluation and implementation is that pronounced but many faculty and
administrators would concur. When this dichotomy occurs it greatly reduces the
probability that evaluation outcomes will be implemented.
1. The collective ideas of all these persons will certainly provide for a broader
knowledge base upon which to make decisions than would the ideas of just a
few persons.
2. When the individuals feel they had a sincere, meaningful opportunity for
input and influence on the evaluation and implementation activities they
will have a better understanding of what transpired and why. With this
understanding there should be few surprises or shocks at findings and
recommendations that emerge.
likely state that they did not have time in one class to fully develop in their
students proficiencies in all aspects of these subjects. Therefore, general
impressions or even quantitative ratings of general statements of proficiency
would have little implementation value. But, if the instructors identified specific
teacher proficiencies they felt they had taught, and if evaluators could determine
how well prepared the teachers felt in those specific areas and why they felt that
way, the faculty would have information upon which they could act in revising
courses.
Implementation Is a Process
Should a Dean decide to make a minor change in the form used by faculty to
report final grades, that is a simple change and might be accomplished by a
printed directive (i.e., a memo). A complex change would be the directive,
"Every methods course must incorporate a practicum component so that students
can apply theory and methods as they learn them." Faculty will have many
uncertainties about their own role in the new process and what is expected of
them. Certainly there will have to be a complete and radical revision of course
and grading procedures to accommodate the expectations of the practicum
component. If time allocations for courses are not increased faculty will be faced
with the irritating task of eliminating content they believe to be important.
Should time allocations be increased without increasing teaching load credit
faculty will surely be unhappy. In addition to these problems there will be the
need to arrange for the space, time, materials, students, transportation, and other
variables necessary for conducting the practica and these arrangements will have
to involve public or private schools. In short, this is a highly complex change.
Research has shown that to effectively implement a complex change such as
this one may take several years even under ideal conditions (Hord, Rutherford,
Huling-Austin, & Hall, 1987; Hall & Hord, 1987; Fullan, 1982). If the change
effort is not properly supported and facilitated, or if individuals are asked to
implement more than one complex innovation at the same time, the rate and
effectiveness of implementation will be reduced. (An innovation is any change
that is introduced into an organization.)
Change is Personal
Stages of Concern
One theory focuses specifically on the stages adults pass through when they
are faced with making a change. The stages of concern in this theory operate
independently of age or other life circumstances. In fact, unlike other theories of
development, this one holds that individuals recycle through the stages of
concern with each new change they experience. The Stages of Concern theory
proposes seven stages a person might pass through with each change effort
(Hord, Rutherford, Huling-Austin, & Hall, 1987; Hall & Hord, 1987; Hall,
George & Rutherford, 1977; Newlove & Hall, 1976). The seven stages and a
brief description of each are shown in Figure 1.
Attempts to force movement may impede the development process and, in turn,
implementation efforts.
It is commonly assumed that people are resistant to change, but the theory
of concerns suggests that there is an explainable (and predictable) reason behind
the seeming reluctance to change, and thus that people are not simply being
cantankerous or uncooperative in a personal effort to thwart change.
For a brief insight into the way concerns may influence individual response
to change, consider the matter of faculty evaluation at Steady State College (a
fictitious institution that was created for this book). There is already a rating
instrument in use for obtaining student evaluations of faculty teaching and a
change is proposed in the design of the instrument. Faculty would surely be
concerned about what kinds of changes were being proposed and why (Stage 1,
Informational). They would also want to know if the modifications in the
instrument would affect in any way decisions to be made relative to their tenure,
promotion and salary (Stage 2, Personal). If the modifications in the instrument
and its use are minor these concerns should not be very intense and could be
facilitated rather easily.
Now suppose that evaluation results suggest a completely new system of
evaluating teaching, one that involves supervisor observations and ratings of
performance. Without a doubt informational (Stage 1) and Personal (Stage 2)
concerns will be quite intense throughout the faculty and responding to these
concerns will require skill, effort and time. Also, Management concerns (Stage
3) will likely become quite intense as faculty try to modify their teaching to
satisfy the requirements of the assessment procedure.
An instrument that is easy to administer and score is available for measuring
Stages of Concerns about an innovation (Hall, George, & Rutherford, 1977).
With this instrument it is easy to collect data that can be applied directly to
implementation. Systematic attention to the personal element in change is
essential and the use of a model such as stages of concern could be the key to
success. Readers are advised to contact the authors for further details about the
instrument.
Too often the following scenario happens: from evaluation, the need for a
change is identified and this need is conveyed to a dean or chairperson who then
presents the problem to those persons who will be expected to make the change.
That group is then given a semi-formal charge to implement the needed changes
or a committee will be formed to generate recommendations for how it should be
2. having patience;
5. being willing and able to collect, analyze, and use data that inform the
implementation process.
Finally, the change facilitator must be given the time, resources and support
necessary to carry out the required responsibilities. It has already been
emphasized that the facilitator must be formally identified and assigned the
responsibility. Once this is done the facilitator must be given the administrative
support and the time needed to carry out the responsibilities. This support and
time allocation must be provided throughout the change process, not just at the
beginning.
Few, if any, institutions can afford a full-time change facilitator. This
means the task will likely be given to someone among the faculty or staff,
someone who already has a full work load. If this person is not chosen
carefully, given some training for the role (if not already trained) and given an
ample allocation of time to do the work of change facilitator, both the facilitator
and the change effort are likely to be less than successful.
Consider the situation where evaluation reveals the need for a better blend of
theory and practice in the preservice program. Based on this information a
faculty planning committee recommends earlier field experiences for all students
and the addition of practicum components to many education courses.
Implementation of this very complex change will involve and influence many
people in significant ways and may require additional resources or reallocation of
resources. For all these reasons such an innovation will require that the chief
administrator give it full support and that such support be evident to all.
To be supportive does not mean the chief administrator must be the primary
change facilitator for the innovation. It does mean, however, that administrators
must make clear and evident their support of the change and that they monitor
closely how implementation progresses. Saying, "Let's do it," appointing a
facilitator, and then leaving implementation to take its course is not being
supportive. If the chief administrator is not interested enough in the innovation
to commit his or her own time and effort it is unlikely faculty will have any
greater commitment.
A note of caution is in order here. While administrative support is
essential, it is not sufficient. For implementation to succeed all the qualities of
effective change must be in place along with the support. Also, support should
not be confused with coercion. It is improbable that an administrator can cause a faculty or
staff to change through the application of power or force. Resistance to and
resentment of the innovation and the administrator will be the most immediate
outcome of such an attempt.
Intend To Succeed
This principle is closely related to the one above. Organizations that have
experienced successful change are more likely to attempt and to be successful
with implementation of the next change. The reverse is also true. The key
implication of this principle is that no implementation effort should be initiated
unless those in key leadership positions have every intention of it succeeding and
they plan to work toward that end. It is unforgivable for leaders to permit the
initiation of a change if they will not support it actively or, even worse, if they
subvert the effort through inaction or negative actions.
A corollary to this implementation principle is the evaluation principle,
"Plan to use it." Money and effort should not be expended on evaluation
activities that are not intended to inform future actions. Granted, some
evaluation activities are undertaken to meet the requirements of state and national
regulatory agencies but these too can be structured to serve the needs and
interests of the individual institution.
Any change that is made in one part of the teacher education "body" will
influence other parts of the "body." A change in admission standards may affect
class size and even some teaching assignments. Introduction of new technology
with the intent of increasing teaching effectiveness may create a need for an
equipment maintenance unit that competes for available dollars. Administrative
reorganization designed to expedite program efforts may result in communication
blocks and create new obstacles to efficient management.
These ripple effects are inevitable, but action can and should be taken to
keep the ripples from becoming waves. This can be done by first taking a broad
view of the proposed change, one that anticipates all possible influences on
individuals and the organization. When this has been done then steps can be
taken to reduce or eliminate negative ripples and maybe even convert them into a
positive force.
Even when initial anticipation is skilled and thorough, once implementation
is underway unanticipated influences are still likely to occur. Most often a
number of small incidents will accumulate until they form what has been termed
a mushroom (after the atomic bomb cloud) by some (Hall & Hord, 1987).
When this occurs the consequences for the implementation effort may be
positive or negative but more often than not they are negative. Change
facilitators must be alert to any suggestion that a mushroom is beginning and
take whatever action they deem necessary to control it before it interferes with
implementation.
EVALUATING IMPLEMENTATION
If change efforts are evaluated at all, they typically are evaluated only in
terms of their outcomes. While assessment of outcomes is something that
should occur, alone it is inadequate and may be misleading. Before evaluating
outcomes two other critical assessments must be made:
The importance of these types of evaluative data and the ways they might be
collected will be discussed in the paragraphs that follow.
In almost every change effort there will be a portion of the intended users
who do not actually use the new program or procedures. (Individuals who are
expected to make a particular change, that is, to use the innovation, and do not
are termed non-users; Hord, Rutherford, Huling-Austin, & Hall, 1987; Hall &
Hord, 1987.) There may be many reasons why individuals do not implement an
innovation as was intended. These can include lack of understanding of what is
expected, inadequate resources, inability to make the change or outright refusal.
These reasons can be very informative when making decisions about how to best
facilitate implementation but the important point for evaluation is that the
presence of these non-users should not be ignored.
To get a sense of the importance of the use/non-use issue, consider again the
hypothetical innovation of adding a practicum to all methods courses at Steady State College.
Assume that three years after the innovation was introduced, a large group of
students (and former students) participated in an evaluation to determine what
influence the new methods courses had made. A comparison was made of their
responses with those of earlier graduates who had not taken practicum methods
courses. The outcomes indicated no difference between the two groups. This
could have led to the conclusion that the practicum component was ineffective.
But additional information showed that during the first year of implementation
only 35 percent of the methods courses included a practicum. During the second
year this number increased to 55 percent and by the end of the third year it had
risen to 75 percent (when acceptable use of an innovation is the criterion for
implementation, these percentages are not atypical for complex innovations).
Given the fact that many of the students classified in the practicum group for the
follow-up evaluation had not even been in a practicum methods course, it would
have been grossly misleading to draw any conclusions about the differences
between practicum and no-practicum students. Simply stated, one cannot
accurately evaluate innovation outcomes without knowing how the innovation
has been used.
One procedure for assessing use of a change or innovation is called Levels of
Use of the Innovation (Hord, Rutherford, Huling-Austin, & Hall, 1987; Hall &
Hord, 1987; Loucks, Newlove & Hall, 1975). Eight discrete levels of use are
proposed. These levels range from "no knowledge of the innovation" through
"effective use of it" to "active searching for innovation improvements." This
procedure can identify those individuals who have not made the expected change
(non-users); those who are using the innovation but in disjointed, uncertain
manner; those who are using it routinely; and those who have gone beyond the
routine to do things to increase the effectiveness of the innovation.
Levels of Use information is most valuable when it is used as a part of
formative evaluation rather than as a summative procedure. Data from Levels of
Use can and should be used throughout the implementation process to inform
and guide the efforts of the change facilitator in personalizing support and
assistance.
Level 0 Non-Use. State in which the user has little or no knowledge of the
innovation, no involvement with the innovation, and is doing nothing toward
becoming involved.
Level II Preparation. State in which the user is preparing for first use
of the innovation.
Level III Mechanical Use. State in which the user focuses most effort
on the short-term, day-to-day use of the innovation with little time for reflection.
Changes in use are made more to meet user needs than client needs. The user is
primarily engaged in a stepwise attempt to master the tasks required to use the
innovation, often resulting in disjointed and superficial use.
Level IV-B Refinement. State in which the user varies the use of the
innovation to increase the impact on clients within the immediate sphere of
influence. Variations are based on knowledge of both short- and long-term
consequences for clients.
As a change effort gets underway there will be grumbles, groans and puzzled
musings about what is going to happen. These utterances will often reflect what
seems to be a negative attitude in the school or college relative to the
impending change. The negative attitude may not be as real or widespread as it
is perceived to be but a typical response from administrators or facilitators is to
set in motion some actions or efforts to modify these attitudes. The underlying
assumption is that until attitudes are changed there will be no implementation of
the innovation.
It is humane and proper to be concerned about the attitudes of individuals
but this concern should not override common sense. In the first place, so much
time and energy might be spent attempting to adjust attitudes that the
implementation effort is damaged. Secondly, it is not necessarily correct to
assume that attitudes have to change before behavior will change. Guskey
(1986) makes a strong case for the position that often attitudes change after the
users try the innovation and have evidence that it works. This does not mean
there should be no concern for user attitudes, it suggests that one effective way
to change attitudes is to have a good innovation based on solid findings from
evaluation and coupled with a good implementation effort.
At the cognitive level there may be few people who would believe in this
assumption, but actual practices suggest considerable acceptance of it.
By now the reader has probably concluded that moving from evaluation to
effective implementation of change in teacher education is not a perfunctory task.
This is an accurate conclusion. Yet, there are no viable alternatives. To avoid
change altogether means there will be no improvements in the teacher education
program. To approach change in a half-hearted way or through a series of short
cuts can create more problems than are solved. By far the best solution is to
determine what is required to effectively implement a change and do it. This
means all who are responsible for the evaluation to implementation process
must communicate and collaborate. The results will be personally and
professionally rewarding for all who are involved.
1. Involve in the evaluation process all persons or groups who will have
any responsibility for implementing findings.
2. Seek evaluation outcomes that address meaningful needs.
3. Seek evaluation outcomes that are useful and practical enough to
implement.
4. Use evaluation to determine changes to be implemented.
5. Use evaluation to determine if the changes were made.
6. Use evaluation to determine what difference these changes made.
REFERENCES
Erikson, E. H. (1959). Identity and the life cycle. Psychological Issues,
1(1), 9-171.
Fullan, M. (1982). The meaning of educational change. New York:
Teachers College Press, Columbia University.
Guskey, T. R. (1986). Staff development and the process of teacher change.
Educational Researcher, 15(5), 5-12.
Hall, G. E., & Hord, S. M. (1987). Change in schools: Facilitating
the process. Albany, NY: State University of New York Press.
Hall, G. E., George, A., & Rutherford, W. L. (1977). Measuring stages of
concern about the innovation: A manual for use of the SoC
questionnaire. Austin: The University of Texas at Austin, Research and
Development Center for Teacher Education (ERIC Document Reproduction
Service No. ED 147342).
Heck, S., Stiegelbauer, S. M., Hall, G. E., & Loucks, S. F. (1981).
Measuring innovation configurations: Procedures and
applications. Austin: Research and Development Center for Teacher
Education, The University of Texas at Austin.
Hord, S. M., Rutherford, W. L., Huling-Austin, L., & Hall, G. E. (1987).
Taking charge of change. Alexandria, VA: Association for
Supervision and Curriculum Development.
Huberman, A. M., & Miles, M. B. (1984). Innovation up close: How
school improvement works. New York: Plenum Press.
Kohlberg, L. (1973). Continuities in childhood and adult moral education
revisited. In P. Baltes and K. W. Schaie (Eds.), Life-span developmental
Joan L. Curcio
University of Florida
Fourteenth Amendment
Section 1983
The federal law under which an institution or an individual administrator
may be sued for violating a student's or faculty member's civil rights in
employing given strategies of an evaluation model is 42 U.S.C., Sec. 1983. It
reads in part:
Another area with potential for violating rights, specifically student rights,
is that of privacy. The privacy of students' educational records is guaranteed
Defamation
1. Are there "property interest" issues involved (where the evaluation model
includes faculty evaluation, in particular)?
There is case law supporting the assertion that there are legal aspects
involved in the process of teacher education evaluation. While it would not be
possible in this brief chapter to include all of the decisions which might shed
light on what legal principles are significant here, there are specific cases--
sometimes landmark cases--which should be examined as indicators of when and
how legal errors can be made as part of an evaluation program.
Testing Issues
decision that employment standards and tests that are not significantly related to
job performance are violative of Title VII of the Civil Rights Act of 1964. The
issue of reliability and validity will be discussed again later in reference to
instruments used to evaluate the performance of students and teachers who are in
the teacher education program.
1. They will, in fact, intervene should the institution exercise its power in an
arbitrary, capricious, or irrational manner.
2. The implied contract between the institution and the student demands that it
act in good faith in dealing with students (if the student meets the terms, he
gets what he seeks) (Olsson v. Board of Higher Education, 1980).
Teacher Dismissal
LIBERTY INTEREST
Even when a property interest is not the issue, the Supreme Court
recognized in Board of Regents of State Colleges v. Roth (1972) that
damaging an educator's reputation or ability to be re-employed as a result of an
employment action could violate his/her Fourteenth Amendment liberty rights
(whether or not a faculty member has tenure). In other words, the institution has
to avoid actions which would "stigmatize" the employee. Of course, not all
defamation by a public official would convert to a deprivation of liberty.
According to Roth, unless an employer stigmatizes an employee by a public
statement that might seriously damage his/her standing and community
associations, or foreclose opportunities for his/her employment, liberty interests
are not affected. On the other hand, if comments are made in private, even if
false, they cannot be the basis of a liberty violation (Bishop v. Wood, 1976).
Very recent decisions have looked upon this liberty interest as "liberty of
occupation." To "fire a worker to the accompaniment of public charges that
make it unlikely that anyone will hire him for a comparable job infringes his
liberty of occupation" (Colaizzi v. Walker, 1987).
There was no offer to prove that other colleges are open to the
plaintiffs. If so, the plaintiffs would nonetheless be injured by the
interruption of their courses of studies in midterm. It is most unlikely
that a public college would accept a student expelled from another
public college of the same state. Indeed, expulsion may well prejudice
the student in completing his education at any other institution.
(Dixon v. Alabama State Board of Education, 1961).
DISCRIMINATORY ISSUES
The Family Educational Rights and Privacy Act of 1974 (FERPA), known
as the Buckley Amendment, has carved out a role for the federal government
which is substantial in regard to student records. The Act's implementing
regulations are extensive and thereby represent a "predominant legal consideration
in dealing with student records" (Kaplin, 1985, p. 358). Essentially, the Act
ensures the confidentiality of students' records and provides that federal funds will
be withheld from an educational institution which:
SUMMARY REMARKS
The activities, processes, and interactions that occur around teacher education
program evaluation are subject to legal restraints. Those legal restraints are
constitutional, statutory (both federal and state), and even institutional. The people
involved with program evaluation and those who support it administratively need
to be aware of the pitfalls and build in processes to prevent uncomfortable and
expensive legal situations. There is no substitute for the assistance and advice of
a good attorney. However, an attorney may be needed less often if the items on the
following checklist are noted and given attention.
DUE PROCESS - means fundamental fairness; there are two aspects to due
process:
INVASION OF PRIVACY - violating the right that someone has to be left
alone, and if he chooses, unnoticed; publicizing someone's private affairs with
which the public has no legitimate concern.
__4. Has the faculty had training in the legal implications of program
evaluation?
__5. Has the faculty had training in the legal implications of student
evaluations?
__20. Are faculty and students' privacy rights and confidentiality preserved?
REFERENCES
Ever since a chance glance at the title of an article caught the eye of one of the
editors some months ago, it seemed that this chapter could have only one
possible title, the one you see here. The article is titled, "Which Way to
Millinocket?" In it, Gideonse (1987) refers to the punchline of a classic example
of Downeast comedy, which reads (sans dialect), "you can't get there from here."
It is true that education has been in trouble, heading along the path which is
clearly marked with that punchline. It is possible to improve education, but
educators must assume the responsibility for directing those improvements or
state legislatures and governing boards will take that responsibility upon
themselves, as has been demonstrated in such states as Texas, Virginia, West
Virginia, and California. Evaluation is a crucial part of the reform effort. We
must begin to implement the findings from our research efforts, particularly the
follow-up studies of teacher education graduates.
For the past eleven months we have collected, edited, written, and discussed
materials which address teacher education evaluation. One of the initial goals of
the project from which the book grew was to create a practical guide that could
be used to evaluate a teacher education program. We believe that we have
succeeded; time and the readers/users will tell us if that is true. One of our
regrets is that we were not able to include all of the excellent materials the
authors sent and that we uncovered in various searches. Rating instruments,
annotations, and some graphics were omitted, but references to the instruments
were provided and the substance of the graphics was put into text. One of our
sources of pride is that the materials contained in this book are current. Indeed,
we received word of one organizational change and a copy of the most recent
edition of a referenced book the day before we sent the pages to the publisher.
That information is included either in this chapter or in the appropriate place
elsewhere in the text. We are not in possession of a crystal ball, but we do want
to make some suggestions and to hazard a prediction at this time.
As we attended national, regional, and state conferences and met with teacher
education faculties from other institutions, it soon became apparent that
accreditation, particularly the need to meet the new National Council for
Accreditation of Teacher Education (NCATE) standards, was among the primary
concerns of educators. As the result of that realization, we decided to emphasize
the standards wherever possible throughout the book. This was not done to the
exclusion of the standards and policies of other organizations. We determined
that in most instances, a program that meets the new NCATE standards is likely
to meet or exceed the standards and policies of other accrediting or approval-
granting agencies or associations. This book, then, is intended to help teacher
education program faculty meet accreditation and other accountability needs.
In Chapter 1, J. T. Sandefur observed that efforts such as this book and the
related activities which are sponsored by the Center for Teacher Education
Evaluation at Tennessee Technological University are providing much needed
information relative to teacher education program evaluation. A statement he did
not make was that five years after the beginning of the current educational reform
movement, much remains to be accomplished before evaluations assume their
rightful place in teacher education programs. We will make suggestions relative
to that later in this chapter. In Chapter 2 we stated that institutions of higher
education need to develop and implement evaluations of their teacher education
programs, that these evaluations should encompass both formative and
summative techniques, and that they must be implemented at reasonable costs.
Those chapters, along with the description of the Accreditation Plus Model,
provide the background, or setting, for the next twelve chapters.
Pankratz, Loadman, and Hearn in Chapters 4, 5, and 12, discuss the
knowledge base, quality controls, and governance. These areas are the basis of a
teacher education program; without strong systems for governance and quality
controls and the presence of a knowledge base it would be extremely difficult to
ascertain how effective a program is. The remaining nine chapters address
separate aspects of teacher education programs, ranging from admissions criteria
through follow-up studies of program graduates and from resources to faculty
evaluation.
In the last section of the book, Berney and Gephart describe, briefly, a few
of the evaluation models or approaches that could be used to implement the
"Plus" of Accreditation Plus. Rutherford provides excellent suggestions for
implementing the results of an evaluation, and Curcio discusses legal aspects of
evaluation that teacher educators should be aware of. The emphasis of this book
has been on the practical aspects of evaluating teacher education programs. The
recommendations it contains were made by scholars who have been involved in
conducting research and development work in the areas they address. At this
point, let us turn to the future. Where do we, members of the educational
community, go from here?
accountable. All fifty states have examined their programs and most have made
changes in the requirements for the preparation and certification or licensure of
teachers. Many times, however, these changes were made without a thorough
investigation, through evaluation, of the effects of the proposed changes on
students in grades K through 12 now and in the years to come.
The on-going efforts to develop a national system of licensure for teachers,
the work of researchers at several universities on improved means for evaluating
teachers in clinical settings, and the increased emphasis on general education in
the curriculum are pointing the way to the future. The policies which are
developed as an outgrowth of this work will direct teacher education in years to
come. We urge that the groups continue to monitor their own activities and that
they apply the Joint Committee Standards to evaluate them for utility,
feasibility, propriety, and accuracy. The Joint Committee recently released a
related set of standards for evaluating personnel. These should also be studied
and applied to faculty evaluations.
Additional Details
Epilogue
REFERENCE