Article

Improving 21st-century teaching skills: The key to effective 21st-century learners

Research in Comparative & International Education, 2019, Vol. 14(1) 99–117
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1745499919829214
journals.sagepub.com/home/rci

Sharon Kim
New York University, NY, USA

Mahjabeen Raza
New York University, NY, USA

Edward Seidman
New York University, NY, USA

Abstract
The development of competencies known as 21st-century skills is garnering increasing attention as
a means of improving teacher instructional quality. However, a key challenge in bringing about desired
improvements lies in the lack of context-specific understanding of teaching practices and meaningful ways
of supporting teacher professional development. This paper focuses on the need to measure the social
quality of teaching processes in a contextualized manner. We do so by highlighting the efforts made to
develop and measure teacher practices and classroom processes using the Teacher Instructional Practices
and Processes System© (TIPPS) in three different contexts: Uganda (secondary), India (primary), and Ghana
(pre-school). By examining how such a tool can be used for teacher feedback, reflective practice, and
continuous improvement, the hope is to pave the way toward enhanced 21st-century teacher skills and, in
turn, 21st-century learners.

Keywords
Classroom observation, classroom quality, low- and middle-income countries, 21st-century skills, reflective
practice, teacher professional development

Corresponding author:
Sharon Kim, New York University, New York, NY 10003, USA.
Email: sharon.kim@nyu.edu

Classroom instructional quality (and its relationship to learning outcomes) can serve as a critical lever
for educational change. However, there is much still to be learned about what actually goes on inside
classrooms, particularly in low- and middle-income countries (LMICs). Though an abundance of
observational instruments now exist, most have not undergone rigorous methodological development,
and even fewer have been used across different contexts, cultures, and interventions (Bruns,
2011; Crouch, 2008). Many of these observational instruments have taken the form of checklists or
time on task measures, which have traditionally been more popular for their cost-effectiveness and
ease of use for intervention studies. Nevertheless, a recent comparative study of observational instru-
ments by Bruns et al. (2016) states that time on task measures are too coarse to be used for teacher
feedback or performance evaluation. Furthermore, time on task measures are unable to distinguish
key aspects of the 21st-century classroom environment such as student engagement, the effective use
of instructional strategies, or the emotional factors that support child development (Seidman et al.,
2018). Thus, it follows that there is a need to turn away from checklists and time on task measures.
Global interest in how teaching practices and classroom processes affect student learning out-
comes and their psychosocial development is growing – and with good reason. Instructional qual-
ity has proven to be more strongly associated with child learning than structural aspects of schools
in both Western (Pianta et al., 2009) and developing countries (Chavan and Yoshikawa, 2013;
Patrinos et al., 2013; Yoshikawa and Kabay, 2015). However, the breadth of skills required for quality
student learning, and concomitantly quality teaching, calls for essential competencies
beyond literacy and numeracy, otherwise known as 21st-century skills.
The 21st-century skillset is generally understood to encompass a range of competencies, includ-
ing critical thinking, problem solving, creativity, meta-cognition, communication, digital and tech-
nological literacy, civic responsibility, and global awareness (for a review of frameworks, see
Dede, 2010). And nowhere is the development of such competencies more important than in developing
country contexts, where the persistent lack of improvement in learning outcomes suggests
that the task of improving instructional quality is urgent. A challenge in bringing about the
desired improvements lies in the lack of context-specific understanding of teaching practices as
well as meaningful ways of supporting teachers in their professional development (Seidman et al.,
2018; UNESCO, 2016; Wolf et al., 2018). In other words, how can we improve teachers' 21st-century
skills to help produce 21st-century learners?
Performance feedback has been demonstrated to be a powerful tool for improving practice in
a wide array of arenas from individual behavior to organizational performance (Butler and Winne,
1995). In recent years, there has been ample demonstration of the power of feedback in teaching
and other human services (Allen et al., 2011; Becker et al., 2013; Cappella et al., 2012; Glisson
et al., 2006; Smith and Akiva, 2008). This paper focuses on: (a) the need to measure the social
quality of teaching processes in a contextualized manner; (b) the efforts that we have made to
develop and measure, with reliability and concurrent validity, teacher practices and classroom
processes in secondary, primary, and pre-school classrooms in Uganda, India, and Ghana, respec-
tively, with the Teacher Instructional Practices and Processes System© (TIPPS; Seidman et al.,
2013); (c) how these tools can be fed back to teachers and trainees to facilitate reflective practice
and continuous improvement; and (d) how such professional supports can lead to enhanced 21st-
century teacher skills and, in turn, 21st-century learners.

The social processes (or how) of quality teaching


Classroom observations are being increasingly used in LMICs to improve education quality
through information about current teacher/classroom practices or measuring change in practices
over time (UNESCO, 2016). Yet in order to fully understand how we can best help teachers, we
need to take a step back and learn to regard teachers as learners and to ensure that the learning we
want to see in our children is taking place with our teachers. Learning as an active process is rooted
in the educational philosophy of social constructivism (Vygotsky, 1962), which established the
belief that knowledge itself is situated within a social context; an individual’s ability to learn is
regarded as a series of social processes that are inextricably shaped and influenced by his or her
context. Though the perspective is rooted in and remains a predominantly Western ideology, it has
taken hold in many countries around the world, and constructivist beliefs for education remain
widely relevant for teachers across the globe (OECD, 2009). Nevertheless, constructivist
perspectives should not be assumed to be ubiquitous in education. For example, in cultures where verbal
exchange is not the primary means through which knowledge is conveyed (see Treviño, 2006), we
must be mindful of how such cultural variation and nuances affect ways of learning. Differences in
sociocultural practices could dictate how children (or in this case teachers) may better learn through
practices such as observation, listening, or sharing responsibilities rather than verbalization or
actions (Rogoff, 2003; Treviño, 2006).
Social constructivism places greater emphasis on context and highlights the important role of
culture, since knowledge derived from social processes also exists within cultures (McMahon,
1997; Schunk, 2000). Culture strongly influences not only which patterns of social processes
can emerge within a context but also how they emerge (Rogoff, 2003). This perspective
calls us to think more carefully, not only about social processes (e.g. classroom interactions) and
the knowledge that is generated through those processes, but also about how highly dependent
those processes are on the cultures and context in which they reside. For example, a study of a
teacher in-service program in South Africa (Brodie et al., 2002) calls attention to how situational
constraints, particularly in low-resource contexts, can heavily hinder the ways in which teachers
can develop alternative practices that are more learner-centered; the authors draw a critical distinc-
tion between the “form” (i.e. techniques such as questioning or group work) versus the “substance”
(i.e. content such as engaging with learners’ ideas and interests) of learner-centered teaching.
Based upon this work and her own in Tanzania, Vavrus (2009) set forth the notion of a contingent
constructivist pedagogy, which considers the pedagogical spectrum between formalism and con-
structivism and calls for the adaptation of pedagogy to the material conditions, local traditions, and
the cultural politics of a context. Considerations such as these have large implications for how
social processes can best be measured.
Additionally, the notion of teachers as learners calls us to define what it is we feel that teachers
need to know. An increasingly globalized and complex world has propelled a movement toward a
vast array of skills that fall under the label of “21st century.” Most frameworks focus on various
types of higher-order skills such as complex thinking, communication, collaboration, and creativ-
ity (also known commonly as the 4Cs) (e.g. Dede, 2010; Saavedra and Opfer, 2012; Soulé and
Warrick, 2015). These skills are increasingly being recognized as the gold standard for student
abilities, as well as requirements to meet the demands for success in work and life (Binkley et al.,
2012). Yet the practice of delivering knowledge to students via a transmission process (e.g. lecture,
dictation) remains dominant in large portions of the world (OECD, 2009). Therefore, if what stu-
dents are to learn needs to go beyond rote, then there needs to be a concomitant shift in teacher
pedagogy to match. Twenty-first-century teachers need to know not only how to use a practice but
also when to use a practice to accomplish their goals with students in varying contexts (Darling-
Hammond, 2006). This requires teachers to have a deeper knowledge of how to address a diverse
array of learners and more refined diagnostic abilities to inform their decisions (Darling-Hammond,
2006). The ability to communicate in such a complex environment requires constant information
flow and adjustment (Levy and Murnane, 2004), and a skilled teacher should be adroit at regulating
the flow of classroom discussion as it ebbs and flows (Dede, 2010).
Social settings frameworks have historically emphasized the importance of looking at teachers as
facilitators of an individual’s learning experience (Cohen et al., 2003; Pianta and Hamre, 2009;
Tseng and Seidman, 2007). Learning and development rests within the daily interactions and
experiences that take place in the classroom (Seidman and Tseng, 2011; Tseng and Seidman, 2007;
Wolf et al., 2018) in addition to being a product of the culture where the processes reside (Stigler
et al., 2000). In any given classroom, the core processes and practices are working concurrently.
However, from the perspective of classroom observations, there is a need to be able to make clear
distinctions between concurrent behaviors because doing so will better enable us to discern how
these behaviors relate to key dimensions of the classroom environment that support student learning.
Attention is directed toward classroom processes and practices, thereby shifting the singular
focus from what is being taught to how it is being taught. This is no easy task, but this strategy
is both conceptually and programmatically aligned to support rigorous evaluation.

How we can measure quality teaching processes


Quantitative approaches to assessing classrooms have the ability to be more systematic and can
broadly be broken down into two major categories: checklists and rating/categorization scales.
Checklists typically require the classroom observer to mark the presence or absence of the item in
the classroom. They are very low inference methods and can catalogue a range of desired con-
structs from artifacts in the classroom to the practices of the teacher. Rating or categorization scales
are often higher inference methods, capable of focusing on the quality of specific behaviors as well
as on their frequency of occurrence in the classroom (Waxman et al., 2004). As assessments of
quality, rating or categorization scales have much greater potential to be fed back to teachers to
change their practices.
Instruments to assess teacher practices and classroom processes in LMICs have primarily
focused on measuring how students use their time in the classroom, largely due to the framework
put forth by the Global Campaign for Education (2002), which set the learner at the center of all
education quality endeavors. Earlier instruments, such as the Stallings (Stallings, 1978), were
based on Carroll's (1963) model of school-based learning, which focused on the importance of
a learner's time engaged in learning as well as his or her learning rate. Though this modality can
still provide a general picture of classroom alignment with policy and expectations, specific
behaviors run the risk of being underreported with a snapshot method unless intervals
are quite frequent (UNESCO, 2016).
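The underreporting risk can be illustrated with a small simulation. This is a purely hypothetical sketch, not the Stallings protocol itself: the lesson length, burst duration, and snapshot spacings below are invented for illustration only.

```python
import random

def simulate_snapshot(lesson_min=40, behavior_frac=0.10, snapshots=4,
                      trials=2000, seed=7):
    """Estimate how often equally spaced snapshots entirely miss a behavior
    occupying `behavior_frac` of the lesson as one contiguous burst."""
    rng = random.Random(seed)
    burst = behavior_frac * lesson_min                    # minutes of behavior
    # Snapshot times at the midpoints of equal intervals: e.g. 5, 15, 25, 35.
    times = [lesson_min * (i + 0.5) / snapshots for i in range(snapshots)]
    missed = 0
    for _ in range(trials):
        start = rng.uniform(0, lesson_min - burst)        # burst placed at random
        if not any(start <= t < start + burst for t in times):
            missed += 1                                   # no snapshot saw it
    return missed / trials

# Coarse vs frequent sampling of the same 4-minute burst in a 40-minute lesson:
coarse = simulate_snapshot(snapshots=4)      # one snapshot every 10 minutes
frequent = simulate_snapshot(snapshots=20)   # one snapshot every 2 minutes
```

Under these invented parameters, four snapshots at 10-minute intervals miss the 4-minute burst more than half the time, while 2-minute intervals never miss it, since the burst is longer than the sampling gap.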
At one time, time on task was the prevailing method behind classroom observation, but more
recent findings question this mode of measurement. Even as they support it, Benavot and Gad
(2004: 293) note that, “researchers disagree over the magnitude of this [time engaged in learning
and learning rate] relationship, the relative importance of various intervening factors, and the
nature of the socio-economic contexts in which the relationship is more or less salient.” Therefore,
though defined as systematic observation instruments, time and frequency measurement instru-
ments, such as the Stallings Snapshot, are far too narrow in scope to meet the need of measuring
the quality use of classroom time.1 Furthermore, time on task measures cannot capture teacher
competencies around “soft skills” such as emotional support nor can they differentiate levels (i.e.
quality) of instruction (Bruns et al., 2016).
Additionally, many internationally used instruments are intervention-specific. The main draw-
back to tailoring measures to evaluate the specific facets of an intervention is that the measure
becomes insensitive to experimental contrast, and use and comparison of data in other evaluations
and contexts are not possible. Only a few instruments – primarily the various iterations of the
Stallings Observation System (SOS) – have been used across different contexts, cultures, and inter-
ventions (Bruns, 2011; Crouch, 2008). The issue is further compounded by the fact that most
instruments in use internationally do not provide a clear conceptual framework that serves as a
foundation for instrument validity. The development of a high-quality instrument for LMICs would
require further psychometric development. Furthermore, while professional development and ped-
agogy need to be based on research and existing best practices, it is also of the highest importance
that the research that informs it continues to be localized, adapted, and refined to the day-to-day
realities of the teacher’s context (Burns and Lawrie, 2015; Vavrus, 2002).

Behavioral observation
Generally speaking, there is relatively little knowledge from LMICs about what happens in class-
rooms. A review of classroom research in developing countries supported by the World Bank
(Venäläinen, 2008) continues to echo the recommendations of Schaffer et al. (1994) and goes on to
outline the need for improved instruments and methodologies to gauge classroom quality (e.g.
student engagement, effective instructional strategies) and other elements of a classroom that can-
not be gleaned from the purview of a “snapshot.” Similarly, the need for greater quality teacher
professional development has elicited a call for “a focus on the how of teaching” (Burns and
Lawrie, 2015: 43), including a focus on more structured, facilitated opportunities for teachers to
learn, a better understanding of the contexts in which they work, and the use of improved data to
determine what really works (Burns and Lawrie, 2015).
Westbrook et al. (2013) suggest that one way to improve the knowledge gap on classrooms is
through systematic behavioral observations to record teaching practices. Evidence from Western
countries suggests that teachers and school leaders trust classroom observations more than other
value-added measures (Harris and Herrington, 2015). This is perhaps because what they see in
classrooms directly relates to what teachers do in practice and provides a more concrete basis
for the information teachers need in order to improve (Harris and Herrington, 2015).
A body of knowledge on effective pedagogical practices and classroom processes has begun to
emerge (Seidman, 2012). An existing Western research base (e.g. Danielson, 2011; Kane and Staiger,
2012; Mashburn et al., 2014; Pianta, 2011) and emerging international research base (e.g. Araujo et al.,
2016; Hu et al., 2016; Leyva et al., 2015) are beginning to rigorously assess the educational quality in
the classroom. In some cases, classroom process quality has even been successfully linked to student
learning outcomes (Allen et al., 2013; Leyva et al., 2015; Pianta et al., 2008a; Wolf et al., 2018).

Contextualization
As the value of understanding process quality in classrooms has become recognized more broadly,
behavioral observation instruments are now being used more regularly to evaluate classrooms –
particularly the Classroom Assessment Scoring System (CLASS; Pianta et al., 2008b). CLASS
research now has a global base in both developed countries (Bell et al., 2012; Gettinger et al., 2011;
Pianta and Hamre, 2010; Tayler et al., 2013) and increasingly in LMIC contexts (Araujo et al.,
2016; Hu et al., 2016; Leyva et al., 2015). An empirical base for the CLASS across all these various
contexts has established a three (domain) factor structure of Emotional Support, Classroom
Organization, and Instructional Support (Hamre et al., 2013).
However, it is unclear whether the cross-country similarities are a product of the tool itself
(Pastori and Pagani, 2017) and of a de facto predefinition of quality imposed by the instrument
measuring it (Vandenbroeck and Peeters, 2014). Some critical questions have been raised
about whether standards-based instruments can be applied out of the context from which they
originated without serious consideration for their cultural consistency and ecological validity
(Pastori and Pagani, 2017). Considerations need to be made for underlying cultural complexities
and the fact that values may manifest or be implemented differently across locales, though some
may be similar or common (Pastori and Pagani, 2017; Rogoff, 2003).
Some emerging empirical evidence now suggests some psychometric inconsistencies with the
CLASS three-factor structure outside of the US (see Pakarinen et al., 2010 and von Suchodoletz
et al., 2014). Furthermore, in low resource contexts such as sub-Saharan Africa (SSA), the CLASS
tool appears never to have been used. Wolf et al. (2018) suggest that based on some piloting exer-
cises of the CLASS for an Early Childhood Education study in Ghana, the tool was not feasible due
to the cost implications and potential need for significant adaptations. Much like what was sug-
gested by Pastori and Pagani (2017), measuring classroom quality in a context such as SSA may
require a contextually developed and anchored tool (e.g. allowing for incorporation of culturally
specific examples) that is designed with the intent of adaptability and ease of use.

Teacher Instructional Practices and Processes System (TIPPS)


In any given classroom, the core processes and practices are working concurrently. However, clear
distinctions need to be made between concurrent behaviors – isolating specific practices and pro-
cesses can better enable us to discern how particular behaviors in the classroom environment serve
to support student learning. Looking at classroom processes and practices shifts the focus
from exclusively what is being taught to include how it is being taught. In other words, both
“form” and “substance” of teaching (Brodie et al., 2002) are being considered to determine quality
of classroom processes. Assessment of classroom process quality also necessitates a standard, in
the form of a specific, conceptually based lens with which observers can view the classroom.
Through such a lens, observers can be trained to observe more objectively, spot biases, and mini-
mize subjectivity through clearly defined dimensional guides for rating.
The TIPPS is a classroom observation instrument designed with such considerations in mind
and specifically to assess process quality in LMIC contexts. Moreover, TIPPS was developed to
look at teaching processes in a granular, nuanced, and culturally relevant manner so that informa-
tion could be fed back to teachers to improve their performance as well as student academic and
social-emotional outcomes. Classroom observers are trained to connect what they observe, using
concrete behavioral evidence, to indicators under each dimension of the tool. Observers are guided
through common observational biases (e.g. sympathy scoring or appearance bias) so that they can
better recognize them should such tendencies arise.
The nature and format of the instrument is meant to ease the process and reduce the cost of
administration. We strove to keep the dimensions aligned with general classroom characteristics
(rather than subject-specific) as well as to be succinct, focusing on only several key domains of
classroom practices and processes. The tool includes items around classroom management, per-
sonal learning support, and cognitive activation of students (Praetorius et al., 2014) as well as items
that address the importance of a teacher’s efforts to stimulate student interests (Patall et al., 2010)
and promote inclusion, for example, gender parity (Stromquist, 2007), in the classroom.
In Table 1, a sample item from the training manual – scaffolding – is presented. First, the
concept/dimension is defined; next, indicators are noted. The instrument's "structured alternative
format” was adapted from Susan Harter’s Perceived Competence Scale for children (1982) with
the goal of facilitating more objective and reliable ratings (Seidman et al., 2018). Conceptually and
visually, an observer is presented with a dimension and is required to make a dichotomous decision
of whether, based upon observed behaviors, that particular dimension is absent or present in the
classroom (Column A or Column B). Next, the observer is asked to make a second dichotomous
rating, determining if the first dichotomous rating is “somewhat accurate” or “very accurate.”
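The two-step rating can be made concrete with a small function. The mapping onto a numeric 1–4 scale below is our assumption for illustration only; the manual itself records just the two dichotomous judgments (Column A vs Column B, then "somewhat accurate" vs "very accurate").

```python
def tipps_score(dimension_present: bool, very_accurate: bool) -> int:
    """Map the two dichotomous TIPPS judgments onto a single 4-point scale.

    The 1-4 coding is an illustrative assumption: 1 = dimension very
    accurately absent (Column A), 4 = very accurately present (Column B).
    """
    if dimension_present:                     # Column B: dimension observed
        return 4 if very_accurate else 3
    return 1 if very_accurate else 2          # Column A: dimension absent

# Example: an observer sees clear scaffolding -> Column B, "Very Accurate".
score = tipps_score(dimension_present=True, very_accurate=True)
```

The structured alternative format thus yields an ordered quality rating from two simple yes/no decisions, which is what makes it easier to train observers on than a direct 4-point judgment.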
Finally, we felt an instrument that was meant to capture social processes needed to also allow
for the integration of culturally unique processes (Hughes et al., 1993; Vavrus, 2009). The
manifestations of "common" or genotypic processes would vary in each unique context (i.e.
expressed phenotypically). Essentially, the way in which we structured the tool – particularly the
concrete examples for each dimension – would allow for contextual uniqueness or specificity.

Table 1.  TIPPS structured format and example.

Concept: Teacher uses scaffolding to promote student learning and understanding of subject
matter.

Importance of Concept: A teacher can provide a step-by-step framework to help students understand and
learn the subject matter; this process is known as scaffolding. Scaffolding can work in various ways – either by
building on a base concept or by breaking down a concept into smaller parts. In either case, the teacher leads the
learner through to the larger concept by providing prompts, hints, and assistance.

Indicators of scaffolding may include:
•  teacher describes or models the thought process behind a concept;
•  teacher elaborates on a part of the lesson that students are having difficulty with, and connects it to the
larger concept;
•  teacher gives examples, hints, and assistance appropriate to level of student understanding;
•  teacher elaborates on knowledge student already possesses (vertical scaffolding);
•  teacher shares routinized, predictable activities with students to provide structure for development
(sequential scaffolding).

Column A: Teacher does not use scaffolding to provide a step-by-step framework to help students
learn and understand subject matter.

Column A, Very Accurate: Teacher writes out double-digit multiplication on the board. "Sarah,
come to the blackboard and do problem one from the book," says the teacher. When Sarah is stuck,
teacher points to a part of the example and asks her to "do that step correctly."

Column A, Somewhat Accurate: Teacher writes out double-digit multiplication on the board.
"Sarah, come to the blackboard and do problem one from the book," says the teacher. When Sarah
is stuck, teacher repeats the same explanation of all the steps again.

Column B: Teacher uses scaffolding to provide a step-by-step framework to help students learn
and understand subject matter.

Column B, Somewhat Accurate: Teacher writes out double-digit multiplication on the board. "I
have broken it into three steps," teacher says, "and each step must be completed in order to get to
the answer; let's do a few problems on the board. Sarah, come to the blackboard and do problem
one from the book." When Sarah is stuck, teacher points out which step she is stuck in and explains
the step again.

Column B, Very Accurate: Teacher writes out double-digit multiplication on the board. "I have
broken it into three steps," teacher says, "and each step must be completed in order to get to the
answer. Sarah, come to the blackboard and do problem one from the book." When Sarah is stuck,
teacher points out which step she is stuck in and explains the step again. Once Sarah completes her
problem, teacher says, "this is a tricky step and many get stuck on it; let's do another problem so
that we can all review how this step is done," and focuses on the step that is difficult.
Preliminary development of the instrument has taken place across multiple LMICs (Democratic
Republic of the Congo, India, Tanzania, and Uganda) and at different developmental levels (pri-
mary, secondary, and early childhood) in a series of iterative endeavors. Dimensions could be
added or deleted as appropriate to the context. In the following section, we present empirical data
on the reliability and validity of the TIPPS in three different levels of schooling (secondary, pri-
mary, and early childhood) and countries (Uganda, India, and Ghana).

TIPPS secondary (Uganda)


Initial reliability and validity of the TIPPS instrument were established in Uganda (Seidman et al.,
2018). In this first empirical study, data were collected from 197 secondary schools (i.e. 737
classrooms) across various regions of the country. Pairs of locally recruited observers were trained to
observe live classrooms using the initial version of TIPPS and were asked to match their observa-
tions to a manual that outlines 18 behavioral indicators known as “TIPPS dimensions.” These
dimensions were constructed to typify a core set of teacher practices and classroom processes,
based upon a review of the most commonly used classroom observation instruments and pedagogi-
cal literature. The commonalities we found could be a result of educational reforms in many devel-
oping countries that have long been driven by notions of modernity (Inkeles, 1975). However, the
overlap in core domains is more likely due to the similarity in the conceptual understanding of
“teacher quality” (UNESCO, 2006), especially based on the type of professional development that
teachers often receive driven largely by Western constructivist beliefs for education across the
globe (OECD, 2009).
In spite of the theoretical and conceptual relevance of the dimensions, initial on-the-ground
training in Uganda on TIPPS was met with some difficulties. Local enumerators found some
of the terminology unfamiliar and difficult to internalize. To remedy this issue, the tool required
some revisions based on qualitative work. Unfamiliar concepts were discussed to gauge whether
such concepts did in fact exist within the Ugandan context; if they did exist, the appropriate word-
ing was decided upon as a group to find the most relevant language in Ugandan English. The
aforementioned process was critical to the contextualization of the instrument, and this process has
been repeated for all subsequent versions since.
Classroom-level TIPPS dimensions were to be matched with 8th grade biology, English, and
mathematics achievement scores. The results were promising in terms of the instrument’s coher-
ence, rater-reliability, and concurrent validity (Seidman et al., 2018). Thirteen dimensions revealed
sufficient variance and good to high levels of rater-reliability (range, median). Factor analysis
revealed a three-factor structure: “Instructional Strategies,” which includes student-centered learn-
ing such as the encouragement of student questions and ideas; “Sensitive & Connected Teaching,”
which refers to a teacher's ability to connect lessons to everyday life and their sensitivity in
responding to students' needs; and "Deeper Learning," which characterizes a teacher's ability to break
down concepts to help facilitate student learning.
The factors revealed significant and intriguing subject-specific associations with biology,
English, and mathematics scores, leading us to consider the possibility of a differential specificity
hypothesis: student performance in an academic discipline could be differentially related to the
successful enactment of particular practices within that discipline (Seidman et al., 2018). For
example, a teacher’s ability to connect what students are learning to everyday life manifested a
trend toward better performance in biology only. Additionally, individual items of the tool also had
significant, meaningful associations to learning outcomes. For example, in English, a teacher’s use
of specific feedback related directly to student performance in English.

TIPPS primary (India)


We developed and piloted a primary classroom version of the TIPPS as a precursor to developing
a feedback loop for Kaivalya Education Foundation (KEF) to improve practice for teachers. All of
the original dimensions from the TIPPS secondary version were mutually deemed relevant by New
York University and KEF teams at the primary level. However, the concrete examples for each
dimension were adapted to be more culturally and developmentally appropriate for primary class-
rooms. KEF’s design team was tasked with vetting the examples as well as translating the manual
into Hindi. A full translation, back-translation protocol was undertaken, and in the end, a side-by-
side English-Hindi manual was produced.
Data were collected from 256 classrooms throughout Rajasthan, India. Of these 256 classrooms,
148 were sampled to reflect variation by (a) region of the country, (b) rural/urban/tribal, (c) number
of years of KEF programming received by school/principal, and (d) number of students in the
classroom. Observers were locally recruited KEF staff (project managers from different KEF teams
across India), who were trained to observe video-taped classrooms using the TIPPS. Low inter-
rater reliability did not permit higher-order analyses. Nevertheless, the study did provide impor-
tant lessons for the development and implementation of the tool (see Jayaram et al., in press).

TIPPS ECE (Ghana)


Unlike the primary version of TIPPS, adaptation for an early childhood version required more than
translation and a few adjustments in examples. Instruction in ECE classrooms required different
teacher competencies and instructional foci. For example, a key principle of learning in ECE that
was absent from the primary and secondary versions was play, which provides young children the opportunity to
acquire physical, cognitive, and social skills (Britto and Limlingan, 2012). Thus, the development
of the ECE version of the TIPPS focused on theoretical constructs and key quality indicators in the
classroom that support early child development, including the use of structured free-play in the
classroom, ECE-focused instructional practices, social-emotional support, and classroom management
and environment (Wolf et al., 2018).
Reasoning and problem solving are among the most commonly included elements of the otherwise
variously defined term "critical thinking" (Lai, 2011), and research on young children's
thinking shows that it is possible for early childhood educators to promote children’s critical think-
ing skills through their classroom practice (Whittaker, 2014). Nevertheless, the teaching practices
toward critical thinking need to manifest in a developmentally appropriate manner in order to
facilitate effective learning.
Table 2 provides a comparative example of how a dimension (critical thinking) was adapted and
made developmentally appropriate for the ECE classroom. The dimension is largely similar for
primary and secondary. However, the wording of the dimension, while still maintaining an
emphasis on reasoning and problem solving, is slightly altered. This is because in the case of an
early childhood classroom, a teacher may need to do more to scaffold and lead a child through
their reflection and thinking processes than an older student, who may be able to reflect and
evaluate with less prompting and more independence. Therefore, the dimension intentionally
specifies "language," and the importance of concept emphasizes the practice of encouraging
students to think aloud. Consequently, the indicators also focus far less on the concept of
discussion, which is developmentally more appropriate for older children in primary and
secondary, and the concrete examples serve to ground the theoretical list of processes surrounding
the item. Again, the concepts of form (critical thinking) and substance (how the child is engaged
in critical thinking) play critical roles in the construction of the TIPPS dimension.

Table 2.  TIPPS complex thinking dimension by developmental level.

Early childhood
Dimension: Teacher encourages children to use language to reason and problem solve.
Importance of Concept: As children learn, it is important for them to process that information and
reflect on it. Through use of expressive language and activities, a teacher will help children by
posing questions, comparing and contrasting, or discussing the material that they are learning. By
encouraging children to think for themselves and to share their reasoning aloud, a teacher is
training children to process information and draw their own conclusions.
Indicators:
• Helping children verbalize their ideas and opinions, helping with word choice and sentence
formation
• Using open-ended questioning to elicit children's thoughts (to problem solve) in addition to
testing knowledge of facts through closed-ended questions
• Reframing information to help children to problem solve
• Providing time to reflect on work and activities (i.e. summarizing activities for the day)
Examples: At the end of the school day, the teacher asks the children, "Before we go home, who
wants to share what they enjoyed today?" As children give their answers, the teacher asks, "Why
did you enjoy it?" to prompt children to think about their reasoning. The teacher chooses several
children so that many may contribute.

Primary
Dimension: Teacher uses instructional strategies to aid students in complex thinking or problem
solving.
Importance of Concept: By using different forms of reasoning, a teacher helps expand the
students' understanding of the lesson being taught. Discussion, too, is an important instructional
approach because it enables the teacher to further student learning beyond lecturing and
memorization. Rather than pointing out an answer and limiting thinking around a topic, the
teacher can provide ways for students to figure out an answer, promoting exploration and critical
thinking.
Indicators:
• Teacher use of techniques such as comparisons and contrasts, story-telling and/or problem
solving activities
• Teacher use of questions or prompts to invite students to analyze, reflect, or problem solve
• Teacher use of various tactics to facilitate discussion (teacher-to-student or student-to-student)
• Teacher enables students to engage in discussion directly with one another
Examples: Teacher asks students, "In the morning, your mom asks you to please get an umbrella
before you go outside. What does that mean about the weather outside?" The teacher allows time
for several students to respond. A student says, "because it is raining outside." Another student
responds, "but it could also be very hot." "Good point, it could be hot or rainy," says the teacher,
"let's think of the uses of an umbrella," and invites students to participate.

Secondary
Dimension: Teacher uses instructional strategies to aid students in complex thinking or problem
solving.
Importance of Concept: By using different forms of reasoning such as contrasts and comparisons,
a teacher also helps expand the students' understanding of the lesson being taught. Discussion,
too, is an important instructional approach because it enables the teacher to further student
learning beyond lecturing and memorization. Rather than pointing out an answer, the teacher
provides ways for students to figure out an answer.
Indicators:
• Teacher use of techniques such as comparisons and contrasts, story-telling and/or problem
solving activities
• Teacher use of questions to invite students to problem solve
• Teacher use of various tactics to facilitate discussion
• Teacher enables students to engage in discussion directly with one another
• Rote learning is used for new and unfamiliar concepts, not for memorizing answers to
questions
Examples: Teacher asks students to formulate their own definitions of an unfamiliar word from a
text; he/she then helps them compare the different answers to come up with a definition everyone
agrees upon.
In this empirical study, data were collected from 317 kindergarten classrooms across 240 primary
schools in Ghana, drawn from six districts in the Greater Accra Region. A set of locally recruited
observers were trained to observe classrooms using the TIPPS. Factor analysis again revealed a
three-factor structure, but one distinct from the secondary version: Facilitating Deeper Learning
(FDL), which includes instructional support strategies used to encourage learning such as scaffolding and providing high-quality
feedback; Supporting Student Expression (SSE), which includes considering student ideas and
interests, as well as the development of higher-order thinking skills such as reasoning and problem
solving, connecting concepts to students’ lives outside of the classroom, and language modeling;
and Emotional Support and Behavior Management (ESBM), which includes concepts related to
both student emotional support (e.g. sensitivity and responsiveness, tone of voice) and positive
behavior management strategies (e.g. providing a consistent routine) (Wolf et al., 2018).
Two factors, Supporting Student Expression and Emotional Support and Behavior Management,
predicted classroom end-of-school-year academic outcomes. One factor, Supporting Student
Expression, also predicted classroom end-of-school-year social-emotional outcomes. The
findings reveal that the TIPPS was successful in identifying some critical elements of process qual-
ity in Ghanaian pre-primary classrooms. Furthermore, a teacher’s ability to support student expres-
sion during instruction is significant for improved classroom outcomes (Wolf et al., 2018).

Summary
TIPPS observers have been reliably trained and calibrated, with a median AC1 statistic (Gwet,
2002) of .86 in Uganda and an intraclass correlation coefficient (ICC) of 71.1% (variance shared
across raters) in Ghana. While Ghanaian observers were trained to use the Early Childhood version
of TIPPS, Ugandan observers were trained to use the Secondary version of TIPPS that overlaps
with the primary version of TIPPS utilized in India. The training process itself was similar across
developmental levels and included concept familiarity, bias awareness, and practice to hone
observers' behavioral observation techniques. Rater reliability in multiple contexts is a testament to
the fact that the TIPPS has been able to capture some common dimensions of classroom quality
across contexts (as inherent to the context) or, at the very least, that the instrument has
expressed dimensions of classroom quality in a culturally appropriate manner (as external but
learned constructs to the context). Yet in the case of India, we see that content and linguistic
adaptation of the tool was not enough, as described below.
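For readers unfamiliar with the agreement statistics reported above, Gwet's AC1 corrects raw observer agreement for chance agreement based on raters' overall propensity to use each category (Gwet, 2002). A minimal sketch of the computation for two raters assigning categorical codes — the function name and example ratings below are illustrative only, not drawn from our study data:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement for two raters who each
    assign one categorical code to every observed classroom segment."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    q = len(categories)
    # Observed agreement: proportion of segments coded identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: based on the average propensity pi_k with which
    # raters use each category k, pe = sum(pi_k * (1 - pi_k)) / (q - 1).
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum((counts[k] / (2 * n)) * (1 - counts[k] / (2 * n))
             for k in categories) / (q - 1)
    return (pa - pe) / (1 - pe)

# Two raters coding eight segments on a hypothetical 4-point scale.
print(gwet_ac1([4, 3, 3, 2, 4, 1, 3, 4],
               [4, 3, 2, 2, 4, 1, 3, 4]))  # ≈ 0.84
```

An AC1 near 1 indicates agreement well beyond what the raters' overall category usage would produce by chance; the median of .86 reported for Uganda reflects this kind of consistently high calibration.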
With the exception of three concepts (of the 19), the Indian observer cohort had low inter-rater
reliability. We attribute this difference in rater reliability to the fact that the Indian cohort of observers
had a great deal of expertise in a particular curriculum and pedagogy. An individual's quality of
thinking around a particular topic is inextricably linked to the context in which that thinking must
be done (Bailin et al., 1999). Part of that context includes not only background knowledge but also
dispositions and situations (Bailin et al., 1999; Halpern, 2013; Han and Brown, 2013). Therefore,
knowledge and experience in a given area of study or practice can be a significant determinant of
the ability to think critically in that area (Bailin et al., 1999) and even a potential impediment to
critical thinking (Nosich, 2012).
The pedagogical expertise of KEF staff often emerged as a barrier to enhancing objectivity in their
classroom observations. Their adherence to "practices that work" as well as "supporting teachers to do
better" was counterproductive to our efforts to sharpen their observational lens and apply TIPPS as an
effective heuristic for objective observation; they could not focus on "what was happening"; rather,
they focused on trying to "fix what is happening" in the classroom. Also, knowledge of the curricu-
lum being taught in the classroom hindered observer objectivity, as some trainees pointed out that the
teacher in the practice video was not teaching the curriculum as instructed. A final counter-productive
factor was observers' familiarity with the teachers being observed. During practice videos, many
observers indicated that they knew the teachers on screen and made assumptions
about the classroom that were not supported by the footage they were observing. While
these well-intentioned critiques indicate a high level of commitment to KEF, to their profession, and
to making a difference, they did not make for objective observers.
These initial studies provide insights into the previously unknown mechanisms of interventions
and everyday practice. However, the studies also underscore what still needs to be done to fully
realize the goals of an observational tool to assess pedagogical practices and classroom processes
in LMICs that (a) is easy to use; (b) has cross-cultural and developmental reach; and (c) has poten-
tial as a feedback tool to improve teacher and student performance. Methods with some
equivalence across countries could dramatically reduce the costs of evaluation and be the cata-
lyst for more regular and rigorous cross-national analyses; an instrument that allows for the
integration of culturally unique processes is especially desirable for its crucial contri-
butions to quality teacher feedback. Achieving this fully may require further ethno-
graphic approaches and contingent constructivist pedagogy as described by Frances Vavrus (2009),
not only for the form and substance of the tool itself but also, as we have learned through experi-
ence, for the training process. We will continue to gather more information on the cultural equiva-
lence of behavioral observation methods and persist in refining the various versions of TIPPS
based on the data we have thus far. With knowledge in hand from a tool that has been contextual-
ized in several cultural milieus and developmental contexts, we now turn to how it can be used via
feedback and reflective practice to improve 21st-century teaching skills.

Developing 21st-century teacher skills


The education delivery system has a substantial impact on the way in which 21st-century skills
develop in learners. Pedagogy, curriculum, school rules and climate, assessments, and benchmark-
ing skill acquisition are all key factors in the way 21st-century skills develop and are monitored.
Nevertheless, the classroom is the primary environment where the aforementioned factors culmi-
nate to bring about knowledge acquisition and skills development. Furthermore, the classroom is the
space where learners observe the modeling of these skills by their teachers and can practice them-
selves. Therefore, it is equally important to prepare and train teachers in not only the acquisition of
21st-century skills but also the dissemination of these skills. Measuring the classroom processes
and teacher practices that are enabling and supporting the development of 21st-century skills in the
classroom can serve as an important first step.

The role of feedback and reflective practice


We posit feedback to the teacher on his/her own teaching performance as a key deliverable of
observation instruments. Thus, our focus shifts from earlier efforts that used classroom
observation as a means of understanding and highlighting facets of teaching quality to that
of identifying key contributors to teacher feedback in a cycle of continuous professional develop-
ment (Arbour et al., 2015; Yoshikawa et al., 2015). However, for teachers to improve their prac-
tices, the manner in which professional feedback occurs needs to change. Sustained, meaningful
changes in classroom processes require feedback that is purposed for the continuous
improvement and ongoing support of teachers. There is promising evidence to endorse coaching
and feedback interventions, with an emphasis on actual performance or practices, as effective
means to alter setting-level regularities (Seidman, 2012). Moreover, the tools used to measure
complex social settings such as the classroom continue to require further development and contex-
tualization as well as validation. For classroom assessment in particular, an effective tool would
need to be sufficiently granular and nuanced (Pianta, 2011).
Critically, classroom observations must focus not on what training the teacher may have received
but rather on what the teacher does in class (Burns and Lawrie, 2015), observed in a culturally
appropriate, contextualized manner. Receiving low-stakes, high-support feedback, such as from obser-
vation, allows the teacher to reflect on his or her in-class performance and empowers him/
her to make changes. In addition, an observation instrument must deliver on two core aspirations –
clarity and granularity. The need is to identify both the strong and the weak elements of the class-
room in a manner that is both comprehensible and actionable for the teacher. Generic feedback is
unhelpful for improving a teacher's core practices because it does not provide actionable
information. Granular feedback allows the teacher to reflect on past performance as well as to support
action for future change; this degree of specificity has proven to be the most valuable informa-
tion to feed back to teachers (see Allen et al., 2011; Jones et al., 2013; Rivers et al., 2013).
Individualized support for the teacher is also necessary to support a successful educational
intervention or a professional development system. An observational instrument can become a key
component in that effort by tying the teacher’s own growth and improvement to student learning
and development. Using feedback of information on social regularities in a supportive and empow-
ering context stimulates reflective practice and lasting and effective behavior change (Seidman,
2012). Long-term, in-school integration of classroom observations, not just one-off or short-term
use, is a key factor in an instrument's utility for teachers. Teachers can chart their own progress,
not through data to which they cannot relate but through tangible evidence derived from live or
video classroom observation, coaching, and feedback on performance by an experienced mentor.
Teachers are empowered to be active participants in their professional development as well as in
the refinement of their pedagogy to best serve the students (e.g. 4Rs: Brown et al., 2010; Jones
et al., 2011; My Teaching Partner: Allen et al., 2011; RULER: Rivers et al., 2013) – two aspects
currently underserved in SSA’s teacher education and continuing professional development pro-
grams (Hardman et al., 2011).
Mentoring and reflective practice are not new concepts in LMICs,2 and reflective practice, in
particular, emphasizes the ability to recognize and monitor one’s own thinking, understanding, and
knowledge about teaching (Parsons and Stephenson, 2005). Here too, constructivist ideologies
undergird and reinforce the importance of the dynamic, active process of learning for teachers.
Throughout this process, it is fundamental that information gained from the acts of monitoring and
recognition be mobilized for deep rather than rote thinking (Rodgers, 2002). Cursory practices,
such as feedback from behavior checklists, should be eschewed in favor of approaches that foster
critical thinking and rigorous analysis of information. This level of depth is often not easily
achieved through intervention, and reflection that is forced may not cultivate true reflective prac-
tice because such exercises can lack critical thinking and trigger social desirability bias (Hobbs,
2007; Orland-Barak, 2005).
As pointed out previously, true reflective practice can be a powerful medium to affect behavior
change. But oftentimes, teachers need assistance, guidance and/or support structures in order to
improve their practice. This type of teacher support frequently comes in the form of teacher men-
torship and coaching, which is generally understood to be intensive, ongoing, responsive, and col-
laborative job-support (Ackland, 1991). Western education research suggests that teachers who are
provided with specific feedback and opportunities to practice these changes in the classroom are
able to increase the effectiveness of their teaching (Allen et al., 2011; Jones et al., 2013; Rivers
et al., 2013). However, there are worthy examples from LMIC contexts as well.
The Kaivalya Education Foundation has pioneered an approach to education leadership and behavior
change management in India that works to foster meaning, learning, joy, and pride in every stakeholder
in the system by focusing on aspects of self-motivation and engagement. Their work with teachers
over the last decade has revealed several critical insights – more specifically that teachers want to
access tangible support to improve their practice, broaden their perspectives of education, and
further develop their ability to learn new topics. Teachers have also expressed struggles with their
limited capacity for self-reflective practice and an inability to advance themselves or support the
development of others.
To support teachers, KEF deploys Gandhi Fellows, community-embedded education improvement
workers, for one-on-one engagement in each school. The Fellows' training has included
implementation and use of the TIPPS, generating tangible metrics and insights about teachers'
instructional practices and classroom processes (Jayaram et al., in press). Data are used by a Fellow
to provide teacher feedback, focusing on areas of strength and improvement. The combination of a Fellow's
support and TIPPS feedback has been used to empower teachers to self-assess their abilities in a
low-stakes environment. Simultaneously, through KEF’s Principal Leadership Development
Program (PLDP), a Fellow works with the principal of the school to envision the way they can use
the TIPPS to devise an improvement plan for their school. The principal’s coaching and mentoring
support becomes an integral, structural part of the school organization.

Conclusion
As self-awareness, collaboration, and critical thinking continue to be emphasized as key
competencies for sustainable development (see Rieckmann, 2017), it is likely that
teacher training interventions based on Western constructivist beliefs will continue to pervade the
education development landscape in LMICs. Nevertheless, as we have contended here, there is
still a great need to proceed with greater consideration for the respective contexts in which teachers
are evaluated and receive their professional development. In order to create 21st-century learners,
we must focus on teachers' 21st-century skills and re-conceptualize how we can evaluate and train
teachers. To achieve this, we have invoked constructivist understandings of what goes on in class-
rooms and, in particular, teachers’ practices. Beyond common dimensions of practices, we sought
to discover and construct dimensions that were expressed in contextually and culturally meaning-
ful ways. In this vein, we were able to identify reliable dimensions with concurrent validity that are
capable of being fed back to teachers.
Even so, the process of developing TIPPS was not without its challenges as we navigated various
developmental and cultural contexts. Some issues, such as translation and language adjustment
for content validity, are perhaps simpler and more easily solved. Yet, translating the dimensions of the
tool for observers and practitioners is a bare minimum first step. Many of the greater challenges have
been in getting observers and practitioners to understand how what they assert they understand on
paper actually manifests as actions in the classroom. For example, what does it look like for a teacher
to incorporate student interests into a lesson? Is it simply allowing a student to answer questions or
give examples? Or is it something deeper than that? An observer, practitioner, or even a teacher who
has never experienced this in a classroom themselves will likely have a very difficult time identifying
the behavioral manifestations of such a dimension, let alone employing the practice.
As was mentioned earlier, the classroom is the space where learners observe the modeling of skills
by their teachers. If teachers do not know how to identify teaching practices, they certainly will not
know how to model them. This is a critical issue not only for observational training but also
for feedback and professional development of teachers and a fundamental reason why identifying
culturally relevant manifestations of teaching practices is necessary. Having observers, practitioners,
and/or teachers study classrooms from their own cultures (whether through live observation or videos)
has proven extremely important for successful training, rater reliability, and the overall relevance of the tool.
In addition, we are still attempting to determine whether certain dimensions of the tool are relevant in
various contexts at all. For example, observing inequitable treatment of students (or favoritism) in
the classroom has been generally difficult (i.e. largely invariant) in a few contexts thus far. We have yet to
determine whether this is due to the particular contexts or because such dynamics are much more
nuanced than can be discerned by an independent third party. A dimension such as cooperative
learning is also rarely observed, yet its importance to the learning process (as
emphasized by many interventions) seems to merit its continued presence in the tool for now.
In spite of the challenges we have outlined, with the right tool in hand to support the process,
we feel the key to improving teacher practices is granular and clear feedback. This needs to
be delivered on a frequent and regular basis to foster self-reflection and continuous improvement.
Teaching skills such as critical thinking require that teachers be educated in a manner that is reflec-
tive of that process – through professional development that engages ongoing reflection and con-
tinuous learning (Han and Brown, 2013). Only with successful accomplishment of such 21st-century
teaching skills will we be able to enhance the 21st-century learning of students in LMICs.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publica-
tion of this article: The projects described in this manuscript were supported in part by grants from the
Economic and Social Research Council (ESRC) [grant number ES/M004740/1]; the UBS Optimus Foundation,
World Bank Strategic Impact Evaluation Fund (SIEF), and NYU Abu Dhabi Research Institute; and a con-
tract from the Kaivalya Education Foundation.

Notes
1. Other examples include: Stallings Five Minute Interaction (FMI), Special Strategies Observation System
(SSOS), QAIT Model, TELL Classroom Observation Protocol (I-TELL), and the Virgilio Teacher
Behavior Inventory (VTBI).
2. For example, Teacher Learning Circles (TLCs) in the Democratic Republic of Congo serve to help
teachers adopt innovative techniques for teaching and to create supportive environments, all through
the provision of subject-based content knowledge and instructional practices as well as enhancements
to teacher motivation and overall well-being (Frisoli, 2013; IRC, 2011). Coordinating Centre Tutors
(CCTs) in Uganda are responsible for providing training and support in various areas that they identify in
consultation with head teachers and teachers, also serving to complement school inspectors in assessing
performance targets (Kayabwe et al., 2014).

References
Ackland R (1991) A review of the peer coaching literature. Journal of Staff Development 12(1): 22–27.
Allen J, Gregory A, Mikami A, et al. (2013) Observations of effective teacher–student interactions in second-
ary school classrooms: Predicting student achievement with the classroom assessment scoring system—
secondary. School Psychology Review 42(1): 76.
Allen JP, Pianta RC, Gregory A, et al. (2011) An interaction-based approach to enhancing secondary school
instruction and student achievement. Science 333(6045): 1034–1037.
Araujo MC, Carneiro P, Cruz-Aguayo Y, et al. (2016) Teacher quality and learning outcomes in kindergarten.
The Quarterly Journal of Economics 131(3): 1415–1453.
Arbour M, Yoshikawa H, Atwood S, et al. (2015) Quasi-experimental study of a learning collaborative to
improve public preschool quality and children’s language outcomes in Chile. BMJ Quality & Safety
24(11): 727–727.
Bailin S, Case R, Coombs JR, et al. (1999) Common misconceptions of critical thinking. Journal of Curriculum
Studies 31(3): 269–283.
Becker KD, Bradshaw CP, Domitrovich C, et al. (2013) Coaching teachers to improve implementation of the
good behavior game. Administration and Policy in Mental Health and Mental Health Services Research
40(6): 482–493.
Bell CA, Gitomer DH, McCaffrey DF, et al. (2012) An argument approach to observation protocol validity.
Educational Assessment 17(2–3): 62–87.
Benavot A and Gad L (2004) Actual instructional time in African primary schools: Factors that reduce school
quality in developing countries. Prospects 34(3): 291–310.
Binkley M, Erstad O, Herman J, et al. (2012) Defining twenty-first century skills. In: Griffin P, McGaw B and
Care E (eds) Assessment and Teaching of 21st Century Skills. Dordrecht: Springer, pp.17–66.
Britto PR and Limlingan MC (2012) School Readiness and Transitions. A Companion to the Child Friendly
Schools Manual. New York: Unicef.
Brodie K, Lelliott A and Davis H (2002) Forms and substance in learner-centered teaching: Teachers’ take-up
from an in-service programme in South Africa. Teaching and Teacher Education 18(5): 541–559.
Brown JL, Jones SM, LaRusso MD, et al. (2010) Improving classroom quality: Teacher influences and exper-
imental impacts of the 4rs program. Journal of Educational Psychology 102(1): 153.
Bruns B (2011) Building better teachers in the Caribbean. World Bank Regional Learning Event: Improving
Teaching and Learning Outcomes in the English-speaking Caribbean Countries with ICT. Bridgetown,
Barbados, 13–16 April.
Bruns B, De Gregorio S and Taut S (2016) Measures of effective teaching in developing countries. Working
Paper, 16. Oxford: Research on Improving Systems of Education (RISE).
Burns M and Lawrie J (eds) (2015) Where it’s needed most: Quality professional development for all teach-
ers. Inter-Agency Network for Education in Emergencies. New York: INEE.
Butler DL and Winne PH (1995) Feedback and self-regulated learning: A theoretical synthesis. Review of
Educational Research 65(3): 245–281.
Cappella E, Hamre BK, Kim HY, et al. (2012) Teacher consultation and coaching within mental health
practice: Classroom and child effects in urban elementary schools. Journal of Consulting and Clinical
Psychology 80(4): 597–610.
Carroll JB (1963) A model of school learning. Teachers College Record 64(8): 723–733.
Chavan M and Yoshikawa H (2013) The future of our children: Lifelong, multi-generational learning for sus-
tainable development. Thematic Group 4 on Early Childhood Development, Education and Transition to
Work. Sustainable Development Solutions Network.
Cohen DK, Raudenbush SW and Ball DL (2003) Resources, instruction, and research. Educational Evaluation
and Policy Analysis 25(2): 119–142.
Crouch L (2008) The snapshot of school management effectiveness: Report on pilot applications. Prepared
for USAID under the EdData II project, 20.
Danielson C (2011) Enhancing professional practice: A framework for teaching. N.p.: ASCD.
Darling-Hammond L (2006) Constructing 21st-century teacher education. Journal of Teacher Education
57(3): 300–314.
Dede C (2010) Comparing frameworks for 21st century skills. In: Bellanca J and Brandt R (eds) 21st Century
Skills: Rethinking How Students Learn. Bloomington: Solution Tree Press, pp.51–76.
Frisoli PSJ (2013) Teachers’ experiences of professional development in (post) crisis Katanga province,
southeastern Democratic Republic of Congo: A case study of teacher learning circles. EdD Thesis,
UMASS Amherst, USA.
Gettinger M, Schienebeck C, Seigel S, et al. (2011) Assessment of classroom environments. In: Bray MA
and Kehle TJ (eds) The Oxford Handbook of School Psychology. New York: Oxford University Press,
pp.265–270.
Glisson C, Dukes D and Green P (2006) The effects of the ARC organizational intervention on caseworker
turnover, climate, and culture in children’s service systems. Child Abuse & Neglect 30(8): 855–880.
Global Campaign for Education (2002) A Quality Education for All: Priority Actions for Governments,
Donors and Civil Society. Brussels: Global Campaign for Education.
Gwet K (2002) Kappa statistic is not satisfactory for assessing the extent of agreement between raters.
Statistical Methods for Inter-rater Reliability Assessment, 1.
Halpern DF (2013) Thought and Knowledge: An Introduction to Critical Thinking. New York: Psychology
Press.
Hamre BK, Pianta RC, Downer JT, et al. (2013) Teaching through interactions: Testing a developmental
framework of teacher effectiveness in over 4,000 classrooms. The Elementary School Journal 113(4):
461–487.
Han HS and Brown ET (2013) Effects of critical thinking intervention for early childhood teacher candidates.
The Teacher Educator 48(2): 110–127.
Hardman F, Ackers J, Abrishamian N, et al. (2011) Developing a systemic approach to teacher education
in sub-Saharan Africa: Emerging lessons from Kenya, Tanzania and Uganda. Compare: A Journal of
Comparative and International Education 41(5): 669–683.
Harris D and Herrington C (2015) Value added meets the schools: The effects of using test-based teacher
evaluation on the work of teachers and leaders [Special issue]. Educational Researcher 44(2): 71–76.
Harter S (1982) The perceived competence scale for children. Child Development 53: 87–97.
Hobbs V (2007) Faking it or hating it: Can reflective practice be forced? Reflective Practice 8(3): 405–417.
Hu BY, Fan X, Gu C, et al. (2016) Applicability of the classroom assessment scoring system in Chinese pre-
schools based on psychometric evidence. Early Education and Development 27(5): 714–734.
Hughes D, Seidman E and Williams N (1993) Cultural phenomena and the research enterprise: Toward a
culturally anchored methodology. American Journal of Community Psychology 21(6): 687–703.
Inkeles A (1975) Becoming modern: Individual change in six developing countries. Ethos 3(2): 323–342.
IRC (2011) Creating healing classrooms: A multimedia teacher training resource.
Jayaram M, Raza M, Sharma V, et al. (in press) People, not numbers: Using data to humanize and strengthen
teacher support systems in India. New York, NY, USA: Department of Applied Psychology, New York
University.
Jones SM, Bouffard SM and Weissbourd R (2013) Educators’ social and emotional skills vital to learning.
Phi Delta Kappan 94(8): 62–65.
Jones SM, Brown JL and Lawrence Aber J (2011) Two-year impacts of a universal school-based social-
emotional and literacy intervention: An experiment in translational developmental research. Child
Development 82(2): 533–554.
Kane TJ and Staiger DO (2012) Gathering feedback for teaching: Combining high-quality observations with
student surveys and achievement gains. Research Paper. MET Project. Bill & Melinda Gates Foundation.
Kayabwe S, Asiimwe W and Nkaada D (2014) Successful Decentralization: The Roles and Challenges of
DEOs in Uganda. Paris: International Institute for Educational Planning.
Lai ER (2011) Critical thinking: A literature review. Pearson Research Reports 640–641.
Levy F and Murnane RJ (2004) Education and the changing job market. Educational Leadership 62(2): 80.
Leyva D, Weiland C, Barata M, et al. (2015) Teacher–child interactions in Chile and their associations with
prekindergarten outcomes. Child Development 86(3): 781–799.
Mashburn AJ, Downer JT, Rivers SE, et al. (2014) Improving the power of an efficacy study of a social and
emotional learning program: Application of generalizability theory to the measurement of classroom-
level outcomes. Prevention Science 15(2): 146–155.
McMahon M (1997) Social constructivism and the World Wide Web: A paradigm for learning. ASCILITE
conference, Perth, Australia.
Nosich GM (2012) Learning to Think Things Through: A Guide to Critical Thinking across the Curriculum.
Boston: Pearson.
OECD (2009) Creating effective teaching and learning environments: First results from TALIS. Organisation
for Economic Co-operation and Development.
Orland-Barak L (2005) Portfolios as evidence of reflective practice: What remains ‘untold’. Educational
Research 47(1): 25–44.
Pakarinen E, Lerkkanen M-K, Poikkeus A-M, et al. (2010) A validation of the classroom assessment scoring
system in Finnish kindergartens. Early Education and Development 21(1): 95–124.
Parsons M and Stephenson M (2005) Developing reflective practice in student teachers: Collaboration and
critical partnerships. Teachers and Teaching: Theory and Practice 11(1): 95–116.
Pastori G and Pagani V (2017) Is validation always valid? Cross-cultural complexities of standard-based instru-
ments migrating out of their context. European Early Childhood Education Research Journal 25(5): 682–697.
Patall EA, Cooper H and Wynn SR (2010) The effectiveness and relative importance of choice in the class-
room. Journal of Educational Psychology 102(4): 896.
Patrinos HA, Velez E and Wang CY (2013) Framework for the reform of education systems and planning for
quality. Policy Research Working Paper 6701. Washington, DC: World Bank.
Pianta RC (2011) Teaching Children Well: New Evidence-Based Approaches to Teacher Professional
Development and Training. Washington DC: Center for American Progress.
Pianta RC, Barnett WS, Burchinal M, et al. (2009) The effects of preschool education: What we know, how
public policy is or is not aligned with the evidence base, and what we need to know. Psychological
Science in the Public Interest 10(2): 49–88.
Pianta RC, Belsky J, Vandergrift N, et al. (2008a) Classroom effects on children’s achievement trajectories in
elementary school. American Educational Research Journal 45(2): 365–397.
Pianta RC and Hamre BK (2009) Conceptualization, measurement, and improvement of classroom processes:
Standardized observation can leverage capacity. Educational Researcher 38(2): 109–119.
Pianta RC and Hamre BK (2010) Classroom environments and developmental processes: Conceptualization
and measurement. In: Meece J and Eccles J (eds) Handbook of Research on Schools, Schooling and
Human Development. Abingdon: Routledge, pp.43–59.
Pianta RC, La Paro KM and Hamre BK (2008b) Classroom Assessment Scoring System™: Manual K-3.
Boston: Paul H Brookes.
Praetorius A-K, Pauli C, Reusser K, et al. (2014) One lesson is all you need? Stability of instructional quality
across lessons. Learning and Instruction 31(June): 2–12.
Rieckmann M (2017) Education for Sustainable Development Goals: Learning Objectives. Paris: UNESCO.
Rivers SE, Brackett MA, Reyes MR, et al. (2013) Improving the social and emotional climate of classrooms:
A clustered randomized controlled trial testing the RULER approach. Prevention Science 14(1): 77–87.
Rodgers C (2002) Defining reflection: Another look at John Dewey and reflective thinking. Teachers College
Record 104(4): 842–866.
Rogoff B (2003) The Cultural Nature of Human Development. Oxford: Oxford University Press.
Saavedra AR and Opfer VD (2012) Teaching and learning 21st century skills. Phi Delta Kappan 94(2): 8–13.
Schaffer E, Nesselrodt P and Stringfield S (1994) The contributions of classroom observation to school effec-
tiveness research. Advances in School Effectiveness Research and Practice 133–150.
Schunk DH (2000) Learning Theories: An Educational Perspective. Upper Saddle River, NJ: Prentice-Hall.
Seidman E (2012) An emerging action science of social settings. American Journal of Community Psychology
50(1–2): 1–16.
Seidman E, Kim S, Raza M, et al. (2018) Assessment of pedagogical practices and processes in low and mid-
dle income countries: Findings from secondary school classrooms in Uganda. Teaching and Teacher
Education 71: 283–296.
Seidman E, Raza M, Kim S, et al. (2013) Teacher Instructional Practices & Processes System–TIPPS:
Manual and Scoring System. New York: New York University.
Seidman E and Tseng V (2011) Changing social settings: A framework for action. In: Empowering Settings
and Voices for Social Change. Oxford: Oxford University Press.
Smith C and Akiva T (2008) Quality accountability: Improving fidelity of broad developmentally focused
interventions. In: Shinn M and Yoshikawa H (eds) Toward Positive Youth Development: Transforming
Schools and Community Programs. Oxford Scholarship Online, pp.192–212.
Soulé H and Warrick T (2015) Defining 21st century readiness for all students: What we know and how to get
there. Psychology of Aesthetics, Creativity, and the Arts 9(2): 178.
Stallings JA (1978) The development of the contextual observation system. Annual Meeting of the American
Educational Research Association, Ontario, Canada, March 27–31.
Stigler JW, Gallimore R and Hiebert J (2000) Using video surveys to compare classrooms and teaching across
cultures: Examples and lessons from the TIMSS video studies. Educational Psychologist 35(2): 87–100.
Stromquist NP (2007) The gender socialization process in schools: A cross-national comparison. Background
paper prepared for the Education for All Global Monitoring Report 2008, Education for All by 2015: Will
We Make It?
Tayler C, Ishimine K, Cloney D, et al. (2013) The quality of early childhood education and care services in
Australia. Australasian Journal of Early Childhood 38(2): 13.
Treviño Villarreal E (2006) Evaluación del aprendizaje de los estudiantes indígenas en América Latina.
Desafíos de medición e interpretación en contextos de diversidad cultural y desigualdad social. Revista
Mexicana de Investigación Educativa 11(28): 225–268.
Tseng V and Seidman E (2007) A systems framework for understanding social settings. American Journal of
Community Psychology 39(3–4): 217–228.
UNESCO (2006) Teachers and Educational Quality: Monitoring Global Needs for 2015. Paris: UNESCO
Institute for Statistics.
UNESCO (2016) Measures of Quality through Classroom Observation for the Sustainable Development
Goals: Lessons from Low-and-Middle-Income Countries. Paris: UNESCO.
Vandenbroeck M and Peeters J (2014) Democratic experimentation in early childhood education. In: Biesta
G, De Bie M and Wildenmeersch D (eds) Civic Learning, Democratic Citizenship and the Public Sphere.
Dordrecht: Springer, pp.151–165.
Vavrus F (2009) The cultural politics of constructivist pedagogies: Teacher education reform in the United
Republic of Tanzania. International Journal of Educational Development 29(3): 303–311.
Venäläinen R (2008) What Do We Know About Instructional Time Use in Mali? Assessing the Suitability of
the Classroom Observation Snapshot Instrument for Use in Developing Countries. Washington, DC:
World Bank.
von Suchodoletz A, Fäsche A, Gunzenhauser C, et al. (2014) A typical morning in preschool: Observations of
teacher–child interactions in German preschools. Early Childhood Research Quarterly 29(4): 509–519.
Vygotsky LS (1962) Thought and Language. Cambridge, MA: MIT Press.
Waxman HC, Tharp RG and Hilberg RS (2004) Observational Research in US Classrooms: New Approaches
for Understanding Cultural and Linguistic Diversity. Cambridge: Cambridge University Press.
Westbrook J, Durrani N, Brown R, et al. (2013) Pedagogy, Curriculum, Teaching Practices and Teacher
Education in Developing Countries: A Rigorous Literature Review. London, UK: DFID/Centre for
International Education, University of Sussex.
Whittaker JV (2014) Good thinking! Fostering young children’s reasoning and problem solving. Young
Children 69(2): 80–89.
Wolf S, Raza M, Kim S, et al. (2018) Measuring and predicting process quality in Ghanaian pre-primary
classrooms using the Teacher Instructional Practices and Processes System (TIPPS). Early Childhood
Research Quarterly 45: 18–30.
Yoshikawa H and Kabay S (2015) The evidence base on early childhood care and education in global con-
texts. Education for All Global Monitoring Report 2015.
Yoshikawa H, Leyva D, Snow CE, et al. (2015) Experimental impacts of a teacher professional development pro-
gram in Chile on preschool classroom quality and child outcomes. Developmental Psychology 51(3): 309.

Author biographies
Sharon Kim, MA, is a doctoral student in the Psychology & Social Intervention Program and Co-Director of
the TIPPS project at New York University. Her research interests include how cultural contexts shape the
factors that influence a quality learning environment, as well as how they affect the implementation quality
of interventions.
Mahjabeen Raza, MA, is the Co-Director of the Teacher Instructional Practices and Processes System (TIPPS)
Project at New York University. Her research focuses on improving measurement quality in international
development, and creating meaningful professional development programming for teachers and principals.
Edward Seidman, PhD, is a Professor of Applied Psychology and Director of the Psychology & Social
Intervention Doctoral Program at New York University. His current research interests focus on understanding
and improving classroom, school, and other social settings, especially in developing nations.
