
The Journal of Special Education

http://sed.sagepub.com

Classroom-Based Research in the Field of Emotional and Behavioral Disorders: Methodological Issues and Future Research Directions
Maureen A. Conroy, Janine P. Stichter, Ann Daunic, and Todd Haydon
J Spec Educ 2008; 41; 209. DOI: 10.1177/0022466907310369
The online version of this article can be found at: http://sed.sagepub.com/cgi/content/abstract/41/4/209

Published by: Hammill Institute on Disabilities



Downloaded from http://sed.sagepub.com by on February 17, 2009

Classroom-Based Research in the Field of Emotional and Behavioral Disorders

Methodological Issues and Future Research Directions
Maureen A. Conroy
Virginia Commonwealth University

The Journal of Special Education, Volume 41, Number 4, Winter 2008, pp. 209-222. © 2008 Hammill Institute on Disabilities. DOI: 10.1177/0022466907310369. http://journalofspecialeducation.sagepub.com, hosted at http://online.sagepub.com

Janine P. Stichter
University of Missouri-Columbia

Ann Daunic and Todd Haydon

University of Florida
Classrooms serving students with or at risk for emotional and behavioral disorders (EBD) are complex environments that include multiple interactions, such as those between (a) students and teachers, (b) students and peers, and (c) temporally distant or concurrent classroom-setting factors and subsequent behavioral episodes. As a result, the scientific processes and methods used to investigate the nature of these interactions are often just as varied and complex. The purpose of this article is to review and discuss the extent to which research methods and practices evident in current classroom-based studies measure and predict these relationships accurately. To this end, the authors present an overview of common research methodology and related measurement strategies and some considerations for conducting research using these methods in classrooms serving students with or at risk for EBD.

Keywords: emotional and behavioral disorders; measurement; research; student-teacher interactions

Classrooms are complex environments that include a host of dynamic, intersecting variables, such as classroom-setting factors (e.g., classroom arrangement), instructional strategies (e.g., use of scaffolding), and individual student factors (e.g., ability, skill level), with the overall goal of producing positive student outcomes. Needless to say, capturing how these variables interconnect and the relative influence they have on student outcomes is difficult. Current research in the field of emotional and behavioral disorders (EBD) has addressed some of these factors, but much of the methodology has failed to deal adequately with the complexity of these relationships while maintaining sufficient scientific rigor (Conroy & Stichter, 2005). The national emphasis on scientifically based practices, in combination with a critical need for interventions that are socially valid, necessitates increasingly multifaceted, rigorous methods, both in our current work and as we consider future research directions. In this article, we discuss methodological issues related to classroom-based research in the field of EBD. Specifically, the purpose of this article is twofold:

(a) to provide an overview of the current methods used in classroom-based research about students with or at risk for EBD, including a discussion of the limitations and strategies that expand the potential and increase the rigor of that methodology, and (b) to suggest future research directions, with the goal of contributing to a discussion of experimental methodology that will ultimately advance the field of EBD. For the purpose of this article, classroom-based research was defined as investigations of environmental variables (e.g., specific teacher behaviors, instructional components including planned interventions, physical arrangements) in relation to student behaviors (e.g., problem behaviors, time on task, academic achievement; cf. Brophy & Good, 1986; Gunter, Hummel, & Conroy, 1998; Stichter, Lewis, Johnson, & Trussell, 2004). Our discussion encompasses students at risk for or identified as EBD who are served in either general or special education classrooms; thus, it includes research aimed at the primary, secondary, and/or tertiary prevention of EBD. The unit of analysis in a given study could be at the individual student, classroom, or school level, depending on the study design, but the common factor across studies is that the components of interest occur within the context of the classroom; that is, they are based in the classroom rather than in clinical or other settings.

Author's Note: Address correspondence to Maureen A. Conroy, Virginia Commonwealth University, Department of Special Education and Disability Policy, Box 843016, Richmond, VA 23284-2020; e-mail: maconroy@vcu.edu.

Current Methods in Classroom-Based Research

The application of experimental research methods in EBD within educational settings has evolved over the past 40 years (for a discussion, see Van Acker, Yell, Bradley, & Drasgow, 2004). Since the early research efforts and work of the founding fathers of the field (e.g., Haring & Phillips, 1962, 1972; Hobbs, 1965; Morse, Cutler, & Fink, 1964; Whelan, 1974), our research designs and measures have expanded considerably as we have addressed more complex instructional questions. For example, Haring and Phillips (1962) conducted one of the earliest quasi-experimental classroom-based studies, comparing the influence of structured and unstructured classrooms on student behaviors. Their findings supported the notion that structured classrooms, with consistent organization and presentation of materials, increased the success and learning of students with EBD. In addition, Hobbs (1965) conducted some of the initial investigations of a comprehensive intervention program. Specifically, he developed and experimentally examined the success of a program for students with EBD called Re-ED, which integrated a number of global and specific approaches, including classroom instructional strategies, family and community interventions, and cognitive and behavioral interventions. Study outcomes indicated behavioral and academic gains for the students served under this model. On the other hand, the early efforts of applied behavior analysts in the field of EBD were more narrowly focused on the systematic manipulation of discriminative stimuli and subsequent events (i.e., consequences) to produce behavioral change in children. For example, Hall, Lund, and Jackson (1968) examined the effects of contingent teacher attention on elementary-age students' rates of on-task behavior. Using a single-subject reversal design, they found that contingent presentation of teacher attention following on-task behavior served as a reinforcer and, thus, increased students' on-task behavior. In addition, A. M. Baer, Rowbury, and Baer (1973) investigated the effects of differential reinforcement of compliance on the task completion behavior of young children who demonstrated chronic problem behaviors. Differential reinforcement, including contingent access to free play, specific materials, or snacks, resulted in increased compliance for one of the participants, whereas the combination of differential reinforcement and timeout for noncompliance increased compliance in the other two participants. Early researchers in applied behavior analysis eventually expanded their work to include the examination of classroom-wide interventions and classroom-setting factors that influence the relation between discriminative stimuli and responses, as well as the value of the reinforcers that maintain behavior (Bijou & Baer, 1961; Wahler & Fox, 1981). Shores and Haubrich (1969), for example, found that study carrels increased the task-related behaviors of individual students with EBD. Other investigators examined the effects of classroom-wide behavioral interventions on students' problem behaviors. For instance, Harris and Sherman (1973) examined the effectiveness of the Good Behavior Game in two elementary-level classrooms. They found that the intervention reduced disruptive talking and out-of-seat behavior but had little effect on academic performance. Throughout the evolution of behavioral research, leaders in the field have discussed the need to (a) integrate descriptive and experimental methods (Bijou, Peterson, & Ault, 1968; Sasso et al., 1992), (b) enhance measurement precision (Brothers & Cammilleri, 2005), and (c) improve research design (Barlow & Hayes, 1979; R. D. Horner & Baer, 1978). At the crux of these discussions has been an emphasis on improving the rigor of research practices while conducting socially valid studies in applied settings (D. M. Baer, Wolf, & Risley, 1968, 1987).
Although EBD researchers have made substantial gains in understanding classroom contexts and their relation to student outcomes, a number of methodological issues continue to limit findings. In the following sections, we discuss these issues as they relate to the group design and single-subject methods currently used in classroom-based research.

Group Design
Some classroom-based studies that are focused on students with or at risk for EBD employ group design (either experimental or quasi-experimental) methods. Group design studies systematically investigate cause-and-effect relationships between interventions and student outcomes through a multistep process: (a) participant selection; (b) random (if the study is an experiment) assignment of participants to conditions (e.g., treatment vs. control, Treatment 1 vs. Treatment 2); (c) exposure to treatment, which includes measurement of treatment fidelity and dosage; and (d) measurement of outcome variables and analysis of effects (Van Acker et al., 2004). Through this process, researchers attempt to describe or predict probable outcomes while eliminating sources of error (Sasso, 2005). Some good examples of classroom-based interventions for students with or at risk for EBD investigated through group design research methods reside in efforts to prevent or ameliorate early problem behaviors (e.g., Daunic, Smith, Brank, & Penfield, 2006; McConaughy, Kay, & Fitzgerald, 2000; Walker et al., 1998). In such prevention studies, participants are most likely served in general, rather than special, education classrooms. For instance, Daunic and colleagues investigated the effects of a general education classroom-based preventive intervention designed to reduce student aggression in anger-provoking situations. Using random assignment at the school level, they found treatment-related changes in student knowledge and aggressive behavior, but they noted the lack of observational data and longitudinal measures as methodological limitations. Classroom-based interventions often entail multiple components, challenging researchers to design studies that can sort out the specific features responsible for desired outcomes. Walker et al.'s (1998) evaluation of a preventive intervention for antisocial behavior in at-risk kindergarten children (First Step to Success) is an example.
Using a cohort design with random assignment of children to treatment or wait-list control conditions, they established a causal link between treatment involving home- and school-based components and student outcomes but did not establish separate and combined component effects. Other recent examples of complex intervention packages include the Fast Track longitudinal studies aimed at the prevention of conduct disorder and aggressive behavior (Conduct Problems Prevention Research Group [CPPRG], 2002, 2004) and studies of the Check & Connect Model designed to increase student engagement and decrease truancy for students at risk (Lehr, Sinclair, & Christensen, 2004). In both cases, the interventions involve multiple components, making it methodologically difficult to determine the specific effects of classroom-based, universal components (e.g., Fast Track's PATHS curriculum) versus other components such as parent training, peer support, or tutoring. Although it is conceptually and theoretically sound from the standpoint of etiology to address multiple risk factors, measuring treatment dosage and fidelity and determining the contribution of individual treatment components become more complex in these comprehensive models.

Balancing internal, external, and social validity. Another challenge associated with multicomponent treatments studied through group design research is that of intervention feasibility and sustainability. Researchers are often faced with the delicate balance of conducting research with sufficient external and social validity (i.e., feasibility, sustainability, and practical significance across real classroom settings) while maintaining adequate internal validity (i.e., scientific rigor; Smith & Daunic, 2004). On one hand, investigators need to ensure consistent treatment implementation to draw meaningful conclusions about the relation of intervention to student outcomes at either the individual or classroom level. On the other, teachers need some leeway to adapt interventions within particular classroom contexts (Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000). Attempts to increase treatment fidelity and internal validity include manualizing interventions to the extent possible (e.g., Lane, Bocian, MacMillan, & Gresham, 2004). Examples of providing for context-specific adaptations include allowing parent-teacher teams to determine meeting frequency in designing and implementing individualized interventions for kindergartners at risk for EBD (see McConaughy et al., 2000) and letting teachers determine the frequency of lesson delivery (one vs. two times per week) in a universally delivered social problem-solving curriculum (see Daunic et al., 2006).
To what extent an intervention is implemented as planned and with adequate integrity is critical to drawing conclusions about its efficacy. Due to the dynamic nature of classrooms, however, assuring adequate scientific control and treatment integrity in these settings can be particularly difficult. Arguably, the most serious threat to internal validity in classroom-based group design studies is the existence of confounding or extraneous variables that are unaccounted for but that exert significant influence on student outcomes in treatment and comparison groups. The bottom line is that researchers cannot control for all the factors that influence relations among independent and dependent variables within classroom settings, but it is incumbent on them to account for (that is, measure and include in statistical models, to the extent possible) potentially critical variables when designing experiments, including the degree of treatment integrity.

Random assignment. The most effective way to address internal validity is through random assignment of participants to experimental conditions, the hallmark of true experimental design. For classroom-based research, this could entail random assignment at the student (see McConaughy et al., 2000) or classroom (see Sutherland & Wehby, 2001) level to treatment or comparison conditions; or, if contamination within the school is an issue, random assignment at the school level may be preferable (see CPPRG, 2004; Daunic et al., 2006). Regardless of the unit chosen, random assignment is the best way to minimize systematic differences between or among groups, prior to treatment, that could mask or confound the effects of intervention. For example, Sutherland and Wehby (2001) used a repeated measures ANOVA design to investigate the effects of a self-evaluation intervention on teachers' instructional behaviors in classrooms serving students with EBD. Teachers were randomly assigned to treatment or control groups, and measures of the dependent variable (or variables) were taken at baseline (pretreatment), following treatment, and during maintenance. The results suggest that teachers in the treatment group delivered praise statements at a higher rate and issued fewer reprimands during the intervention and maintenance phases than in the pretreatment phase. In addition, students of teachers in the treatment group displayed higher rates of correct responses during the treatment and maintenance phases as compared to the pretreatment phase.
In addition to random assignment, researchers can address potential threats to internal validity through thoughtful selection and screening of participants, classrooms, and schools and through matching pairs (or sets) of participants before randomly assigning the members of each matched set to study conditions (e.g., McConaughy et al., 2000). If random assignment is at the classroom or school level, characteristics of the school, such as percentage of minority enrollment, average socioeconomic status, size, and average level of academic achievement, are examples of possible matching variables. Selection of these variables should be based on a thoughtful consideration of characteristics that may affect intervention efficacy or mask intervention effects. Once these variables are selected, statistical tests using the appropriate unit of analysis (i.e., classroom or school) should be used to determine whether there are significant pretreatment differences between intervention and comparison groups. When selecting study participants identified as having or being at risk for developing EBD, whether as individuals (e.g., for a pullout group intervention) or as part of a classroom population, the lack of a standardized definition of EBD in the research literature can be problematic. Students with EBD are a heterogeneous group, and they exhibit a number of diverse learning and behavioral characteristics (Stichter, Conroy, & Kauffman, 2008). Simply characterizing students as emotionally disturbed, as defined by the Individuals With Disabilities Education Improvement Act (2004), or conversely, selecting at-risk students with broadly defined externalizing behaviors, may not provide enough information about participant characteristics (Mooney, Epstein, Reid, & Nelson, 2003). For some investigations, it may be important to select students with more narrowly defined types and amounts of behavioral excesses (e.g., aggression) or deficits (e.g., social withdrawal), when possible, based on systematic evaluations of these behaviors using standardized measures (e.g., the Child Behavior Checklist, Achenbach, 1991; the Systematic Screening for Behavior Disorders, Walker & Severson, 1992; the Social Skills Rating Scale, Gresham & Elliott, 1990). Of course, the precision and selectivity designed to increase internal validity also limit generalizability, that is, external validity. Research designs, therefore, need to reflect the questions of most interest to the researchers.
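The matching-then-randomizing procedure described above can be sketched in code. The following is a minimal illustration only: the schools, the single pretest matching variable, and the simple mean comparison are hypothetical, and a real study would match on several school characteristics and test pretreatment differences formally, with the school as the unit of analysis.

```python
import random
import statistics

# Hypothetical schools with a pretreatment measure used for matching
# (e.g., a schoolwide behavioral screening score); not data from any study.
schools = [
    {"id": f"S{i}", "pretest": p}
    for i, p in enumerate([52.1, 50.3, 61.8, 60.9, 45.0, 44.2, 70.5, 69.7])
]

def matched_pair_assignment(units, key, rng):
    """Sort units on the matching variable, pair adjacent units,
    then randomly assign one member of each pair to treatment."""
    ordered = sorted(units, key=key)
    treatment, control = [], []
    for a, b in zip(ordered[::2], ordered[1::2]):
        if rng.random() < 0.5:
            a, b = b, a
        treatment.append(a)
        control.append(b)
    return treatment, control

rng = random.Random(2008)
treat, ctrl = matched_pair_assignment(schools, key=lambda s: s["pretest"], rng=rng)

# Check for pretreatment differences at the appropriate unit of analysis;
# a large difference here would suggest nonequivalent groups at baseline.
diff = statistics.mean(s["pretest"] for s in treat) - statistics.mean(
    s["pretest"] for s in ctrl
)
print(len(treat), len(ctrl), round(abs(diff), 2))
```

Because members of each matched pair have similar pretest scores, the two groups are close to equivalent before intervention regardless of which coin flips occur, which is the point of matching prior to randomization.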
Similarly, if the population under study consists of intact classrooms and the selection of particular students is not an option, individual student variables (e.g., behavioral characteristics, intelligence, language development, academic achievement) can provide potential covariates in statistical models, as long as they are systematically identified and measured prior to intervention. Although random assignment is the preferred way to control for threats to internal validity, it is often one of the more challenging aspects of large-group designs in classroom settings. If assignment occurs at the school or classroom level, for example, school personnel and/or teachers cannot be informed when recruited whether they will fall into an intervention or comparison group, because telling them before they make a commitment to participate in the study would introduce a potential source of bias. Thus, persuading teachers to agree up front without knowing precisely what they are agreeing to can present a formidable challenge. One approach to this issue is a wait-list control design, in which delayed intervention is provided to the comparison group so that children in all schools or classrooms potentially receive treatment (see Walker et al., 1998). Random assignment at the student level entails the same challenge (i.e., recruitment of teachers or classrooms prior to assignment), with the additional challenge of gaining appropriate consent from the school district and/or school principals to do so. For example, to investigate the effects of a classroom-based intervention (e.g., the Good Behavior Game) for reducing aggressive and disruptive classroom behavior, researchers randomly assigned schools, and classrooms within schools, to experimental conditions, balancing children across all first-grade classrooms. But this was not accomplished without a significant, long-term investment in a collaborative partnership between the research team and the large, urban school district (Kellam, Koretz, & Moscicki, 1999).

Measurement. An additional threat to internal validity may arise from the types of measures used in many group design studies to evaluate change in dependent (outcome) variables. As discussed by Forness (2005), measuring changes in behavioral outcomes is different from measuring other types of skills (e.g., reading, math). In the field of EBD, many of the instruments used to detect behavioral change are indirect measures originally designed for purposes of identification rather than to indicate response to intervention.
For example, the Child Behavior Checklist (Achenbach, 1991) and the Social Skills Rating Scale (Gresham & Elliott, 1990) provide valid assessments of students' social deficits and abilities, but these tests may lack the sensitivity and/or reliability needed to detect relatively small changes, particularly over the short term, and may thus impede accurate determination of intervention effectiveness. Many of these measures rely on informants' responses and are subject to a number of observer biases (Fox & Conroy, 1995). The informant (usually the classroom teacher) who completes the instrument may be part of the research study and thus subject to bias because of his or her investment in student outcomes. Conversely, if the informant chosen is not the classroom teacher, he or she may not always have accurate knowledge of specific behavioral changes. Either source of error may influence pre- or posttest scores and thus measurement reliability and validity. In addition, changes in typical classroom problem behaviors, such as disruption, noncompliance, or off-task behavior, may not be measured precisely with indirect or global measures designed for identifying or screening students who have, or are at risk for, EBD (Conroy & Stichter, 2005; Forness, 2005). This can be particularly true in prevention research, when participant characteristics may not be within a clinical range and/or sleeper effects may take time to emerge (see Muehrer & Koretz, 1992). One presumed cause of the reliance on formalized measures is the paucity of alternative assessments with established psychometric properties. This issue permeates measurement related to classroom-based research for students with EBD. Accurate measurement, along with experimental control, is essential for conducting sound research (see Stanovich, 2004). Although over the years additional measures have been developed and technological advances have improved our ability to track complex sequences of events, many measures currently used by researchers are not sensitive enough to capture constructs of interest and/or the intricate relations among teacher behaviors, student behaviors, and classroom-setting factors. In group design research, the use of (a) multiple informants (preferably some blind to experimental condition and other information that could influence their responses) and (b) multiple measures with strong psychometric properties (i.e., established reliability and validity) can help counter these limitations, but researchers should be aware of these issues as they design and implement classroom-based studies. In sum, a major strength of group design experimental research is the ability to control for rival hypotheses through random assignment of participants to study conditions.
Through random assignment, valid and reliable measurement, and appropriate research designs, threats to internal validity can be minimized. Experiments thus maximize the opportunity to establish causal relations among variables in classroom-based research. Although experimental research is always preferred, some group design classroom-based intervention researchers may legitimately use quasi-experimental, descriptive, or correlational designs because their programmatic investigations are in a developmental stage, such as initial hypothesis setting, exploration of interrelated variables (e.g., Shores et al., 1993; Wehby, Symons, & Shores, 1995), or controlled demonstrations (e.g., Teeple & Skinner, 2004). If fruitful, such studies may contribute important information and lead to experiments and randomized clinical trials. Another method frequently used in classroom-based EBD research to explore potential questions and hypotheses, as well as to demonstrate treatment effects experimentally across individual participants in classroom settings, is single-subject design, to which we turn next.

Single-Subject Design
Single-subject design has a long history in EBD and can be used to investigate the manipulation of environmental variables across students, settings, and contexts as well as complex relationships and contextual factors that are difficult to include systematically in group design studies. Rather than randomly assigning matched sets of participants to conditions, singlesubject designs control for extraneous variables by using within-participant analysesthat is, individual participants serve as their own control. However, some of the issues discussed in the section on group design, such as participant selection, are still relevant. Balancing internal, external, and social validity. Historically, most single-subject researchers have analyzed data using visual analysis to determine change (Scruggs, Mastropieri, & Regan, 2006). This type of analysis, along with designs that demonstrate functional relations, helps eliminate factors that can compromise internal validity, such as statistical regression, maturation, and history (R. H. Horner et al., 2004). Researchers have argued that the finegrain analysis gained by individual interpretation of multiple data points reduces the probability of Type I errors (Kennedy, 2005; Parsonson & Baer, 1992). This method, therefore, may be particularly useful in the initial development of conceptual theories and exploratory hypotheses that can be further tested through systematic replication with additional singlesubject studies or with group design studies. Visual inspection of data across individuals can help in describing the characteristics of those who may be considered nonresponders to treatment (R. H. Horner et al., 2004) and aid with the interpretation of repeated measures across multiple studies that include subjects with similar characteristics (Scruggs et al., 2006). To enhance the external validity of single-subject design, therefore, researchers should document participant characteristics along with other relevant variables,

such as settings and change agents, so that they can be varied in systematic replications. Without documentation of participant characteristics, the question of generalizability is difficult to answer. Because single-subject design has its origin in applied behavioral analysis, many of the classroombased studies target participants who demonstrate specific, socially significant behaviors. For example, single-subject studies typically target classroom behaviors such as disruption or off-task behavior. Less emphasis is placed on selecting or matching participants on marker characteristics such as gender, race/ ethnicity, or age; these variables are simply described for each individual (for a discussion, see Kauffman, Conroy, Gardner, & Oswald, 2006). Although participant inclusion based on socially significant behaviors makes sense, this type of selection can also limit external validity. For example, Sutherland, Wehby, and Copeland (2000) examined the effect of a special education teachers use of praise on the behavior of 9 fifth-grade students in a self-contained classroom for students with EBD. Thus, rather than controlling for variables such as teacher experience, student gender, age, intellectual ability, and/or specific diagnostic criteria, these researchers used teacher nomination and direct observations to identify participants who could benefit from the increase of socially significant target behaviors (i.e., teacher use of praise, student on-task behavior). Results indicated that students on-task behavior increased with the teachers use of specific praise, but the researchers could not conclude that findings would be similar for other teachers or for students with other characteristics (e.g., first-grade students, students identified as having conduct disorders). Researchers also need to consider and report the practical significance of their findings. For example, Sutherland et al. 
(2000) demonstrated increased use of praise and on-task behavior; however, they did not investigate if increases in these behaviors resulted in better long-term outcomes for students (e.g., success in their classrooms or in other classrooms). Inclusion of practical significance, as well as marker variables, contributes important information about external and social validity. Measurement. In single-subject design research, operational definitions of specific target behaviors and interobserver agreement measures can help assure precision and reliability of behavioral change, but the measures used may not capture constructs of interest that more global measures include. For

Downloaded from http://sed.sagepub.com by on February 17, 2009

Conroy et al. / Methodological and Research Issues 215

example, a researcher may describe the relation between specific teacher behaviors, such as opportunities to respond (OTR) and praise, and a student's adaptive or problem behaviors (e.g., time on task, disruption, compliance), but only the variables identified are evaluated. Thus, variables such as social status in the classroom or academic achievement may also be of interest and may covary with the dependent variable (or variables) but are not typically measured in single-subject studies. For instance, if disruption is measured but engagement is not, a researcher would not be able to determine if decreases in disruption produced collateral increases in engagement. Narrowly defined dependent variables thus may limit researchers' ability to make statements about broader outcomes that have practical and social significance, such as students' increases in friendships or social status in the classroom.

To measure dependent variables directly and repeatedly, early classroom-based single-subject studies used low-tech strategies such as paper-and-pencil recordings and stopwatches. Along with assessments of reliability, these strategies accurately measured dependent variables and surrounding classroom context variables. With the advance of computerized technology, however, the sophistication of classroom-based measurement has increased exponentially. Most classroom-based researchers now collect real-time observational data via computerized systems. (For a review of data collection software, see Kahng & Iwata, 2000.) Several of these systems provide measures of sequential relations between teacher and student behaviors in addition to direct measures of outcomes and classroom context variables (e.g., Tapp, 2006; Tapp & Walden, 2000). Researchers can thus examine the sequence of (a) specific teacher behaviors (e.g., OTR) and (b) target student behaviors (e.g., compliance and engagement) in the presence of (c) different types of classroom-setting factors (e.g., academic tasks, large-group activities) and evaluate the outcomes of those sequences (e.g., escaping tasks or activities). With this type of sophisticated data collection, the relations among a larger number of variables can be captured with precision and accuracy.

Stichter and colleagues (Stichter et al., 2004; Stichter, Lewis, Richter, & Johnson, in press), for example, have examined teacher-student interactions within various classroom contexts using a hybrid of paper-and-pencil recording and computer-based assessment. By combining information obtained through a teacher interview, a descriptive checklist, and a computer-generated direct observation interval recording system, they were able to assess more than 34 simultaneously occurring events (student, teacher, and classroom factors), affording preliminary characterizations of relations between teacher behavior and student outcomes across various classroom-based conditions (i.e., large-group vs. independent seat work). The information gained has implications for changing interaction patterns in classrooms for students with EBD.

Measurement is key to experimental control in single-subject design because control is achieved through the following components: (a) the participant (within classroom or school) as the unit of analysis; (b) direct, repeated, reliable measurement; and (c) the demonstration of predictable changes in the target behavior measure (or measures) as a result of the application of the independent variable (Kazdin, 1982; Kennedy, 2005). Baseline measures of the target behavior (or behaviors) are obtained, and experimental control is gained through (a) implementing and then withdrawing or reversing the intervention following a baseline phase (i.e., withdrawal or reversal design) or (b) staggering the beginning of the intervention following the baseline phase across different participants, responses, or settings (i.e., multiple baseline design). With visual analysis, single-subject researchers evaluate the relation between dependent measures and intervention by examining the magnitude of difference between the baseline and intervention levels of the dependent variable, the trend in the data path, and the stability of the data. As stated by R. H. Horner and colleagues (2004), "Experimental control is demonstrated when the design documents three demonstrations of the experimental effect at three different points in time with a single participant (within-subject replication), or across different participants (intersubject replication)" (p. 168).
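The visual-analysis criteria described above (change in level, trend in the data path, and stability) can also be summarized numerically. The following sketch is a minimal illustration with hypothetical session data (not drawn from any study cited here): it computes the change in mean level between phases, the within-phase trend as an ordinary least-squares slope, and the percentage of nonoverlapping data (PND), a common supplementary index in single-subject research.

```python
# Minimal sketch: quantitative summaries that supplement visual analysis
# of a single-subject withdrawal (A-B) comparison. Data values are
# hypothetical counts of disruptions per session.

def mean_level(phase):
    """Average of the observations in one phase."""
    return sum(phase) / len(phase)

def slope(phase):
    """Ordinary least-squares slope across sessions (trend in the data path)."""
    n = len(phase)
    x_bar = (n - 1) / 2
    y_bar = mean_level(phase)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(phase))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

def pnd(baseline, intervention, improvement="decrease"):
    """Percentage of intervention points that do not overlap the baseline range."""
    if improvement == "decrease":
        bound = min(baseline)
        nonoverlap = sum(1 for y in intervention if y < bound)
    else:
        bound = max(baseline)
        nonoverlap = sum(1 for y in intervention if y > bound)
    return 100 * nonoverlap / len(intervention)

baseline = [9, 8, 10, 9, 11]      # disruptions per session (A phase)
intervention = [5, 4, 3, 3, 2]    # disruptions per session (B phase)

level_change = mean_level(baseline) - mean_level(intervention)
print(f"Level change: {level_change:.1f}")
print(f"Baseline trend: {slope(baseline):.2f}, intervention trend: {slope(intervention):.2f}")
print(f"PND: {pnd(baseline, intervention):.0f}%")
```

Indices such as PND support, rather than replace, the visual inspection of level, trend, and stability that single-subject researchers rely on.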
These designs or variations have been used to study a number of classroom variables. For example, Kern, Bambara, and Fogt (2002) examined the effects of a classwide curricular modification that included embedding choices and preferences into activities on the engagement and problem behaviors of students with EBD in a self-contained classroom. The results of their reversal design indicated that when the classwide intervention was in place, there were increases in students' engagement and decreases in their destructive behaviors. In another example, Lohrmann and Talerico (2004) implemented a group contingency classwide intervention for a
group of students in elementary school. Using multiple baseline designs across subject areas, these researchers found decreases in disruptive behaviors but little difference in task completion and out-of-seat behaviors.

Although the use of experiments that indicate functional relations is ultimately the preferred single-subject method, much early-stage research is designed to explore potential relations among variables, as mentioned previously. Thus, researchers often use descriptive, as opposed to experimental, single-subject methods. For example, Gunter, Jack, Shores, Carrell, and Flowers (1993) used a lag sequential analysis of observed behaviors to determine the probable occurrence of teacher-student behavioral sequences. The limitations of descriptive studies that use single-subject designs are similar to those in descriptive group design studies. That is, all descriptive studies provide information about correlations among variables and not about causal relationships; thus, researchers need to exercise caution in interpreting results.

In summary, single-subject design methods allow experimenters to determine the effectiveness of a particular intervention for an individual through repeated, direct, and precise measurement. Although both group and single-subject studies have multiple strengths, particularly if used for appropriate purposes, researchers may want to consider strategies that may increase the rigor of classroom-based research. We focus on these in the following section.
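A lag sequential analysis of the kind Gunter and colleagues employed rests on transitional probabilities: given an antecedent event, how often does each behavior immediately follow it in the coded stream? The sketch below is a simplified, hypothetical illustration; the event codes and observation data are invented for this example and are not the authors' actual coding system.

```python
# Simplified lag-1 sequential analysis of a coded event stream.
# Hypothetical codes: OTR = teacher opportunity to respond, PRA = teacher
# praise, COMP = student compliance, DIS = student disruption.
from collections import Counter

def lag1_probabilities(stream, antecedent):
    """P(next event | antecedent) estimated from observed lag-1 pairs."""
    followers = Counter(
        nxt for cur, nxt in zip(stream, stream[1:]) if cur == antecedent
    )
    total = sum(followers.values())
    if total == 0:                       # antecedent never observed
        return {}
    return {code: count / total for code, count in followers.items()}

# Hypothetical observation session, in temporal order
stream = ["OTR", "COMP", "PRA", "OTR", "DIS",
          "OTR", "COMP", "PRA", "OTR", "COMP"]

print(lag1_probabilities(stream, "OTR"))
```

In this invented stream, compliance follows an OTR three times out of four, the kind of conditional pattern a descriptive (correlational, not causal) sequential analysis is designed to surface.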

Increasing Rigor and Future Research Directions

Although group and single-subject designs each have distinct strengths and limitations as strategies for answering classroom-based research questions, there are general, cross-cutting issues common to both, including (a) research questions, (b) replication and measurement, (c) implementation and measurement of independent variables (i.e., treatment integrity), and (d) social validity. We have mentioned some of these issues in our prior discussion; however, our focus in this section is on future research directions and strategies that will increase rigor in classroom-based research in EBD. Although classroom-based intervention research in EBD has progressed considerably, there is still a substantial need for methods that advance the field. Typically, the focus has been on manipulating treatment variables to produce change in student behaviors, but as applied researchers know, these behaviors are also influenced by teacher attitudes and behaviors and by classroom-setting factors. To improve our services for students with EBD, therefore, we need to move beyond this primary focus so that our research helps us understand the impact of the classroom environment. In continuing these efforts, we propose that researchers consider the following recommendations.

Research Questions

Odom (2004) and others (Levin, O'Donnell, & Kratochwill, 2003) have discussed the need for research questions that are appropriate for different stages of knowledge in the field (e.g., initial hypothesis development and exploration of variables, controlled experiments and demonstrations, randomized clinical trials, and identification of variables adopted for practice). Because much of current classroom-based research in EBD is focused on identifying variables that influence classroom-based behavioral interactions, future questions should focus on determining causal relations through controlled experiments and demonstrations. For example, teacher behaviors such as praise and OTR have been shown to influence student behavior, but further studies are needed to answer questions about the precise dosage and intensity that are most effective. In addition, questions about factors that help teachers implement and maintain such strategies in their classrooms would contribute important information.

Measurement and Replication

Given the complexity and intricate relations among variables in classroom settings, there is a need for researchers to develop more accurate measures of student progress. The research design and measures chosen should depend on the questions asked; that is, they should fit the developmental stage of research in a particular area. The use of multiple measures (direct and indirect) to evaluate change in student-teacher interactions and associated outcomes, while capturing the influence of classroom-setting factors on these relations, is warranted. Such measurement systems can combine high and low levels of technology, such as the model developed by Greenwood and colleagues (see Greenwood, Carta, & Dawson, 2000), composed of behavioral observation checklists of classroom-setting factors and complex computerized coding systems. As noted previously, researchers should also take measures across multiple settings and informants. We support repeated measurement and study replication to strengthen the reliability and validity of findings,
particularly when examining dependent variables that are highly influenced by contextual ones. Direct observations should supplement more global outcome measures (e.g., standardized diagnostic instruments) as researchers evaluate change in behaviors of interest. The measures used in classroom-based group designs and single-subject studies both have limitations. In single-subject design, lines of research are expanding to include variables within and outside the typical three-term contingency (classroom-setting factors, teacher behaviors, student behaviors), as well as marker variables (e.g., gender), but only to a limited extent. Similarly, group designs include measures from multiple informants, some blind to experimental condition; however, measurement sensitivity, particularly to short-term change, is still an issue. To facilitate our understanding of the influence of classroom-based factors on student behaviors, multiple measures in both types of designs used in tandem could potentially provide a more complete picture. Combining the use of (a) multiple standardized measures and multiple informants (e.g., teachers, peers); (b) repeated, direct measurement of specific dependent variables of interest; and (c) informal measures that provide additional information may thus serve to strengthen our understanding of the influence of classroom factors on student behavior. As discussed by Sidman (1960) and others (e.g., see Herschbach, 1996), scientific findings are validated not only through precise and accurate measurement but through replication as well. 
Direct replication occurs when the investigation is repeated under the same conditions with new participants or, in the case of single-subject design, with the same participant on repeated occasions, whereas systematic replication demonstrates that experimental outcomes are observed under different conditions from those of the original experiment (e.g., variations in participant characteristics, settings, or methods; Sidman, 1960). Whether direct or systematic, replication helps ensure accuracy in describing, predicting, or controlling phenomena of interest and provides supportive evidence for definitive statements about outcomes. It is essential for establishing the reliability and generality of findings and for increasing internal and external validity (Sidman, 1960). In single-subject research, replication of findings is synonymous with experimental control. For example, withdrawal or reversal designs provide an opportunity to replicate initial differences between baseline and intervention phases (i.e., direct replication) and further validate the intervention effects. Similarly,

systematic replication of initial findings may occur across variations in participants, settings, and/or methods. Due to naturally occurring variations in applied settings, systematic replication is often more common than direct replication in classroom-based research. One advantage is that when research findings are replicated across variations in participants, settings, or methodological approaches, the generalizability of findings is increased, thus increasing our understanding and the applicability of phenomena of interest (Kennedy, 2005; Sidman, 1960).

Replication in group design studies can also substantiate findings. If researchers describe procedures, treatment, participants, settings, and measures in sufficient detail, other investigators can replicate their work in additional settings or with varied populations. Thus, systematic replication is essential to internal validity (supporting the claim that change in the dependent variable is highly likely to be a result of manipulation of the independent variable or variables) and external validity (providing evidence that treatment results in changes in outcomes across different participants or settings).

Even though strides have been made in classroom-based research, it is still in its early stages, and replication studies by a larger number of investigators will add significantly to knowledge of effective classroom practice. Whether a replication or an exploratory study, the need remains to match design and method to the research questions and to use measures and analyses that can answer those questions most effectively. Recently, professionals have emphasized that the appropriate use of specific designs derives from understanding the process of identifying evidence-based practices, for whom they are effective, under what conditions, and for how long (Odom, 2004); that is, sound empirical research.
A single-subject design, for example, may be optimal for providing information about intervention components that work for a student with particular characteristics and not for students with different characteristics. Such a study might entail less cost or risk than a larger scale experiment when an intervention is still in the early stages of development. Moreover, increased use of statistical analyses in single-subject research could uncover patterns of outcomes that help determine the conditions under which group designs may be warranted and could then be undertaken for particular populations (Scruggs et al., 2006). This progression of research is not new but has gained increased attention, as reflected in the Institute
of Education Sciences (IES) funding structure represented by Goals 2 through 4 (IES, n.d.). The directive behind these goals is to scale up research efforts through direct and systematic replication of validated effective classroom-based practices. Although researchers may be cognizant of this need, the process is not always actualized. This may be a result of differences in professional definitions of scaling up and the resulting impact on how researchers focus their hypotheses (Coburn, 2003), or it could simply be a function of limited resources. The IES model supports developmental work with both group and single-subject studies, with the expectation that subsequent work will be devoted to replication and scaling up. These levels, or goals, are outlined hierarchically to address (a) the need to define variables precisely for initial intervention development and (b) subsequent replication of findings on a larger scale to identify characteristics of populations that may influence the effectiveness of the intervention as well as additional factors in applied settings that affect student outcomes (e.g., teacher and classroom variables). In other words, this model entails ongoing, systematic replication that includes varying participant and/or procedural characteristics, as opposed to direct replication that evaluates the same intervention with additional participants. Replication becomes increasingly multidimensional and therefore requires dynamic skill sets and processes (Coburn, 2003). To effectively meet these IES initiatives and extend our literature base, we will need to develop single-subject design measures that can be translated for use in group design studies, multifaceted assessment protocols, and collaborative efforts that continue to enhance how researchers work with one another and with schools to go to scale.
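One concrete way to translate single-subject measures for use in group design work, in the spirit of the statistical analyses cited above (Scruggs et al., 2006), is to compute a standardized mean difference between phases for each participant and then aggregate across participants. The sketch below uses hypothetical phase data and a simple baseline-SD standardization; this is one of several competing single-subject effect size metrics, offered as an illustration rather than a prescribed method.

```python
# Sketch: per-participant standardized mean differences from single-subject
# phase data, aggregated across participants. All data are hypothetical.
import statistics

def phase_effect_size(baseline, intervention):
    """(Baseline mean - intervention mean) / baseline sample SD.
    Positive values indicate a reduction in the measured problem behavior."""
    sd = statistics.stdev(baseline)
    return (statistics.mean(baseline) - statistics.mean(intervention)) / sd

# Hypothetical problem behaviors per session for three participants:
# (baseline phase, intervention phase)
participants = {
    "P1": ([9, 8, 10, 9], [4, 3, 3, 2]),
    "P2": ([6, 7, 6, 8], [5, 4, 4, 3]),
    "P3": ([12, 11, 13, 12], [6, 5, 5, 4]),
}

effects = {pid: phase_effect_size(a, b) for pid, (a, b) in participants.items()}
for pid, es in effects.items():
    print(f"{pid}: d = {es:.2f}")
print(f"Mean effect across participants: {statistics.mean(effects.values()):.2f}")
```

Because phase means are based on few, autocorrelated observations, such values run much larger than conventional group-design effect sizes and should be interpreted within single-subject benchmarks, not Cohen's.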

Treatment Integrity
Researchers have long discussed the importance of collecting treatment integrity data (see Gresham, Gansle, & Noell, 1993). Especially if an intervention is implemented over time with repeated measures of dependent variables, monitoring treatment integrity is critical (R. H. Horner et al., 2004). Although essential to conducting high-quality research, measuring the fidelity with which independent variables are implemented occurs inconsistently in classroom-based research. For example, when training teachers to implement new instructional practices, researchers may assess teachers' knowledge of newly acquired skills but not measure the implementation of these skills throughout the course of

intervention. Teachers who initially commit to implementing a classroom-based treatment over a period of time actually may not complete the intervention (e.g., all lessons of a curriculum) or may deviate from practices emphasized in the training, for a variety of reasons (Witt, Noell, LaFleur, & Mortenson, 1997). Good researchers consider these possibilities in designing their studies and typically incorporate an intent-to-treat model in which treatment group participants are included in study outcomes regardless of how much exposure (dosage) they receive. Degree of exposure is carefully measured, however, along with other variables that might moderate or confound treatment effects (e.g., CPPRG, 1999, 2002).

Notably, in some classroom-based research, the independent and dependent variables may overlap, especially when researchers analyze a sequence of teacher and student behaviors. The dependent variable might be teacher use of praise statements following training, for example, but once the teacher learns to use praise in response to the target student's appropriate behavior, these statements take on the role of an intervention (independent variable), and the dependent variable becomes the student's behavior. Measures of treatment integrity are thus critical in both phases.

Whether or not the study design includes overlap between independent and dependent variables, treatment integrity is currently undermeasured or underreported and should be a high priority and integral component of classroom-based research. As in measuring dependent variables of interest, researchers may want to consider using multiple measures. In comparison to indirect measures such as teacher reports of implementation procedures, direct observation provides a more accurate measure, even when treatments are manualized. Although preferable, direct observation (including reports of interobserver agreement) is often not used because of practical considerations.
Treatment fidelity measurement needs as much emphasis as do other aspects of research rigor. In sum, to describe intervention effects accurately at any stage of the scaling up process, all aspects of the independent variable, including teacher training and treatment fidelity and implementation, require accurate measurement. Researchers should use the same stringent criteria to evaluate implementation of the independent variables, including reliability measures, as they use to measure the dependent variables. It is important to measure initial skill acquisition following training, ongoing implementation, and ultimately whether the intervention is maintained.
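The standard just described, measuring implementation of the independent variable with the same rigor as the dependent variables, is commonly operationalized as the percentage of protocol steps implemented, with point-by-point interobserver agreement computed over the same checklist when a second observer codes the session. The checklist scoring below is a hypothetical illustration.

```python
# Sketch: treatment integrity (percentage of protocol steps implemented)
# and point-by-point interobserver agreement over a fidelity checklist.
# The session scoring is hypothetical.

def integrity(observed_steps):
    """Percentage of protocol steps scored as implemented (True)."""
    return 100 * sum(observed_steps) / len(observed_steps)

def point_by_point_ioa(observer_a, observer_b):
    """Agreements / (agreements + disagreements) * 100, item by item."""
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * agreements / len(observer_a)

# Item-level scoring of one session by two independent observers
# (True = step implemented as trained)
obs_a = [True, True, False, True, True, True, False, True]
obs_b = [True, True, False, True, False, True, False, True]

print(f"Integrity (observer A): {integrity(obs_a):.0f}%")
print(f"IOA: {point_by_point_ioa(obs_a, obs_b):.1f}%")
```

Reporting both values, for training sessions and throughout the intervention, addresses initial skill acquisition, ongoing implementation, and maintenance in the same terms used for the dependent measures.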


We also suggest that measuring independent variables should include component analyses. Often, the inability of a change agent (e.g., a teacher) to implement an intervention with optimal fidelity is identified as a research-to-practice gap, attributed to inadequate training and/or a poor goodness of fit. Yet, it is possible that an entire intervention package is not needed to achieve a desired change in outcome (Stichter, Hudson, & Sasso, 2005). Instead, pivotal variables in specific combinations may be most effective. Thus, component analyses can shed light on the salient aspects of an intervention, its fit with classroom capacity, and the relationships that are crucial to long-term implementation and maintenance.

Social Validity
The expansion of social validity assessment is another research need. Ultimately, the goal of classroom-based research is to identify components of behavioral interventions that improve interactions among teachers and students and associated student outcomes in particular educational contexts. Therefore, the applicability, feasibility, and usefulness of findings (i.e., social validity) are important. It would be useful for researchers to approach social validity broadly, using different types of assessment, including direct validation techniques that compare teacher and student behaviors pre- and postintervention in both intervention and control conditions. They should also include teachers and students in the validation process (Schwartz, 2005). Finally, an often underreported but critical component of social validity is the maintenance of behavior change (Kennedy, 2002). Given the complexity of applied settings, this is best demonstrated through the use of multiple measures and methods in studies replicated across a variety of classroom settings.

By virtue of the phenomena of interest, questions related to social validity are particularly applicable to classroom-based intervention research. The overarching goal of classroom-based research is to understand classroom-setting factors and instructional variables that influence student outcomes and how those factors affect the relations between teacher and student behaviors. Yet, social validity is not often measured or reported in classroom-based studies. In fact, the overall measurement of social validity in single-subject design research is actually decreasing (see Schwartz, 2005). Furthermore, the measures we use to evaluate social validity often lack precision. For example, common techniques for measuring social validity consist of asking teachers to respond to a Likert-type

scale that includes questions about variables such as the intrusiveness, usefulness, and/or effectiveness of the intervention (DePry & Sugai, 2002). Unfortunately, this type of indirect measure is subject to informant bias. For example, teachers may be less likely to respond accurately if the researcher administers the scale or can identify the respondent. Interestingly, measures of social validity often indicate that teachers report finding classroom-based interventions effective in changing student behavior but that teachers are not likely to use the interventions after the research study is terminated (e.g., Sutherland, Adler, & Gunter, 2003). Whether interventions are sustained is a critical aspect of social validity and important information in determining classroom-based factors that can improve student outcomes. Through accurate feedback from consumers (i.e., teachers and students) about intervention feasibility and appeal, researchers can refine their interventions to be both effective and sustainable, potentially producing more durable findings.
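A minimal step toward more precise social validity measurement is to summarize Likert-type ratings item by item across anonymous respondents, so that low feasibility or sustained-use ratings are not masked by high effectiveness ratings in a single overall score. The items and teacher ratings below are hypothetical.

```python
# Sketch: item-level summary of anonymous Likert-type social validity ratings
# (1 = strongly disagree ... 5 = strongly agree). All data are hypothetical.
import statistics

ratings = {
    "effective in changing student behavior": [5, 4, 5, 4, 5],
    "feasible within my daily routine": [2, 3, 2, 2, 3],
    "I plan to keep using the intervention": [2, 2, 3, 2, 2],
}

for item, scores in ratings.items():
    print(f"{item}: M = {statistics.mean(scores):.1f}, "
          f"SD = {statistics.stdev(scores):.2f}")
```

In this invented example, a high effectiveness rating coexists with low feasibility and sustained-use ratings, exactly the pattern the text describes, which an aggregate score would conceal.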

Concluding Thoughts
Research on classroom-based practices has a long history. Clearly, over the years, we have made considerable progress in understanding and predicting the relationships among classroom environments, teaching practices, and student learning. Overall, our methods in both group and single-subject design research have expanded. However, under current federal policies, the field of EBD is being challenged to conduct more rigorous research in applied settings. The purpose of this article was to discuss current research methods on classroom-based practices in the field of EBD and to provide suggestions for future research efforts. Our intent was to provide an opportunity to explore the issues researchers in the field are encountering as they examine these complex environments. To this end, we hope this article begins the discussion and provides direction for future research that is not only rigorous but also beneficial to teachers and students.

References

Achenbach, T. M. (1991). Manual for the Child Behavior Checklist/4-18 and 1991 profile. Burlington: University of Vermont, Department of Psychiatry.
Baer, A. M., Rowbury, T., & Baer, D. M. (1973). The development of instructional control over classroom activities of
deviant preschool children. Journal of Applied Behavior Analysis, 6, 289–298.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 20, 313–327.
Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12, 199–210.
Bijou, S. W., & Baer, D. M. (1961). Child development: Vol. 1. A systematic and empirical theory. New York: Appleton-Century-Crofts.
Bijou, S. W., Peterson, R. F., & Ault, M. H. (1968). A method to integrate descriptive and experimental field studies at the level of data and empirical concepts. Journal of Applied Behavior Analysis, 1, 175–191.
Brophy, J. E., & Good, T. L. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328–375). New York: Macmillan.
Brothers, K. J., & Cammilleri, A. P. (2005). The Baer necessities: Observation, measurement, and analysis. In K. S. Budd & T. Stokes (Eds.), A small matter of proof: The legacy of Donald M. Baer. Reno, NV: Context Press.
Coburn, C. E. (2003). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32(6), 3–12.
Conduct Problems Prevention Research Group. (1999). Initial impact of the Fast Track prevention trial for conduct problems: II. Classroom effects. Journal of Consulting and Clinical Psychology, 67, 648–657.
Conduct Problems Prevention Research Group. (2002). Implementation of the Fast Track program: An example of a large-scale prevention science efficacy trial. Journal of Abnormal Child Psychology, 30, 1–17.
Conduct Problems Prevention Research Group. (2004).
The effects of the Fast Track program on serious problem outcomes at the end of elementary school. Journal of Clinical Child and Adolescent Psychology, 33, 650–661.
Conroy, M. A., & Stichter, J. P. (2005). Seeing the forest and the trees: A more rigorous approach to measurement and validity in behavioral disorders intervention research. In T. E. Scruggs & M. A. Mastropieri (Eds.), Applications of research methodology: Advances in learning and behavioral disabilities (Vol. 19, pp. 131–156). Oxford, UK: Elsevier.
Daunic, A. P., Smith, S. W., Brank, E. M., & Penfield, R. D. (2006). Classroom-based cognitive-behavioral intervention to prevent aggression: Efficacy and social validity. Journal of School Psychology, 44, 123–139.
DePry, R., & Sugai, G. (2002). The effect of active supervision and pre-correction on minor behavioral incidents in a sixth grade general education classroom. Journal of Behavioral Education, 11, 255–267.
Forness, S. R. (2005). The pursuit of evidence-based practice in special education for children with emotional or behavioral disorders. Behavioral Disorders, 30, 309–328.
Fox, J. J., & Conroy, M. A. (1995). Setting events and behavior problems: An interbehavioral field analysis for research and practice. Journal of Emotional and Behavioral Disorders, 3, 130–140.

Greenwood, C. R., Carta, J. J., & Dawson, H. (2000). Ecobehavioral assessment systems software. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 61–70). Baltimore: Brookes.
Gresham, F. M., & Elliott, S. N. (1990). Social Skills Rating System manual. Circle Pines, MN: American Guidance Service.
Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257–263.
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research and Practice, 15, 198–205.
Gunter, P. L., Hummel, J. H., & Conroy, M. A. (1998). Increasing correct academic responding: An effective intervention strategy to decrease behavior problems. Effective School Practices, 17, 55–62.
Gunter, P. L., Jack, S. L., Shores, R. E., Carrell, D. E., & Flowers, J. (1993). Lag sequential analysis as a tool for functional analysis of student disruptive behavior in classrooms. Journal of Emotional and Behavioral Disorders, 1, 138–149.
Hall, R. V., Lund, D., & Jackson, D. (1968). Effects of teacher attention on study behavior. Journal of Applied Behavior Analysis, 1, 1–12.
Haring, N. G., & Phillips, E. L. (1962). Educating emotionally disturbed children. New York: McGraw-Hill.
Haring, N. G., & Phillips, E. L. (1972). Analysis and modification of classroom behavior. Upper Saddle River, NJ: Prentice Hall.
Harris, V. W., & Sherman, J. A. (1973). Use and analysis of the Good Behavior Game to reduce disruptive classroom behavior. Journal of Applied Behavior Analysis, 6, 405–417.
Herschbach, D. (1996). Imaginary gardens with real toads. In P. Gross, M. Levitt, & M. W. Lewis (Eds.), The flight from science and reason. Baltimore: Johns Hopkins University Press.
Hobbs, N. (1965).
How the Re-ED plan developed. In N. J. Long, W. C. Morse, & R. G. Newman (Eds.), Conflict in the classroom (pp. 286–294). Belmont, CA: Wadsworth.
Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: A variation of the multiple baseline. Journal of Applied Behavior Analysis, 11, 189–196.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2004). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–180.
Individuals With Disabilities Education Improvement Act of 2004, 20 U.S.C. § 1400 et seq.
Institute of Education Sciences. (n.d.). Funding opportunities. Retrieved October 31, 2007, from http://ies.ed.gov/ncser/funding/
Kahng, S., & Iwata, B. A. (2000). Computer systems for collecting real-time observational data. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 35–42). Baltimore: Brookes.
Kauffman, J. M., Conroy, M. A., Gardner, R., & Oswald, D. (2006, May). New racism and related issues: What we know and what we should find out. Paper presented at the Association for Behavior Analysis Conference, Atlanta, GA.
Kazdin, A. (1982). Single-case research designs. New York: Oxford University Press.
Kellam, S. G., Koretz, D., & Moscicki, E. K. (1999). Core elements of developmental epidemiologically based prevention research. American Journal of Community Psychology, 27, 463–482.
Kennedy, C. (2002). The maintenance of behavior change as an indicator of social validity. Behavior Modification, 26, 594–604.
Kennedy, C. (2005). Single-case designs for experimental research. Boston: Allyn & Bacon.
Kern, L., Bambara, L., & Fogt, J. (2002). Class-wide curricular modification to improve the behavior of students with emotional or behavioral disorders. Behavioral Disorders, 27, 317–326.
Lane, K. L., Bocian, K. M., MacMillan, D. L., & Gresham, F. M. (2004). Treatment integrity: An essential but often forgotten component of school-based interventions. Preventing School Failure, 48, 36–43.
Lehr, C. A., Sinclair, M. F., & Christenson, S. L. (2004). Addressing student engagement and truancy prevention during the elementary school years: A replication study of the Check & Connect model. Journal of Education for Students Placed at Risk, 9(3), 279–301.
Levin, J. R., O'Donnell, A. M., & Kratochwill, T. R. (2003). Educational/psychological intervention research. In W. Reynolds & G. Miller (Eds.), Handbook of psychology: Vol. 7. Educational psychology (pp. 557–581). Hoboken, NJ: John Wiley & Sons.
Lohrmann, S., & Talerico, J. (2004). Anchor the Boat: A classwide intervention to reduce problem behavior. Journal of Positive Behavior Interventions, 6, 113–120.
McConaughy, S. H., Kay, P. J., & Fitzgerald, M. (2000). How long is long enough? Outcomes for a school-based prevention program. Exceptional Children, 67, 21–34.
Mooney, P., Epstein, M., Reid, R., & Nelson, J. R. (2003). Status of and trends in academic intervention research for students with emotional disturbance. Remedial and Special Education, 24, 273–287.
Morse, W. C., Cutler, R. L., & Fink, A. H. (1964).
Public school classes for the emotionally handicapped: A research analysis. Washington, DC: Council for Exceptional Children. Muehrer, P., & Koretz, D. (1992). Issues in preventive intervention research. Current Directions in Psychological Science, 1(3), 109112. Odom, S. L. (2004). The RCT gold standard: Beware of the Midas touch. FOCUS on research: Newsletter of the Division for Research, 17(1), 12. Parsonson, B. S., & Baer, D. M. (1992). The visual analysis of data, and current research into the stimuli controlling it. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case research designs and analysis: New directions for psychology and education (pp. 1540). Hillsdale, NJ: Lawrence Erlbaum. Sasso, G. (2005, April). The evidence base in emotional and behavioral disorders. Presentation at the University of Florida Research Symposium, Gainesville, FL. Sasso, G. M., Reimers, T. M., Cooper, L. J., Wacker, D., Berg, W., Steege, M., et al. (1992). Use of descriptive and experimental analysis to identify the functional properties of aberrant behavior in school settings. Journal of Applied Behavior Analysis, 25, 809821. Schwartz, I. S. (2005). Social validity assessments: Voting on science or acknowledging the roots of behavior analysis? In

K. S. Budd & T. Stokes (Eds.), A small matter of proof: The legacy of Donald M. Baer. Reno, NV: Context Press. Scruggs, T. E., Mastropieri, M. A., & Regan, K. S. (2006). Statistical analysis for single subject research designs. In T. E. Scruggs & M. A. Mastropieri (Eds.), Applications of research methodology: Advances in learning and behavioral disabilities (Vol. 19, pp. 3354). Oxford, UK: Elsevier. Shores, R. E., & Haubrich, P. A. (1969). Effect of cubicles in educating emotionally disturbed children. Exceptional Children, 36, 2124. Shores, R. E., Jack, S. L., Gunter, P. L., Ellis, D. N., Debriere, T. J., & Wehby, J. H. (1993). Classroom interactions of children with behavior disorders. Journal of Emotional and Behavioral Disorders, 1, 2740. Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books. Smith, S. W., & Daunic, A. P. (2004). Research on preventing behavior problems using a cognitive-behavioral intervention: Preliminary findings, challenges and future directions. Behavioral Disorders, 30, 7276. Stanovich, K. E. (2004). How to think straight about psychology. New York: Pearson. Stichter, J., Conroy, M. A., & Kauffman, J. (2008). Characteristics of students with high incidence disabilities: A cross-categorical approach. Columbus, OH: Merrill. Stichter, J. P., Hudson, S., & Sasso, G. M. (2005). The use of structural analysis to identify setting events in applied settings for students with emotional/behavioral disorders. Behavioral Disorders, 30, 401418. Stichter, J. P., Lewis, T. J., Johnson, N., & Trussell, R. (2004). Toward a structural assessment: Analyzing the merits of an assessment tool for a student with E/BD. Assessment for Effective Intervention, 30(1), 2540. Stichter, J. P., Lewis, T. L., Richter, M., & Johnson, N. (in press). Assessing antecedent variables: The effects of instructional variables on student outcomes through in-service and peer coaching professional development models. 
Education and Treatment of Children. Sutherland, K. S., Adler, N., & Gunter, P. L. (2003). The effect of varying rates of opportunities to respond to academic requests on the classroom behavior of students with EBD. Journal of Emotional and Behavioral Disorders, 11, 239248. Sutherland, K. S., & Wehby, J. H. (2001). The effect of selfevaluation on teaching behavior in classrooms for students with emotional and behavioral disorders. Journal of Special Education, 35, 161171. Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effects of varying rates of behavior-specific praise on the ontask behavior of students with E/BD. Journal of Emotional and Behavioral Disorders, 8, 29. Tapp, J. (2006). Multi-option Observation System for Experimental Studies (MOOSES). Retrieved July 15, 2006, from http://kc.vanderbilt.edu/mooses/mooses.html Tapp, J., & Walden, T. A. (2000). PROCODER: A system for collection and analysis of observational data from videotape. In T. Thompson, D. Felce, & F. J. Symons (Eds.), Behavioral observation: Technology and applications in developmental disabilities (pp. 6170). Baltimore: Brookes. Teeple, D. F., & Skinner, C. H. (2004). Enhancing grammar assignment perceptions by increasing assignment demands:

Downloaded from http://sed.sagepub.com by on February 17, 2009

222 The Journal of Special Education

Extending additive interspersal research to students with emotional disorders. Journal of Emotional and Behavioral Disorders, 12(2), 120127. Van Acker, R., Yell, M. L, Bradley, R., & Drasgow, E. (2004). Experimental research designs in the study of children and youth with emotional and behavioral disorders. In R. B. Rutherford, M. M. Quinn, & S. R. Mathur (Eds.), Handbook of research in emotional and behavioral disorders (pp. 546566). NewYork: Guilford. Wahler, R. G., & Fox, W. H. (1981). Setting events and applied behavior analysis: Toward a conceptual and methodological expansion. Journal of Applied Behavior Analysis, 14, 327338. Walker, H. M., Kavanagh, K., Stiller, B., Golly, A., Severson, H. H., & Feil, E. G. (1998). First step to success: An early intervention approach for preventing school antisocial behavior. Journal of Emotional and Behavioral Disorders, 6(2), 6680. Walker, H. M., & Severson, H. H. (1990). Systematic screening for behavior disorders (SSBD): Users guide and administration manual. Longmont, CO: Sopris West. Wehby, J. H., Symons, F. J., & Shores, R. E. (1995). A descriptive analysis of aggressive behavior in classrooms for children with emotional and behavioral disorders. Behavioral Disorders, 20, 87105.
Whelan, R. J. (1974). Richard J. Whelan. In J. M. Kauffman & C. D. Lewis (Eds.), Teaching children with behavior disorders: Personal perspectives (pp. 240–270). Upper Saddle River, NJ: Merrill/Prentice Hall.
Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30, 693–696.

Maureen A. Conroy is a professor in the Department of Special Education and Disability Policy at Virginia Commonwealth University. Janine P. Stichter is an associate professor in the Department of Special Education at the University of Missouri–Columbia. Ann Daunic is an assistant scholar in the Department of Special Education at the University of Florida. Todd Haydon is a doctoral candidate in the Department of Special Education at the University of Florida.