The Author 2006. Published by Oxford University Press. All rights reserved. doi:10.1093/deafed/enl028
Journal of Deaf Studies and Deaf Education
barrier to classroom participation was the rate of classroom presentation and discussion. Rapid rates resulted in a time lag in sign language interpreting; consequently, D/HH students were not able to respond to instructor questions in a timely manner or responded inappropriately (Saur et al., 1986). Similar results were obtained by Stinson, Liu, Saur, and Long (1996), who interviewed 50 D/HH college students at the National Technical Institute for the Deaf. They divided students into two groups by their preferred mode of communication (oral only or oral and/or sign). They reported that, regardless of the preferred mode, all students perceived classroom communication as a challenge. For the oral students, difficulties with teachers included their lack of clarity of speech; students who depended in part or wholly on sign

school students at schools for the deaf. Some limited and indirect data are available about participation difficulties of school-age D/HH students in general education classrooms. Clearly, classroom participation for students who use sign language is dependent on an interpreter. Sign language interpreters in public schools may not be highly skilled, especially in rural school districts (Antia & Kreimeyer, 2001). Schick, Williams, and Bolster (1999) report that 56% of interpreters in Colorado had less than the minimal skills needed for interpreting in classrooms. A more recent study that included a national sample of 2,091 educational interpreters (Schick, Williams, & Kupermintz, 2006) showed that only 38% met the minimum standards set by the state. Thus, many interpreters may not be able to adequately interpret teacher talk. Moreover, any sign
As is true for college students, school-age students who use oral communication may find it easier to join classroom discussions than students who use sign communication (Stinson et al., 1996). However, unlike their hearing peers, they may have to visually locate the speaker during a classroom discussion and, consequently, miss much of what the speaker says. Average classroom noise levels may make it impossible for D/HH students to understand the communication of peers, and they may also have difficulty comprehending other students when several are talking at the same time (Preisler, Tvingstedt, & Ahlstrom, 2005). The degree of hearing loss, classroom acoustics, and the use of group and individual amplification all may affect these students' access to classroom communication and classroom participation.

questionnaire used by Long et al. (1991) consisted of 28 statements rated on a four-point scale and yielded ratings for four subscales. The two cognitive subscales are Understanding Teachers (UT) and Understanding Students (US); the two affective subscales are Positive Affect (PA) and Negative Affect (NA). This version of the scale was validated with students in Grades 7–10 at a school for the deaf (Long et al., 1991). An adapted, revised, but unpublished, version of the scale was used with deaf college students (Stinson et al., 1996). Given the large numbers of D/HH students in public elementary and middle schools, it is important to know whether the scale can be used with students in elementary and middle school as well as with high school students. Consequently, we modified the administration of the most recent scale so that it could also
• Content representativeness and relevance is the evaluation of whether the tasks are representative of the domain being assessed. In the case of the CPQ, an examination of the items can show whether communication difficulties in the classroom are being addressed.
• Internal structure evidence is the relationship among the parts of the assessment. In the case of the CPQ, an examination of the relationships between the different scales provides this evidence. Internal structure evidence is also provided by evidence of interitem reliability.
• Reliability over time provides information about the stability of the measured construct. We are able to present data about the relationships of CPQ scores obtained for the same students over a 3-year period.
• External structure evidence is the evaluation of

additional severe cognitive disabilities, (c) received direct or consultative services from teachers of D/HH or had an Individual Education Plan, (d) attended general education classrooms in public schools for 2 or more hours each day, and (e) were in Grades 2–8 at the beginning of the study. Students were recruited from Arizona and Colorado through state agencies and school districts. Requests to allow students to participate in the study were sent to parents of all eligible students. Initially, permission for participation was obtained for 187 students. Data were obtained annually for all participating students on a battery of teacher and student questionnaires that assessed academic performance and social behavior. One of the instruments used was the CPQ. The CPQ and academic data presented here were collected from the students in Year 3 (2003–
Table 1  Student characteristics

                                         Number  Percent
Total number of children                   136
Grade
  4                                          5        4
  5                                         26       19
  6                                         22       16
  7                                         22       16
  8                                         18       13
  9                                         29       21
  10                                        13       10
  Missing information                        1        1
Gender
  Male                                      70       52
  Female                                    66       49
Average level of hearing loss (unaided)(a)
  Unilateral loss                           18       13

in school; most of these students used a sign language interpreter in the classroom.

Instruments

Classroom Participation Questionnaire. The CPQ consists of 28 statements that a student rates on a four-point scale (1, almost never; 2, seldom; 3, often; 4, almost always). The questionnaire yields four subscale scores: Understanding Teachers (UT) (eight statements), Understanding Students (US) (five statements), Positive Affect (PA) (six statements), and Negative Affect (NA) (nine statements). To respond to the teachers who suggested that the CPQ was too long for annual administration, we developed a 16-item questionnaire with four items in each subscale. The
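The scoring just described (ratings of 1–4, four subscales, with NA reverse scored as the table footnotes indicate) can be sketched in Python. The item-to-subscale assignment below is hypothetical, since the article does not list which statements belong to which subscale; only the subscale sizes for the 28-item form, the rating scale, and the reverse scoring follow the text.

```python
# Hedged sketch of CPQ subscale scoring as described in the text.
# Item-to-subscale assignments are hypothetical; the actual CPQ item
# numbering is not given in the article.

# 28-item form: UT has 8 items, US 5, PA 6, NA 9 (per the text).
SUBSCALES = {
    "UT": list(range(1, 9)),    # 8 statements (hypothetical item numbers)
    "US": list(range(9, 14)),   # 5 statements
    "PA": list(range(14, 20)),  # 6 statements
    "NA": list(range(20, 29)),  # 9 statements, reverse scored
}

def subscale_means(ratings):
    """ratings: dict mapping item number -> rating on the 1-4 scale
    (1 almost never, 2 seldom, 3 often, 4 almost always).
    Returns the mean score per subscale; NA items are reverse scored
    (5 - rating), per the 'Reverse scoring' footnotes in the tables."""
    scores = {}
    for name, items in SUBSCALES.items():
        vals = [ratings[i] for i in items]
        if name == "NA":
            vals = [5 - v for v in vals]  # reverse scoring
        scores[name] = sum(vals) / len(vals)
    # UT/US/PA composite: mean over the 19 non-NA items (Table 4, K = 19)
    comp = SUBSCALES["UT"] + SUBSCALES["US"] + SUBSCALES["PA"]
    scores["UT/US/PA"] = sum(ratings[i] for i in comp) / len(comp)
    return scores

# Example: a student who answers "often" (3) to every statement
example = {i: 3 for i in range(1, 29)}
print(subscale_means(example))
```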
SSRS (Gresham & Elliott, 1990). The other two scales yield information on the student's social skills and problem behaviors. The Academic Competence Scale is a nine-item scale completed by teachers. Teachers rate students on a five-point scale, placing each student in the lowest 10%, the next lowest 20%, middle 40%, next highest 20%, or highest 10% on reading and mathematics compared to classmates and grade expectations. They also rate the students on motivation, intellectual functioning, classroom behavior, and parental encouragement. The Academic Competence Scale yields a standard score with a mean of 100 and an SD of 15. The SSRS was normed on a national sample of 2,109 students in Grades 3–12. Seventeen percent of the students in that sample were identified as having a disability. Coefficient alpha re-

because they only provide percentile ranks and include large numbers of children who have taken out-of-level tests. Therefore, these norms do not necessarily provide comparisons to children at the same age or grade level. These norms also do not provide comparisons for the children with minimal hearing loss who are in our sample. The scores reported on the Stanford are normal curve equivalents; these are standard scores with a mean of 50 and an SD of 21.06.

Procedure

Each student's itinerant or resource teacher of D/HH administered the CPQ in the fall semester during the regular time to meet with the student. Teachers gave the CPQ using the modifications described. A space
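The normal curve equivalents mentioned above follow a standard definition: NCE = 50 + 21.06 * z, where z is the standard-normal deviate of the percentile rank; the 21.06 scaling makes NCEs of 1, 50, and 99 coincide with those percentile ranks. A sketch of that standard formula (not study-specific code):

```python
# Converting a percentile rank to a normal curve equivalent (NCE).
# NCE = 50 + 21.06 * z, where z is the standard-normal deviate of the
# percentile. This is the standard textbook definition, not code from
# the study.
from statistics import NormalDist

def percentile_to_nce(pct):
    z = NormalDist().inv_cdf(pct / 100)
    return 50 + 21.06 * z

print(round(percentile_to_nce(50), 1))  # median percentile -> NCE 50.0
```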
for 69 students for the reading subtest, 66 for mathematics, and 70 for language.

Results

The following section provides the data for each category of validity: content representativeness, internal structure, external structure, reliability over time, and convergent and discriminant aspects of validity. The examination of convergence asks whether the task correlates highly with those traits that it should resemble; the examination of discriminant validity asks whether the task correlates lower with traits that it should not resemble. The NA subscale was originally intended to have more convergent validity with PA and more discriminant validity with respect to UT and US. In this respect, NA is not correlated high enough with PA but does have the desired lower correlations with UT and US. UT and US correlate well enough with each other, but the US with PA correlation is problematic because discriminant validity was expected for that relationship by the original authors (Garrison et al., 1994). A possible resolution to the lack of convergent/discriminant validity of the subscales is to avoid a cog-

Table 2  Intercorrelations between subscales, with 28-item form correlations above the diagonal and 16-item form correlations below the diagonal (N = 136)

Subscale    1     2     3     4
1. UT       __   .68   .57   .28
2. US      .64    __   .72   .44
3. PA      .57   .70    __   .54
4. NA(a)   .31   .44   .45    __

(a) Reverse scoring.

Table 4  Coefficient alpha reliability estimates, means, and SDs for subscales and UT/US/PA composite (N = 136)

                         Long form                 Short form
             K(a)  Alpha  Mean (SD)     K(a)  Alpha  Mean (SD)
UT            8    .89    3.3 (0.56)     4    .82    3.3 (0.60)
US            5    .81    3.1 (0.61)     4    .78    3.1 (0.64)
PA            6    .88    3.2 (0.65)     4    .79    3.2 (0.64)
NA            9    .88    3.2 (0.57)     4    .82    3.5 (0.60)
UT/US/PA     19    .93    3.2 (0.53)    16    .90    3.2 (0.54)
composite

(a) K = number of statements.

(1991) who reported interitem consistency reliabilities of .77 for UT, .73 for US, .63 for PA, and .80 for NA. Means and SDs for the subscales for the 28-item and 16-item forms are similar. Mean subscale scores
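The coefficient alpha values in Table 4 are standard internal-consistency estimates. As a minimal sketch of the computation (illustrative made-up ratings, not the study's data or the authors' code):

```python
# Cronbach's coefficient alpha for a set of items: a minimal sketch
# with illustrative data, not the study's actual ratings.

def cronbach_alpha(item_scores):
    """item_scores: list of per-item rating lists, one inner list per
    item, all the same length (one entry per student).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))"""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):  # population variance, as in the usual formula
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[s] for item in item_scores) for s in range(n)]
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Illustrative: 4 items rated 1-4 by 6 students (made-up numbers)
items = [
    [3, 4, 2, 3, 4, 3],
    [3, 4, 2, 2, 4, 3],
    [2, 4, 1, 3, 3, 3],
    [3, 3, 2, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))
```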
Table 5  Stability estimates (test–retest correlations) for subscales across changing conditions

                        Years 2–3  Years 1–2  Years 1–3
Student N                  111        121        116
Long form subscales
  UT                       .43*       .49*       .36*
  US                       .49*       .39*       .35*
  PA                       .49*       .36*       .31**
  NA                       .45*       .34*       .29**
  UT/US/PA composite       .52*       .46*       .37*
Short form subscales
  UT                       .37*       .42*       .35*
  US                       .44*       .35*       .32*
  PA                       .49*       .36*       .31**
  NA                       .40*       .22**      .20**
  UT/US/PA composite       .52*       .45*       .38*

*p < .001.

Table 6 shows the correlations of the CPQ subscales and the UT/US/PA composite with the SSRS Academic Competence Scale and the Stanford reading, mathematics, and language scores; better ear PTA; and interpreter use. Student ratings of classroom participation and teacher ratings of academic competence and Stanford scores are positively related (p < .01), with the exception of the Stanford mathematics on the 16-item form. A less severe hearing loss, as measured by better ear PTA, and not having an interpreter are associated with UT and US but not with PA ratings. Degree of hearing loss and not having an interpreter are associated with NA ratings only on the 16-item form. The composite UT/US/PA is significantly related to all the external academic measures.

To determine whether the CPQ can explain vari-
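The stability estimates in Table 5 are test–retest Pearson correlations between the same students' subscale scores in different years. A minimal sketch with made-up scores:

```python
# Test-retest (stability) correlation: Pearson r between the same
# students' subscale scores in two years. The scores below are made up.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical UT subscale means for five students in Years 2 and 3
year2 = [3.1, 2.5, 3.8, 2.9, 3.4]
year3 = [3.3, 2.8, 3.5, 2.6, 3.6]
print(round(pearson_r(year2, year3), 2))
```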
Table 6  Correlations of 28-item and 16-item CPQ with academic achievement and student characteristics

                                   UT     US     PA     NA(a)  UT/US/PA
28-item
  Academic competence (N = 132)   .28**  .32**  .21**  .27**  .30**
  Stanford reading (N = 69)       .31**  .48**  .36**  .17    .44**
  Stanford mathematics (N = 66)   .22*   .30**  .26*   .18    .29**
  Stanford language (N = 70)      .30**  .43**  .41**  .15    .43**
  Better ear PTA (N = 127)        .23**  .22**  .00    .08    .16*
  Use of interpreter (N = 133)    .21**  .15*   .08    .10    .16*
16-item
  Academic competence             .26**  .31**  .23**  .32**  .31**
  Stanford reading                .22*   .46**  .33**  .23*   .39**
  Stanford mathematics            .15    .30**  .24*   .22*   .27*
  Stanford language               .20*   .43**  .40**  .19    .40**
  Better ear PTA                  .19*   .20*   .01    .20*   .14
  Use of interpreter              .17*   .12    .06    .19*   .14

(a) Reverse scoring.
*p < .05 (one tailed). **p < .01 (one tailed).
Table 7  Predictions of academic achievement by hearing loss, interpreter use, and CPQ scores for 28-item and 16-item scales

                                  Interpreter use entered first   PTA entered first
Dependent variable                Total R²   Incremental R²       Total R²   Incremental R²
Academic competence (N = 119)
  Step 1                           .05*                            .01
  Step 2                           .06*       .01                  .06*       .05*
  Step 3, 28-item                  .13**      .07**                .13**      .07**
  Step 3, 16-item                  .15**      .09**                .15**      .09**
Stanford reading (N = 60)
  Step 1                           .16**                           .06
  Step 2                           .16**      .00                  .16**      .10**
  Step 3, 28-item                  .32**      .15**                .32**      .15**
  Step 3, 16-item                  .30**      .13**                .30**      .13**
Stanford mathematics (N = 56)
  Step 1                           .09*                            .00
  Step 2                           .09        .00                  .09        .09
  Step 3, 28-item                  .21*       .12*                 .21*       .12*
  Step 3, 16-item                  .20*       .11*                 .20*       .11*
Stanford language (N = 60)
  Step 1                           .12**                           .02
  Step 2                           .13*       .01                  .13*       .11**
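The hierarchical regression procedure summarized in Table 7 (predictors entered in steps, with the increase in R² reported at each step) can be sketched as follows. This is an illustrative ordinary least squares implementation with made-up data, not the authors' analysis code:

```python
# Hierarchical regression sketch: R^2 from OLS at each step, and the
# incremental R^2 when a new predictor is added. All data are made up.

def r_squared(X, y):
    """OLS R^2 with an intercept, solving the normal equations by
    Gaussian elimination. X: list of predictor columns; y: outcome."""
    cols = [[1.0] * len(y)] + X            # prepend intercept column
    k, n = len(cols), len(y)
    # Normal equations A b = c, where A = X'X and c = X'y
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    c = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                     # elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):           # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(b[i] * cols[i][t] for i in range(k)) for t in range(n)]
    ym = sum(y) / n
    ss_res = sum((y[t] - yhat[t]) ** 2 for t in range(n))
    ss_tot = sum((v - ym) ** 2 for v in y)
    return 1 - ss_res / ss_tot

# Made-up data: interpreter use (0/1), better ear PTA, CPQ composite
interp  = [1, 0, 1, 1, 0, 0, 1, 0]
pta     = [85, 40, 90, 70, 35, 55, 95, 30]
cpq     = [2.8, 3.4, 2.6, 3.0, 3.6, 3.2, 2.7, 3.5]
reading = [38, 55, 35, 44, 60, 50, 33, 58]

r2_step1 = r_squared([interp], reading)            # Step 1
r2_step2 = r_squared([interp, pta], reading)       # Step 2
r2_step3 = r_squared([interp, pta, cpq], reading)  # Step 3
print(round(r2_step2 - r2_step1, 3), round(r2_step3 - r2_step2, 3))
```

Because the models are nested, R² can only stay the same or grow at each step; the reported incremental R² is that growth.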
the remaining analysis was the same. The results for the 28-item and 16-item scales are shown separately in Table 7. The first two columns show the results when interpreter use is entered first; the last two columns show the results when PTA is entered first. In both pairs of columns, CPQ scores are entered as a third step. These data show that PTA alone does not appear to predict much variance in academic outcomes for this sample of students or to add to the prediction based on interpreter use. Interpreter use and PTA together significantly predict academic outcomes, while the prediction is improved further when the CPQ scores are added. The complete model (i.e., better ear PTA, interpreter use, and 28-item CPQ scores) explains 13% of variance for Academic Competence scores, 32% of the variance for reading, 21% of the variance for mathematics, and 30% of the variance for language scores. The CPQ scores explained the most additional variance (18%) for language scores. The 16-item scale explains less variance than the 28-item scale for each outcome except Academic Competence scores.

Practicality

The CPQ is fairly easy for most students to complete. Only two students between Grades 4 and 10 were reported not to be able to take the test because of their inability to comprehend the items when either read or signed to them. Teacher comments that the 28-item scale was too long provided the rationale for developing the 16-item scale.

Discussion

The purpose of this article is to provide evidence for using the CPQ to assess the participation of D/HH students in general education classrooms. The CPQ has previously been used only with high school and college students; the data presented here provide evidence that the instrument can be used with students in elementary and middle school classrooms as well. Additionally, the CPQ had previously been used with high school students at a center school; our data indicate that elementary and middle school students can
reliably rate their own participation in general education classrooms. Our sample also provides evidence that the CPQ can be used with hard of hearing students, as our sample, unlike previous samples, includes a high percentage of students with unilateral, mild, and moderate hearing losses. Because classroom participation is a concern for D/HH students in these classrooms and can impact student learning, teachers can use the results of the CPQ in addition to other assessments to assist in decision making about placement, interpreter use, and other issues. Our findings suggest that both the long (28-item) and short (16-item) forms can be useful to teachers or researchers, although they may be used for different purposes. We recommend using the long form when individual decisions are to be made (e.g., changes in classroom

reported here are lower than those reported by Long et al. (1991), who reported significant correlations of .42 for mathematics and .29 for language. Differences in the characteristics and school experiences of the two sets of participants may have contributed to the different results. The students in the Long et al. study were enrolled in a school for the deaf, in contrast to this study where the students were in public schools. Despite our results, we do not at this time recommend deleting the NA scale. Instead, we recommend that teachers use it diagnostically and consider interventions for students who complete ratings of often or almost always on the NA statements.

The CPQ scores are significantly, though modestly, correlated with academic achievement. The correlation between communication participation and student
teachers of D/HH administered the CPQ to the student. The general education teachers were not aware of the students' CPQ ratings. Thus, the general education teachers' ratings were independent of the students' ratings.

We had expected that degree of hearing loss and interpreter use would be significantly correlated with CPQ scores. Our data show that they are moderately (though significantly) correlated with UT and US but not PA or NA (on the 28-item scale). Our data also show that CPQ scores are reasonably good predictors of academic achievement when added to demographic predictors such as degree of hearing loss and interpreter use. Degree of hearing loss alone added very little to the prediction of academic achievement. Interpreter use predicted achievement in reading, mathe-

of decisions being made based on test results, and reliabilities of .80 are acceptable in many testing situations. The interitem correlations for the 16-item form provide evidence that the items chosen are more reliable than a random sample of items would be.

Although the correlations between years are significant, they are quite modest. We expected lower correlations across years because students encounter different teachers and must adjust to a new environment each year. The higher correlations across years for the composite (UT/US/PA) compared to the individual subscales are expected because of the increased number of items when three subscales are combined. However, the expectation that additional items result in higher correlations was not evident in the comparison of the stability of the 16-item subscales or composite com-
References

Reynolds, C. R., Livingston, R. B., & Willson, V. (2005). Measurement and assessment in education. Boston: Pearson.
Saur, R. E., Layne, C. A., Hurley, E. A., & Opton, K. (1986). Dimensions of mainstreaming. American Annals of the Deaf, 131, 325–330.
Schick, B., Williams, K., & Bolster, L. (1999). Skill levels of educational interpreters working in public schools. Journal of Deaf Studies and Deaf Education, 4, 144–155.
Schick, B., Williams, K., & Kupermintz, H. (2006). Look who's being left behind: Educational interpreters and access to education for deaf and hard of hearing students. Journal of Deaf Studies and Deaf Education, 11, 3–20.
Sechrest, L. (2005). Validity of measures is no simple matter. Health Services Research, 40, 1584–1604.
Shaw, J., & Jamieson, J. (1997). Patterns of classroom discourse in an integrated, interpreted elementary school setting. American Annals of the Deaf, 142, 40–47.
Stinson, M. S., Liu, Y., Saur, R. E., & Long, G. (1996). Deaf college students' perceptions of communication in mainstream classes. Journal of Deaf Studies and Deaf Education, 1, 40–51.

Received January 20, 2006; revisions received September 11, 2006; accepted October 24, 2006.