
Journal of Deaf Studies and Deaf Education Advance Access published November 17, 2006

Validity and Reliability of the Classroom Participation Questionnaire With Deaf and Hard of Hearing Students in Public Schools
Shirin D. Antia
Darrell L. Sabers
University of Arizona
Michael S. Stinson
Rochester Institute of Technology



The Classroom Participation Questionnaire (CPQ) was administered to 136 deaf or hard of hearing (D/HH) students attending general education classrooms in Grades 4–10. The CPQ is a student-rated measure that yields scores for Understanding Teachers, Understanding Students, Positive Affect, and Negative Affect. Validity and reliability of a long (28-item) and a short (16-item) form are reported. We provide evidence of (a) internal structure validity through an examination of the relationships between the subscales and an analysis of interitem reliability within each scale, (b) reliability over time by examining the scores of students over a 3-year period, and (c) external structure validity through an examination of the relationships of the CPQ with measures of teacher-rated academic competence and Stanford achievement scores. The results suggest that both the long and short form of the CPQ can be used to assess participation of D/HH students in general education classrooms.

This research was supported by the U.S. Department of Education, Office of Special Education Programs, Field Initiated Research (H324C010142). Correspondence should be sent to Shirin D. Antia, Department of Special Education, Rehabilitation and School Psychology, College of Education, University of Arizona, Tucson, AZ 85721 (e-mail: santia@u.arizona.edu).

© The Author 2006. Published by Oxford University Press. All rights reserved. doi:10.1093/deafed/enl028. For Permissions, please email: journals.permissions@oxfordjournals.org

Today, the majority of students who are deaf or hard of hearing (D/HH) and receiving special education services are educated in public schools; of these, a large number attend general education classrooms for some or all of their academic subjects. Data from the Annual Survey of D/HH Children and Youth (Karchmer & Mitchell, 2003) show that during the 2000–2001 school year, 75% of all D/HH students were reported to attend public schools and 44% spent some proportion of the school day in general education classrooms. These numbers are likely to be underestimates of the true numbers of D/HH students in general education classrooms because school districts that serve only a few D/HH students may not respond to the annual survey. A major concern expressed by researchers regarding D/HH students in general education classrooms is their ability to actively and fully participate in classroom instruction and discussion because of their communication difficulties (Garrison, Long, & Stinson, 1994; Saur, Layne, Hurley, & Opton, 1986).

For all students, the ability to communicate with teachers and peers can be a major component of academic success as teacher–student communication and student–student communication are the primary means of learning in classrooms. Students who have difficulty communicating may choose not to participate in classroom activities; nonparticipation can adversely affect their learning and eventual academic success (Long, Stinson, & Braeges, 1991). Conversely, students who think they understand the communication of teachers and classroom peers are likely to be academically engaged because they have a sense of control over the learning outcome; they are also likely to believe that they have a good chance of succeeding academically.

Several researchers have documented the specific communication difficulties of D/HH students in classrooms with hearing students. Saur et al. (1986) observed D/HH students in college classrooms over three academic quarters. They reported that a major

barrier to classroom participation was the rate of classroom presentation and discussion. Rapid rates resulted in a time lag in sign language interpreting; consequently, D/HH students were not able to respond to instructor questions in a timely manner or responded inappropriately (Saur et al., 1986). Similar results were obtained by Stinson, Liu, Saur, and Long (1996) who interviewed 50 D/HH college students at the National Technical Institute for the Deaf. They divided students into two groups by their preferred mode of communication (oral only or oral and/or sign). They reported that, regardless of the preferred mode, all students perceived classroom communication as a challenge. For the oral students, difficulties with teachers included their lack of clarity of speech; students who depended in part or wholly on sign language interpreters expressed frustration keeping up with the flow of classroom communication. Both groups also indicated that communication with hearing students was difficult, especially during class discussions, because of rapid turn taking, lack of topic coherence, and frequency of interruptions.

Classrooms in which D/HH students find it difficult to participate can lead to "an island of deafness . . . where hearing-impaired students appear passive and unresponsive" (Saur et al., 1986, p. 327). Such passivity can negatively impact academic achievement (Braeges, Stinson, & Long, 1993; Long et al., 1991). Long et al. (1991) examined the relationship between communication ease, academic engagement, and academic achievement of D/HH high school students at a school for the deaf. These researchers found significant positive correlations between student self-rated communication ease, student-rated academic engagement, and Stanford achievement scores in language, mathematics, and science. Furthermore, student-rated communication ease was a significant predictor of academic achievement. In a related study, Braeges et al. (1993) found that, for high school D/HH students, teacher-rated communication ease and teacher-rated academic engagement were significantly related. Teacher-rated academic engagement was also a significant predictor of academic achievement (as measured by the Stanford Achievement Test).

The previous studies were conducted with college-age students in mainstream college classes or high school students at schools for the deaf. Some limited and indirect data are available about participation difficulties of school-age D/HH students in general education classrooms. Clearly, classroom participation for students who use sign language is dependent on an interpreter. Sign language interpreters in public schools may not be highly skilled, especially in rural school districts (Antia & Kreimeyer, 2001). Schick, Williams, and Bolster (1999) report that 56% of interpreters in Colorado had less than the minimal skills needed for interpreting in classrooms. A more recent study that included a national sample of 2,091 educational interpreters (Schick, Williams, & Kupermintz, 2006) showed that only 38% met the minimum standards set by the state. Thus, many interpreters may not be able to adequately interpret teacher talk. Moreover, any sign language interpreter will have difficulty interpreting all the communication in the classroom and may have difficulty deciding what classroom communication to interpret (Schick et al., 1999). Consequently, interpreters may omit peer responses to teacher questions, peer questions, and other communication that would contribute to the student's understanding of classroom content (Shaw & Jamieson, 1997). Schick et al. (2006) found that areas of particular difficulty for interpreters included discourse mapping, accurately representing key vocabulary, and accurately indicating the speaker. These aspects of interpreting are crucial for students not only to understand teacher lectures but also to be able to understand the content of, and participate appropriately in, classroom discussion. Another area of interpreting that might affect the student's ability to contribute to classroom discussion is sign-to-voice interpreting, as the student's comments that are inaccurately interpreted may be perceived by peers and teachers as irrelevant or off the point. Schick et al. (2006) found that Registry of Interpreters for the Deaf certified interpreters were quite variable in their sign-to-voice interpreting scores on the Educational Interpreters Performance Assessment and that interpreters who took the elementary version of the test scored significantly less well in sign-to-voice interpreting than those who took the secondary version of the examination. Thus, it is possible that some students in general education classrooms may have interpreters who have difficulty interpreting their comments.

As is true for college students, school-age students who use oral communication may find it easier to join classroom discussions than students who use sign communication (Stinson et al., 1996). However, unlike their hearing peers, they may have to visually locate the speaker during a classroom discussion and, consequently, miss much of what the speaker says. Average classroom noise levels may make it impossible for D/HH students to understand the communication of peers, and they may also have difficulty comprehending other students when several are talking at the same time (Preisler, Tvingstedt, & Ahlstrom, 2005). The degree of hearing loss, classroom acoustics, and the use of group and individual amplification all may affect these students' access to classroom communication and classroom participation.

Because many D/HH students spend a proportion of their day in general education classrooms, it is important to obtain some measure of their classroom participation. One method is to obtain the student's self-perceptions of participation. Such information can assist teachers of D/HH students to work appropriately with general education teachers and hearing students, as well as help the D/HH students develop strategies to enhance their own participation.

Measuring Classroom Communication

To measure classroom communication, Long and colleagues at the National Technical Institute for the Deaf developed a rating scale named the Perceived Communication Ease Questionnaire (Long et al., 1991) and a subsequent version, the Classroom Communication Ease Scale (Garrison et al., 1994). These authors conceptualize communication ease as having both a cognitive and an affective component. The cognitive component includes the student's self-perception of the amount and quality of information received and expressed in the classroom. The affective component includes individual subjective responses to the communicative situation. These subjective responses may be positive feelings, such as feeling good, relaxed, or comfortable, or negative feelings, such as being frustrated, nervous, or upset. The questionnaire statements are written so students can respond to them regardless of the mode of communication. The questionnaire used by Long et al. (1991) consisted of 28 statements rated on a four-point scale and yielded ratings for four subscales. The two cognitive subscales are Understanding Teachers (UT) and Understanding Students (US); the two affective subscales are Positive Affect (PA) and Negative Affect (NA). This version of the scale was validated with students in Grades 7–10 at a school for the deaf (Long et al., 1991). An adapted, revised, but unpublished, version of the scale was used with deaf college students (Stinson et al., 1996).

Given the large numbers of D/HH students in public elementary and middle schools, it is important to know whether the scale can be used with students in elementary and middle school as well as with high school students. Consequently, we modified the administration of the most recent scale so that it could also be used with younger students. The modified scale is renamed the Classroom Participation Questionnaire (CPQ). The purpose of this article is to present reliability and validity data on the CPQ when given to school-age students in public schools and to provide a short 16-item version that would be of considerable practical use to teachers. We envision that the CPQ can be used for two purposes: one is to obtain information about an individual student's participation in the general education classroom and the second would be to measure group performance on a pre- and post-evaluation.

Determining Validity of Educational Measurements

Anastasi and Urbina (1997) suggest that "almost any information gathered in the process of developing or using a test is relevant to its validity" (p. 138). For purposes of analysis, we use the categories of relevant validity information suggested in Nitko (2001) that pertain to the CPQ. Nitko, like Anastasi and Urbina, treats interitem reliability as a separate aspect of the test, although it is a precursor of validity. Because validity information (as well as reliability) must be interpreted as being relevant to the sample of students from which the data are obtained (Sechrest, 2005), we do not generalize any of this information to any other population of students and certainly not to the CPQ as an instrument. The following is a brief summary of the relevant categories of validity evidence for the CPQ derived from Nitko.

• Content representativeness and relevance is the evaluation of whether the tasks are representative of the domain being assessed. In the case of the CPQ, an examination of the items can show whether communication difficulties in the classroom are being addressed.

• Internal structure evidence is the relationship among the parts of the assessment. In the case of the CPQ, an examination of the relationships between the different scales provides this evidence. Internal structure evidence is also provided by evidence of interitem reliability.

• Reliability over time provides information about the stability of the measured construct. We are able to present data about the relationships of CPQ scores obtained for the same students over a 3-year period.

• External structure evidence is the evaluation of whether the assessment procedure predicts current or future performance on external criteria. We expect the students' perception of their classroom participation to be influenced by their degree of hearing loss and their use of interpreters. Because the CPQ is meant to measure classroom participation, which in turn is expected to influence academic achievement, the relationship of the CPQ scores to external measures of academic achievement provides additional evidence of external structure validity. Consequently, we examine the relationships between classroom participation as measured by the CPQ, degree of hearing loss, interpreter use, and students' academic outcomes as measured by the Academic Competence Subscale of the Social Skills Rating System (SSRS; Gresham & Elliott, 1990) and the reading, language, and mathematics subtests from the Stanford Achievement Test (9th edition).

• Practicality evidence is the evaluation of the cost and efficiency of obtaining the assessment information. We are able to provide some data on how many students were unable to take the CPQ.

Methods

The data for this article were collected as part of a 5-year longitudinal study of academic and social progress of D/HH students in public schools. Students were eligible to participate if they (a) had an identified bilateral or unilateral hearing loss, (b) did not have additional severe cognitive disabilities, (c) received direct or consultative services from teachers of D/HH or had an Individual Education Plan, (d) attended general education classrooms in public schools for 2 or more hours each day, and (e) were in Grades 2–8 at the beginning of the study. Students were recruited from Arizona and Colorado through state agencies and school districts. Requests to allow students to participate in the study were sent to parents of all eligible students. Initially, permission for participation was obtained for 187 students. Data were obtained annually for all participating students on a battery of teacher and student questionnaires that assessed academic performance and social behavior. One of the instruments used was the CPQ. The CPQ and academic data presented here were collected from the students in Year 3 (2003–2004) of the 5-year study. Students responded to the CPQ in the fall semester while the academic data for this report were obtained in the spring of the same academic year. To examine reliability over time, we additionally used the CPQ data obtained with the same students in Years 1 and 2 of the study.

Participants

Data on the CPQ were obtained on 136 students active in the study during Year 3. The students not participating that year were missing for various reasons, primarily because they had moved out of the school districts and we were unable to locate them. Data on student characteristics shown in Table 1 were obtained from each student's teacher of D/HH who completed a demographic form on the student annually. Unfortunately, we were not always able to obtain complete data on each student despite repeated contacts with teachers. Data that were unavailable to (or from) teachers are recorded as missing. Students who received only monitoring or consultation services from the teacher of D/HH were most likely to have missing data.

Table 1 shows that the sample of students is quite evenly distributed across grades except for the youngest and oldest students. Almost equal numbers of males and females are included. Hearing loss is presented as pure tone average (PTA) in the better ear. A majority of these students had mild and moderate hearing loss as would be expected in public school

classrooms (Karchmer & Mitchell, 2003), and a substantial percentage of students had a unilateral hearing loss. Only about 20% of students used sign language in school; most of these students used a sign language interpreter in the classroom.

Table 1  Student characteristics

                                           Number   Percent
Total number of children                     136
Grade
  4                                            5        4
  5                                           26       19
  6                                           22       16
  7                                           22       16
  8                                           18       13
  9                                           29       21
  10                                          13       10
  Missing information                          1        1
Gender
  Male                                        70       52
  Female                                      66       49
Average level of hearing loss (unaided)^a
  Unilateral loss                             18       13
  0–20 dB (high frequency)                     4        3
  21–40 dB (mild)                             37       27
  41–60 dB (moderate)                         26       19
  61–90 dB (severe)                           26       19
  91–100+ dB (profound)                       16       12
  Missing information                          9        7
Student's home language
  English                                    107       79
  Spanish                                     14       10
  ASL                                          4        3
  Other                                        4        3
  Missing information                          7        5
Mode of communication in school
  Spoken                                     105       77
  Spoken and signed                           18       13
  Sign only                                   10        7
  Missing information                          3        2
Interpreter use
  No                                         107       79
  Yes                                         26       19
  Missing information                          3        2
Hours/day in regular classroom
  1–<3                                        13       10
  3–5                                         36       26
  5+                                          78       57
  Missing information                          9        7

Note. ASL, American Sign Language.
^a Average of 500, 1,000, and 2,000 Hz in better ear.

Instruments

Classroom Participation Questionnaire. The CPQ consists of 28 statements that a student rates on a four-point scale (1, almost never; 2, seldom; 3, often; 4, almost always). The questionnaire yields four subscale scores: Understanding Teachers (UT) (eight statements), Understanding Students (US) (five statements), Positive Affect (PA) (six statements), and Negative Affect (NA) (nine statements). To respond to the teachers who suggested that the CPQ was too long for annual administration, we developed a 16-item questionnaire with four items in each subscale. The 28 items, with the selected 16 items for the short version, are provided in Appendix A. As mentioned earlier, UT and US were envisioned as cognitive components, whereas PA and NA were envisioned as affective components.

To develop the 16-item scale as part of this study, we requested M. Stinson, one of the authors of the original Perceived Communication Ease Questionnaire and the Classroom Communication Ease Scale, to select four items for each subscale. The intent was to best represent the constructs envisioned by the original authors. In this paper we present the validity evidence for the 28-item (long) and 16-item (short) forms.

The modifications made to the CPQ to adapt its use for younger students were primarily in presentation and administration. The test was formatted to be in reasonably large print, and care was taken that items were not crowded together on a page. Students took the test individually (i.e., not in a group situation as with the original test). They could take the test in more than one sitting as necessary. They could read the items independently or have the items read or signed to them by their itinerant or resource teacher of D/HH. These teachers could give explanations of words that were not understood by the student but were instructed to cease administering the questionnaire if the student needed explanation of more than two or three words.
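Scoring of the CPQ (described later in the Results: each subscale score is the mean of its item ratings, with NA reverse-scored so that high scores are desirable on every subscale) can be sketched as follows. This is an illustration, not part of the original article; the caller supplies the item-to-subscale groupings, since the actual assignment of the 28 items is given in Appendix A.

```python
# Sketch of CPQ subscale scoring. Illustrative only: items are rated 1-4,
# a subscale score is the mean of its item ratings, and NA items are
# reverse-scored (rating r becomes 5 - r) so high scores are desirable.

def subscale_score(ratings, reverse=False):
    """Mean of item ratings on a 1-4 scale; optionally reverse-scored."""
    if reverse:
        ratings = [5 - r for r in ratings]
    return sum(ratings) / len(ratings)

def score_cpq(responses):
    """responses: dict mapping subscale name to its list of 1-4 item ratings."""
    return {
        "UT": subscale_score(responses["UT"]),
        "US": subscale_score(responses["US"]),
        "PA": subscale_score(responses["PA"]),
        "NA": subscale_score(responses["NA"], reverse=True),  # reverse-scored
    }
```

For example, a student who answers "4, almost always" to every NA statement would receive an NA score of 1.0 after reversal, the least desirable score on that subscale.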

Academic Competence Scale of the SSRS. The Academic Competence Scale is one of three scales of the SSRS (Gresham & Elliott, 1990). The other two scales yield information on the student's social skills and problem behaviors. The Academic Competence Scale is a nine-item scale completed by teachers. Teachers rate students on a five-point scale, placing each student in the lowest 10%, the next lowest 20%, middle 40%, next highest 20%, or highest 10% on reading and mathematics compared to classmates and grade expectations. They also rate the students on motivation, intellectual functioning, classroom behavior, and parental encouragement. The Academic Competence Scale yields a standard score with a mean of 100 and an SD of 15. The SSRS was normed on a national sample of 2,109 students in Grades 3–12. Seventeen percent of the students in that sample were identified as having a disability. Coefficient alpha reliability for the Academic Competence Scale was .95 and test–retest reliability over a 4-week period was .93 in the standardization sample.

Stanford Achievement Test—9th edition. The Stanford Achievement Test (Harcourt Brace Educational Measurement, 1997) is a standardized, norm-referenced assessment including reading, mathematics, and language that was administered to Arizona students in Grades 2–9 each spring as required for state of Arizona accountability purposes. Colorado students did not take the Stanford and are therefore not included in analyses that included this test. The Stanford reading subtest measures reading vocabulary and comprehension; the mathematics subtest measures problem solving and mathematics procedures; and the language subtest includes prewriting, composing, and editing. The Stanford Achievement Test was normed on approximately 250,000 students from 1,000 school districts across the United States. Students with disabilities or limited English proficiency who received instruction within a regular education classroom composed 5.5% of the norming sample. The Gallaudet Research Institute conducts a norming study and publishes normative data on each version of the Stanford with D/HH students. However, because the sample for this study included primarily students in general education classrooms, we have used the norms for the general population, not the D/HH population. The norms for D/HH students do not provide a good comparison for our sample because they only provide percentile ranks and include large numbers of children who have taken out-of-level tests. Therefore, these norms do not necessarily provide comparisons to children at the same age or grade level. These norms also do not provide comparisons for the children with minimal hearing loss who are in our sample. The scores reported on the Stanford are normal curve equivalents; these are standard scores with a mean of 50 and an SD of 21.06.

Procedure

Each student's itinerant or resource teacher of D/HH administered the CPQ in the fall semester during the regular time to meet with the student. Teachers gave the CPQ using the modifications described. A space was provided for teacher notes, so that teachers could inform us if the student's responses seemed invalid for any reason.

All students completed the ratings in reference to their general education classrooms. Middle and high school students were asked to think of their language arts or social studies classes. If none of these were appropriate, they were asked to choose another class including both lecture and class discussion. Students who used interpreters were instructed to answer the questions about communication with other students and with teachers taking into account communication through the interpreter.

The SSRS Academic Competence Scale was completed by each student's general education teacher in the spring semester. For elementary students the classroom teacher completed the scale, whereas for secondary students the scale was completed by the language arts teacher with input from other teachers as needed. Data were obtained on 130 of the 136 students for both the SSRS Academic Competence Scale and the CPQ.

The Stanford achievement scores on reading, mathematics, and language were obtained from student records. These tests were administered once a year (during the spring) only for students in Arizona (the students in Colorado did not take the Stanford Achievement Test). There were no makeup days; therefore, scores were not available for students who were absent on the testing days. Data were available

for 69 students for the reading subtest, 66 for mathematics, and 70 for language.

Results

The following section provides the data for each category of validity: content representativeness, internal structure, external structure, reliability over time, and practicality.

Content Representativeness

The best evidence of content representativeness is the list of the items in Appendix A. The potential user of the CPQ should examine this set of items to determine whether the items are appropriate for measurement of the construct of interest. A further examination of the content is included later when we discuss the determination of total scores.

Internal Structure Evidence: Results and Interpretation

We present first the relationships between the four subscales of the CPQ (UT, US, PA, and NA). These relationships help determine whether the components of the CPQ work together so that each component contributes positively toward assessing the relevant construct. We then present data for interitem consistency. For the NA scale, the scoring is reversed so that for all scales high scores are desirable.

Subscale correlations. Table 2 presents the correlations between pairs of subscales. Because the correlation obtained between two subscales depends on the reliability of each subscale, it is easier to examine the correlations if they are corrected for attenuation (Nunnally & Bernstein, 1994). The correction for attenuation is intended to estimate the correlation that would be obtained if each subscale were perfectly reliable, and thus, the corrected correlations are an attempt to directly infer the degree to which the subscales measure the same construct. The correction for attenuation used to adjust the correlations in Table 3 is applied by dividing an obtained correlation by the square root of the product of the reliabilities of the two variables correlated (see Nunnally & Bernstein, 1994, for the formula and the explanation of the approach). The correlation between two perfectly reliable measures would best describe the relationship between the two constructs being compared. Table 3 presents the corrected-for-attenuation correlations between pairs of subscales for the 28-item and the 16-item forms.

Table 2  Intercorrelations between subscales with 28-item form correlations above the diagonal and 16-item form correlations below the diagonal (N = 136)

Subscale     1      2      3      4
1. UT        —     .68    .57    .28
2. US       .64     —     .72    .44
3. PA       .57    .70     —     .54
4. NA^a     .31    .44    .45     —

^a Reverse scoring.
p < .001 for all correlations.

Table 3  Corrected-for-attenuation intercorrelations between subscales with 28-item form correlations above the diagonal and 16-item form correlations below the diagonal (N = 136)

Subscale     1      2      3      4
1. UT        —     .80    .64    .32
2. US       .80     —     .64    .52
3. PA       .71    .89     —     .61
4. NA^a     .38    .55    .56     —

^a Reverse scoring.
p < .001 for all correlations.

The data in Tables 2 and 3 address the issue of how the tasks help define the construct of classroom participation. Because of some differences in the reliability coefficients described above, the data in Table 3 are the better indicators of task intercorrelations of the parts of the construct. As shown in Table 3, the correlations among the UT, US, and PA subscales are substantially higher than the correlations involving NA. The highest relationship occurs between US and PA, although US was intended to measure a cognitive component and PA was intended to measure an affective component of communication. Thus, for our data, the cognitive and affective aspects of the CPQ are not well differentiated.
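As a worked illustration of the correction for attenuation (a sketch, not part of the original article): on the 16-item form, the observed UT–US correlation of .64 combined with the coefficient alpha reliabilities of .82 (UT) and .78 (US) reported in Table 4 reproduces the corrected value of .80 shown in Table 3.

```python
from math import sqrt

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Estimate the correlation between two measures if each were perfectly
    reliable: divide the observed correlation by the square root of the
    product of the two reliabilities (Nunnally & Bernstein, 1994)."""
    return r_xy / sqrt(r_xx * r_yy)

# 16-item form, UT-US pair: observed r = .64, alpha(UT) = .82, alpha(US) = .78
corrected = correct_for_attenuation(0.64, 0.82, 0.78)
print(round(corrected, 2))  # -> 0.8, matching the UT-US entry in Table 3
```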

convergent and discriminant aspects of validity. The examination of convergence asks whether the task correlates highly with those traits that it should resemble; the examination of discriminant validity asks whether the task correlates lower with traits that it should not resemble. The NA subscale was originally intended to have more convergent validity with PA and more discriminant validity with respect to UT and US. In this respect, NA does not correlate highly enough with PA but does have the desired lower correlations with UT and US. UT and US correlate well with each other, but the US with PA correlation is problematic because discriminant validity was expected for that relationship by the original authors (Garrison et al., 1994).

A possible resolution to the lack of convergent/discriminant validity of the subscales is to avoid a cognitive versus affective interpretation of the data. Thus, it is not appropriate to report averaged scores for cognitive (UT, US) and affective (PA, NA) communication or to report a total score. However, each subscale can be considered a separate construct. Given the high correlations between UT, US, and PA, an additional solution is to have a summative score for the three convergent subscales (UT, US, and PA) and use the NA subscale as an additional measure. Therefore, the following tables include reliability and validity data on the composite, UT/US/PA, as well as the individual subscales.

Interitem consistency. Interitem consistency (Anastasi & Urbina, 1997) measures the degree to which the items in a set measure the same construct, and we used coefficient alpha (Cronbach, 2004) for this assessment. Interitem consistency reliability was examined separately for the UT/US/PA composite and each subscale of the 28-item and 16-item forms. Table 4 presents these data, obtained from the total sample of 136 students. Reliability for each subscale of the 28-item scale is higher than the reliability for the corresponding subscale of the 16-item scale, as is expected due to the increased number of statements. The reliability coefficients in Table 4 for the 28-item test range from .81 to .89 and on the 16-item test from .78 to .82. The reliabilities for the subscales of both the 28-item and 16-item scales of the CPQ from this sample are higher than those reported by Long et al. (1991), who reported interitem consistency reliabilities of .77 for UT, .73 for US, .63 for PA, and .80 for NA.

Table 4  Coefficient alpha reliability estimates, means, and SDs for subscales and UT/US/PA composite (N = 136)

                         Long form                   Short form
                     K(a)  Alpha  Mean (SD)      K(a)  Alpha  Mean (SD)
UT                     8    .89   3.3 (0.56)       4    .82   3.3 (0.60)
US                     5    .81   3.1 (0.61)       4    .78   3.1 (0.64)
PA                     6    .88   3.2 (0.65)       4    .79   3.2 (0.64)
NA                     9    .88   3.2 (0.57)       4    .82   3.5 (0.60)
UT/US/PA composite    19    .93   3.2 (0.53)      16    .90   3.2 (0.54)

(a) K = number of statements.

Means and SDs for the subscales on the 28-item and 16-item forms are similar. Mean subscale scores are calculated by summing individual item ratings and dividing by the number of items in the subscale, so the number of statements does not change the magnitude of the scores. Note that for the NA scale the scoring is reversed to make scoring comparable across all scales.

Reliability Over Time

For measuring reliability over time (stability), we used the test–retest method over 1- and 2-year intervals for those students who had data available for those years. These correlations are presented in Table 5. The stability estimates presented in Table 5 are, as expected, considerably lower than the intercorrelations between subscales reported in Tables 2 and 3, probably because of the changing situations occurring as the students moved into different classrooms with different classmates and teachers each year. We were unable to give the CPQ to students more than once a year to get a better measure of test–retest reliability under more consistent conditions.

External Structure Evidence

Relationships of the CPQ to other meaningful variables provide external structure evidence of validity. This category of evidence allows the reader to examine whether the tasks exhibit meaningful relationships with variables that might be used to better understand the construct measured by the CPQ. To examine the
external structure evidence, we correlated each of the four subscales and the composite of UT/US/PA with teacher ratings of the SSRS Academic Competence Scale and with the Stanford test scores. We also correlated the CPQ scores with two demographic variables that we expected to be related to the students' perceptions of classroom participation: better ear PTA and whether the student used an interpreter.

Table 5  Stability estimates (test–retest correlations) for subscales across changing conditions

                        Years 2–3   Years 1–2   Years 1–3
Student N                  111         121         116
Long form subscales
  UT                       .43*        .49*        .36*
  US                       .49*        .39*        .35*
  PA                       .49*        .36*        .31**
  NA                       .45*        .34*        .29**
  UT/US/PA composite       .52*        .46*        .37*
Short form subscales
  UT                       .37*        .42*        .35*
  US                       .44*        .35*        .32*
  PA                       .49*        .36*        .31**
  NA                       .40*        .22**       .20**
  UT/US/PA composite       .52*        .45*        .38*

*p < .001. **p < .01.

Table 6 shows the correlations of the CPQ subscales and the UT/US/PA composite with the SSRS Academic Competence Scale and the Stanford reading, mathematics, and language scores; better ear PTA; and interpreter use. Student ratings of classroom participation and teacher ratings of academic competence and Stanford scores are positively related (p < .01), with the exception of the Stanford mathematics on the 16-item form. A less severe hearing loss (as measured by better ear PTA) and not having an interpreter are associated with UT and US but not with PA ratings. Degree of hearing loss and not having an interpreter are associated with NA ratings only on the 16-item form. The composite UT/US/PA is significantly related to all the external academic measures.

To determine whether the CPQ can explain variance in academic achievement beyond explanations based on demographic variables such as degree of hearing loss and use of interpreters, we analyzed the data using sequential multiple regression. Two analyses were conducted: In the first analysis, interpreter use was entered as an independent variable in Step 1, better ear PTA and interpreter use were entered as independent variables in Step 2, and finally the CPQ composite score (UT/US/PA) and NA were entered additionally in Step 3. In the second analysis, PTA was entered as an independent variable in Step 1;
Table 6  Correlations of 28-item and 16-item CPQ with academic achievement and student characteristics

                                  UT      US      PA      NA(a)   UT/US/PA
28-item
  Academic competence (N = 132)   .28**   .32**   .21**   .27**   .30**
  Stanford reading (N = 69)       .31**   .48**   .36**   .17     .44**
  Stanford mathematics (N = 66)   .22*    .30**   .26*    .18     .29**
  Stanford language (N = 70)      .30**   .43**   .41**   .15     .43**
  Better ear PTA (N = 127)        .23**   .22**   .00     .08     .16*
  Use of interpreter (N = 133)    .21**   .15*    .08     .10     .16*
16-item
  Academic competence             .26**   .31**   .23**   .32**   .31**
  Stanford reading                .22*    .46**   .33**   .23*    .39**
  Stanford mathematics            .15     .30**   .24*    .22*    .27*
  Stanford language               .20*    .43**   .40**   .19     .40**
  Better ear PTA                  .19*    .20*    .01     .20*    .14
  Use of interpreter              .17*    .12     .06     .19*    .14

(a) Reverse scoring.
*p < .05 (one tailed). **p < .01 (one tailed).
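The entries in Tables 5 and 6 are product-moment correlations; for readers who want to reproduce this kind of analysis, a minimal pure-Python Pearson r looks like the sketch below. The scores are invented for illustration, not taken from the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented example: five students' CPQ composite means vs. reading scores.
cpq = [2.8, 3.1, 3.4, 2.5, 3.9]
reading = [610, 640, 660, 600, 690]
print(round(pearson_r(cpq, reading), 2))
```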
Table 7  Predictions of academic achievement by hearing loss, interpreter use, and CPQ scores for 28-item and 16-item scales

                                  Step 1 = interpreter use     Step 1 = PTA
Dependent variable                Total R²   Incremental R²    Total R²   Incremental R²
Academic competence (N = 119)
  Step 1(a)                       .05*                         .01
  Step 2(b)                       .06*       .01               .06*       .05*
  28-item Step 3(c)               .13**      .07**             .13**      .07**
  16-item Step 3(c)               .15**      .09**             .15**      .09**
Stanford reading (N = 60)
  Step 1                          .16**                        .06
  Step 2                          .16**      .00               .16**      .10**
  28-item Step 3                  .32**      .15**             .32**      .15**
  16-item Step 3                  .30**      .13**             .30**      .13**
Stanford mathematics (N = 56)
  Step 1                          .09*                         .00
  Step 2                          .09        .00               .09        .09
  28-item Step 3                  .21*       .12*              .21*       .12*
  16-item Step 3                  .20*       .11*              .20*       .11*
Stanford language (N = 60)
  Step 1                          .12**                        .02
  Step 2                          .13*       .01               .13*       .11**
  28-item Step 3                  .30**      .18**             .30**      .18**
  16-item Step 3                  .28**      .16**             .28**      .16**

(a) Step 1 includes interpreter use in columns 1 and 2; better ear PTA in columns 3 and 4.
(b) Step 2 includes interpreter use and better ear PTA.
(c) Step 3 includes interpreter use, better ear PTA, and CPQ scores for either the 28-item or 16-item scale, as indicated.
*p < .05. **p < .01.
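The sequential (hierarchical) regression behind Table 7 fits nested ordinary-least-squares models and reports the gain in R² as each block of predictors is added. A self-contained sketch, using made-up data rather than the study's, is:

```python
def ols_r2(X, y):
    """R-squared from ordinary least squares with an intercept,
    solved via the normal equations (Gaussian elimination)."""
    rows = [[1.0] + list(x) for x in X]  # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):  # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for m in range(i, k):
                A[j][m] -= f * A[i][m]
            c[j] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    y_hat = [sum(bi * ri for bi, ri in zip(b, r)) for r in rows]
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Made-up data: Step 1 enters interpreter use alone; Step 2 adds PTA.
interp = [0, 1, 0, 1, 1, 0]           # interpreter use (0/1)
pta = [20, 75, 35, 90, 60, 25]        # better ear PTA in dB
reading = [650, 590, 630, 575, 605, 645]
r2_step1 = ols_r2([[i] for i in interp], reading)
r2_step2 = ols_r2(list(zip(interp, pta)), reading)
print(round(r2_step2 - r2_step1, 3))  # incremental R-squared for PTA
```

Because the models are nested, R² can only stay the same or rise at each step; the incremental R² columns in Table 7 are exactly these differences.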
the remaining analysis was the same. The results for the 28-item and 16-item scales are shown separately in Table 7. The first two columns show the results when interpreter use is entered first; the last two columns show the results when PTA is entered first. In both pairs of columns, CPQ scores are entered as a third step. These data show that PTA alone does not appear to predict much variance in academic outcomes for this sample of students or to add to the prediction based on interpreter use. Interpreter use and PTA together significantly predict academic outcomes, and the prediction is improved further when the CPQ scores are added. The complete model (i.e., better ear PTA, interpreter use, and 28-item CPQ scores) explains 13% of the variance for Academic Competence scores, 32% of the variance for reading, 21% of the variance for mathematics, and 30% of the variance for language scores. The CPQ scores explained the most additional variance (18%) for language scores. The 16-item scale explains less variance than the 28-item scale for each outcome except Academic Competence scores.

Practicality

The CPQ is fairly easy for most students to complete. Only two students between Grades 4 and 10 were reported as unable to take the test because of their inability to comprehend the items when either read or signed to them. Teacher comments that the 28-item scale was too long provided the rationale for developing the 16-item scale.

Discussion

The purpose of this article is to provide evidence for using the CPQ to assess the participation of D/HH students in general education classrooms. The CPQ has previously been used only with high school and college students; the data presented here provide evidence that the instrument can be used with students in elementary and middle school classrooms as well. Additionally, the CPQ had previously been used with high school students at a center school; our data indicate that elementary and middle school students can
reliably rate their own participation in general education classrooms. Our sample also provides evidence that the CPQ can be used with hard of hearing students, as our sample, unlike previous samples, includes a high percentage of students with unilateral, mild, and moderate hearing losses. Because classroom participation is a concern for D/HH students in these classrooms and can impact student learning, teachers can use the results of the CPQ, in addition to other assessments, to assist in decision making about placement, interpreter use, and other issues. Our findings suggest that both the long (28-item) and short (16-item) forms can be useful to teachers or researchers, although they may be used for different purposes. We recommend using the long form when individual decisions are to be made (e.g., changes in classroom placement for a student). However, teachers might be able to use the short form if time is a factor, for example, when teachers have to read the scale to the students. The short form would also be of interest to researchers who are interested in examining primarily group data. Some possible uses for researchers might be to measure change after specific interventions to increase communication participation, to examine students' perceptions of differences in communication in different educational settings, or to compare students' self-perceptions with teachers' perceptions of student communication.

Our data indicate that items on the CPQ do not measure separately the cognitive and affective components of communication in classrooms as envisioned by the authors of the original scale. The results show a high correlation between UT, US, and PA, whereas NA does not correlate highly with any of the other subscales, perhaps because responding to the negatively criterial NA statements differs from responding to the positively criterial statements in the other three subscales. We therefore recommend using a composite score (UT/US/PA) and a separate score for NA. One reason for the high correlations between US and PA may be that both subscales include several items about group discussion.

In this study, NA does not correlate significantly with any of the Stanford scores on the 28-item scale, but it does correlate significantly with reading and mathematics on the 16-item scale. The correlations reported here are lower than those reported by Long et al. (1991), who reported significant correlations of .42 for mathematics and .29 for language. Differences in the characteristics and school experiences of the two sets of participants may have contributed to the different results. The students in the Long et al. study were enrolled in a school for the deaf, in contrast to this study where the students were in public schools. Despite our results, we do not at this time recommend deleting the NA scale. Instead, we recommend that teachers use it diagnostically and consider interventions for students who complete ratings of often or almost always on the NA statements.

The CPQ scores are significantly, though modestly, correlated with academic achievement. The correlation between communication participation and student achievement lends support to the idea that students who participate and who feel positively about their participation are more likely to do well academically, although, of course, a cause-and-effect relationship cannot be established. It is interesting that each of the CPQ subscale scores is positively and significantly correlated with teacher-rated academic competence. Braeges et al. (1993) also reported significant positive correlations between teacher-rated academic engagement and student-rated communication ease for students in Grades 7–10 attending a school for the deaf. Their data are not exactly comparable with ours, as they used a total score of all four subscales and reported correlations of .48 for mathematics and .32 for language; our data (on the 28-item scale) show a correlation of .29 for mathematics and .43 for language with the composite score of UT/US/PA. Of course, the samples in the two studies are quite different: their sample consisted of deaf high school students attending a center school, whereas our sample can be characterized as mostly hard of hearing students in general education classes. Also, our sample of students is likely to have a much wider range of academic achievement than those in their study. Our data, therefore, extend their findings and show a relationship between teacher-rated academic competence and student-rated classroom communication for younger students in general education classrooms. It should be remembered that the teachers who rated academic competence were the students' general education teachers, whereas the
teachers of D/HH administered the CPQ to the student. The general education teachers were not aware of the students' CPQ ratings. Thus, the general education teachers' ratings were independent of the students' ratings.

We had expected that degree of hearing loss and interpreter use would be significantly correlated with CPQ scores. Our data show that they are moderately (though significantly) correlated with UT and US but not PA or NA (on the 28-item scale). Our data also show that CPQ scores are reasonably good predictors of academic achievement when added to demographic predictors such as degree of hearing loss and interpreter use. Degree of hearing loss alone added very little to the prediction of academic achievement. Interpreter use predicted achievement in reading, mathematics, and language, as well as teacher-rated academic competence. Whereas hearing loss and interpreter use together accounted for between 6% and 16% of the variance on academic outcomes, adding the CPQ scores allowed us to account for between 13% and 32% of the variance on academic outcomes. Although these predictions are significant, they are still modest, and clearly several other factors, not measured here, affect outcomes. However, our data make clear that an assessment of students' perceptions of classroom participation is valuable and that assumptions about the ability or ease of student participation in the classroom should not be based solely on student demographic variables.

An examination of Table 6 shows that US consistently has the highest correlations with all the academic outcome measures. The relationship between US and academic achievement points to the importance of understanding peers as part of classroom learning. As noted by Stinson et al. (1996), college-age D/HH students indicated that communication with peers was a challenge. Clearly, the same challenge might exist for school-age D/HH students.

Although the interitem reliabilities for both the 28- and 16-item scales are respectable, they are probably not high enough to make important decisions about individuals; however, important decisions should not be made based on a single test. The coefficients are high enough for most group decisions. Reynolds, Livingston, and Willson (2005) suggest that judgments about reliability should be made in the context of decisions being made based on test results, and reliabilities of .80 are acceptable in many testing situations. The interitem correlations for the 16-item form provide evidence that the items chosen are more reliable than a random sample of items would be.

Although the correlations between years are significant, they are quite modest. We expected lower correlations across years because students encounter different teachers and must adjust to a new environment each year. The higher correlations across years for the composite (UT/US/PA) compared with the individual subscales are expected because of the increased number of items when three subscales are combined. However, the expectation that additional items result in higher correlations was not evident when the stability of the 16-item subscales or composite was compared with that of the 28-item subscales. Again, the similarity of the stability estimates for these two forms is evidence that the items chosen for the short form are more stable than a random sample of items would be.

Teachers can use the individual scales of the CPQ diagnostically to alert them to a student's problems in classroom communication that may result in academic difficulties. The results of the CPQ might help them determine when and how to intervene. For example, if students rate themselves low in understanding their peers, the teacher of D/HH could observe the classroom to identify specific group discussion practices that might be modified. The teacher of D/HH might also provide information to classroom teachers and hearing classmates to increase their sensitivity to classroom communication difficulties that the D/HH student might have. The teacher could use specific CPQ responses to engage with the student in developing self-advocacy strategies such as requesting clarification during group discussions or requesting a quiet space for group projects.

The modifications made to the questionnaire administration allowed students who were not able to read fluently to complete the self-rating. Teachers did not report that students had difficulty understanding the questionnaire items. However, we did not obtain data on how many students read the items independently. Future research on the use of the CPQ should examine carefully the effects of test-taking modifications.
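The expectation that a longer scale should be more reliable can be formalized with the Spearman–Brown prophecy formula, a standard psychometric result that the authors do not apply directly but which underlies the reasoning above:

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability after lengthening (factor > 1) or shortening
    (factor < 1) a scale, assuming added items of comparable quality."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Doubling a scale with reliability .70 projects to about .82;
# halving it projects to about .54.
print(round(spearman_brown(0.70, 2), 2))
print(round(spearman_brown(0.70, 0.5), 2))
```

That the 16-item form's subscales lose so little reliability and stability relative to the 28-item form therefore supports the point made here: the retained items behave better than a random subset would.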
In conclusion, the CPQ may be a useful tool to examine participation in general education classrooms for elementary and middle school students as well as high school students. The instrument could be used diagnostically by teachers to intervene in difficult classroom communication situations where necessary. However, we do not advocate using the CPQ alone to make decisions about educational placement.

Appendix A

CPQ Items Arranged by Subscale

Understanding Teachers

My teacher understands me
*I understand my teacher
I have enough time to answer the teachers' questions
I understand the homework assignments my teacher gives me
I understand when my teacher tells me what to study for a test
*I understand my teacher when she gives me homework assignments
*I understand my teacher when she answers other students' questions
*I understand my teacher when she tells me what to study for a test

Understanding Students

The other students in class understand me
*I understand the other students in class
*I join in class discussions
*I understand other students during group discussions
*I understand other students when they answer my teacher's questions

Positive Affect

*I feel good about how I communicate in class
I feel relaxed when I talk to other students
*I feel relaxed when I talk to my teacher
I feel relaxed in group discussions
*I feel happy in group discussions in class
*I feel good in group discussions in class

Negative Affect

I feel lonely because I cannot understand other students
*I feel frustrated because it is difficult for me to communicate with other students
*I get upset because other students cannot understand me
*I get upset because my teacher cannot understand me
I feel nervous when I talk to other students
I feel nervous when I talk to my teacher
I feel nervous in group discussions in class
I feel frustrated in group discussions in class
*I feel unhappy in group discussions in class

*Items included in the 16-item short scale.

References

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice-Hall.
Antia, S. D., & Kreimeyer, K. H. (2001). The role of interpreters in inclusive classrooms. American Annals of the Deaf, 146, 355–365.
Braeges, J., Stinson, M. S., & Long, G. (1993). Teachers' and deaf students' perceptions of communication ease and engagement. Rehabilitation Psychology, 38, 235–247.
Cronbach, L. J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64, 391–418.
Garrison, W., Long, G., & Stinson, M. S. (1994). The classroom communication ease scale. American Annals of the Deaf, 139, 132–140.
Gresham, F. M., & Elliott, S. N. (1990). Social skills rating system. Circle Pines, MN: American Guidance Service.
Harcourt Brace Educational Measurement. (1997). Stanford achievement test series, ninth edition: Spring multilevel norms book. San Antonio, TX: Author.
Karchmer, M., & Mitchell, R. E. (2003). Demographic and achievement characteristics of deaf and hard-of-hearing students. In M. Marschark & P. E. Spencer (Eds.), Oxford handbook of deaf studies, language and education (pp. 21–37). New York: Oxford University Press.
Long, G., Stinson, M. S., & Braeges, J. (1991). Students' perception of communication ease and engagement: How they relate to academic success. American Annals of the Deaf, 136, 414–421.
Nitko, A. J. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
Preisler, G., Tvingstedt, A., & Ahlstrom, M. (2005). Interviews with deaf children about their experiences using cochlear implants. American Annals of the Deaf, 150, 260–267.
Reynolds, C. R., Livingston, R. B., & Willson, V. (2005). Measurement and assessment in education. Boston: Pearson.
Saur, R. E., Layne, C. A., Hurley, E. A., & Opton, K. (1986). Dimensions of mainstreaming. American Annals of the Deaf, 131, 325–330.
Schick, B., Williams, K., & Bolster, L. (1999). Skill levels of educational interpreters working in public schools. Journal of Deaf Studies and Deaf Education, 4, 144–155.
Schick, B., Williams, K., & Kupermintz, H. (2006). Look who's being left behind: Educational interpreters and access to education for deaf and hard of hearing students. Journal of Deaf Studies and Deaf Education, 11, 3–20.
Sechrest, L. (2005). Validity of measures is no simple matter. Health Services Research, 40, 1584–1604.
Shaw, J., & Jamieson, J. (1997). Patterns of classroom discourse in an integrated, interpreted elementary school setting. American Annals of the Deaf, 142, 40–47.
Stinson, M. S., Liu, Y., Saur, R. E., & Long, G. (1996). Deaf college students' perceptions of communication in mainstream classes. Journal of Deaf Studies and Deaf Education, 1, 40–51.

Received January 20, 2006; revisions received September 11, 2006; accepted October 24, 2006.