JOURNAL OF LITERACY RESEARCH, 38(1), 1–35. Copyright © 2006, Lawrence Erlbaum Associates, Inc. DOI: 10.1207/s15548430jlr3801_1

Contributions of Student Questioning and Prior Knowledge to Construction of Knowledge From Reading Information Text
Ana Taboada and John T. Guthrie
Department of Human Development, University of Maryland

This study investigated the relationship of student-generated questions and prior knowledge with reading comprehension. A questioning hierarchy was developed to describe the extent to which student-generated questions seek different levels of conceptual understanding. Third- and fourth-grade students (N = 360) posed questions that were related to their prior knowledge and reading comprehension, measured as conceptual knowledge built from text. The results indicated that student questioning accounted for a significant amount of variance in students' reading comprehension, after accounting for the contribution of prior knowledge. Furthermore, low- and high-level questions were differentially associated with low and high levels of conceptual knowledge gained from text, showing a clear alignment between questioning levels and reading comprehension levels.

An active learner has been described as inquisitive and curious: someone who asks a substantial number of questions (Graesser, McMahen, & Johnson, 1994). Students who compose and answer their own questions are perceived as playing an active, initiating role in the learning process (Collins, Brown, & Newman, 1990; King, 1994; Palincsar & Brown, 1984; Singer, 1978). They seek information that is related to an existing knowledge structure (Olson, Duffy, & Mack, 1985). Student questioning, defined as self-generated requests for information within a topic or domain, relies on assessing what is known and what is unknown about a topic and attempting to expand existing knowledge of the topic (Taboada & Guthrie, 2004).
Correspondence should be addressed to Ana Taboada, University of Maryland, 3304 Benjamin Building, College Park, MD 20742. E-mail: ataboada@umd.edu


In reading, student questioning is represented as a strategy that helps foster active comprehension (e.g., National Reading Panel, 2000; Singer, 1978). The significance of student questioning during reading was underscored in a call for the improvement of comprehension tests: "We might wish for more extended passages, more complex interpretive questions, and certainly, opportunities for students to formulate questions about what they read instead of just selecting answers to a test-maker's questions" (Resnick & Klopfer, 1989, pp. 208–209).

Student Questioning in Relation to Text

Instruction in generating questions in relation to both expository and narrative texts has been shown to positively influence reading comprehension for elementary school, middle school, high school, and college students (Ezell, Kohler, Jarzynka, & Strain, 1992; King & Rosenshine, 1993; Nolte & Singer, 1985; Raphael & Pearson, 1985; Scardamalia & Bereiter, 1992; Singer & Donlan, 1982; Taylor & Frye, 1992). The instructional effect has been evident in students' accuracy in answering test questions, better free recall of text, and identification of main ideas (Rosenshine, Meister, & Chapman, 1996). However, a limitation of many of these studies is that the authors have not attempted to provide evidence that the processes of question asking were the source of improvement in comprehension, nor has a theoretical explanation for the effects of questioning instruction been provided. For example, it is possible that instruction on questioning increased students' activation of their background knowledge and that such activation accounted for the positive effects of the instruction. In other words, the attribution of the instructional effects to questioning has not been shown empirically, and a theoretical explanation of the benefits of questioning instruction has not been formulated in detail.

The evidence for questioning instruction in relation to narrative texts is extensive in terms of the types of questions students ask and the impact these questions have on different comprehension measures. For instance, third graders who learned to ask literal questions in relation to short stories showed significant gains in answering and generating questions on criterion and standardized reading comprehension tests as compared to students who did not learn to generate story-based questions (Cohen, 1983). Older students, who learned to ask story-specific questions by using elements of story structure (e.g., Who is the leading character?), also scored significantly higher on tests assessing knowledge of story structure as compared to students who answered teacher-posed questions (Nolte & Singer, 1985; Singer & Donlan, 1982). Furthermore, third-grade students have learned to formulate their own questions by distinguishing between the text to which the question referred and the knowledge base of the reader (Ezell et al., 1992). These students showed gains of 2.2 years (grade-equivalent score) on the California Achievement Test when compared to third graders who did not receive questioning instruction (Ezell et al., 1992).


However, these results may be confounded by the fact that students who received questioning instruction had also been exposed to a rich, narrative reading curriculum with a large number of supplemental stories and were compared to students who did not have the same curriculum. A meta-analysis of instructional studies (Rosenshine et al., 1996) revealed that the impact of questioning instruction yielded larger effect sizes for experimenter-based comprehension tests (effect size [ES] = .87) than for standardized tests (ES = .36). These effects were observed when students asked specific questions using, mainly, three types of question prompts: (a) signal words (e.g., who, where, how, why), (b) generic question stems (e.g., How are X and Y alike? How is X related to Y?) for expository texts, and (c) story grammar categories (e.g., a main character's goals) for narrative texts.

Despite the evidence that instruction in questioning in relation to narrative texts has a positive impact on the comprehension of those texts, the literature has not fully addressed that impact from a theoretical viewpoint. A similar scenario occurs in the case of questioning in relation to expository texts. For example, third-grade students who asked two literal-text types of questions, definition of terms and clarification questions (MacGregor, 1988), did not differ in vocabulary and reading comprehension from students who asked mainly one of the two question types. It is possible that, to have an impact on reading comprehension, students need to learn to ask questions that go beyond the literal level of term definitions and require integration of information between the text and the reader's prior knowledge. In fact, when sixth graders learned to differentiate between literal and inferential questions in relation to expository passages, they were better at answering and asking questions than students who engaged only in question practice or who did not ask any questions (Davey & McBride, 1986). Similarly, sixth graders who were taught to formulate questions on the main ideas of expository paragraphs (Dreher & Gambrell, 1985) performed better in answering main idea questions for new paragraphs than students who interacted with text through different activities.

In summary, studies have indicated that a wide age range of students can learn to generate questions in relation to text (Cohen, 1983; Dreher & Gambrell, 1985; Ezell et al., 1992; Nolte & Singer, 1985; Palincsar & Brown, 1984; Rosenshine et al., 1996; Singer & Donlan, 1982) and that this questioning instruction fosters reading comprehension on both experimenter-designed and standardized tests (e.g., National Reading Panel, 2000; Rosenshine et al., 1996). Occasionally, researchers have discussed possible explanations for the impact of question generation on reading comprehension. For example, with regard to expository texts, it has been assumed that higher order inferential questions induce more thorough processing of text and enhance attention to the macrostructure of text (Davey & McBride, 1986), whereas for narrative texts, story-based questions were believed to aid in the organization of story events (Singer & Donlan, 1982).


However, evidence has not been presented to address these possibilities. Despite the evidence that students who ask questions improve their understanding or their reading comprehension of a topic, researchers have not attempted to account for why instruction in questioning improves their reading comprehension of a text. Theoretical explanations for the impact of questioning instruction on students' reading comprehension have been scarce, but at least three possibilities exist, and we discuss them next.

Influence of Questioning on Reading Comprehension Processes

Among the factors that can explain the relationship between questioning and reading comprehension, three have been discussed in previous literature: (a) active text processing, (b) knowledge use, and (c) attentional focus. According to some authors (e.g., Davey & McBride, 1986; Singer & Donlan, 1982), it is possible that the generation of questions improves reading comprehension as a result of active text processing (Wittrock, 1981). When asking questions, students are involved in multiple processes requiring deeper interaction with text. During questioning, students ponder relationships among different aspects of the text. They hypothesize, focus on details and main ideas, use attention selectively on different text sections (van den Broek, Tzeng, Risden, Trabasso, & Basche, 2001), and possibly anticipate conclusions about information in the text. Questions may contribute to reading comprehension mostly because they initiate cognitive processes.

A second explanation for the association between questions and reading comprehension is the influence of prior knowledge on students' questions. In particular, prior knowledge may play a very specific role in the types of questions a student asks. College students with little prior knowledge in a domain do not ask many questions about materials that are too difficult or that exceed the extent of their knowledge base in the domain. Experts, however, tend to ask more questions about difficult materials than they do about easier, less conceptual materials in that domain (Miyake & Norman, 1979). These data support the notion that some type of relationship exists between the extent of the questioner's prior knowledge and the number of questions asked. A plausible explanation for this relationship is that questions activate prior knowledge, which, in turn, aids in reading comprehension.

A third possibility is that the impact of questioning on reading comprehension is explained by attentional factors. By asking questions related to a specific topic, the questioner directs his or her attention to text sections that contain information necessary to provide appropriate answers. This process has been termed the selective attention hypothesis, in which "questions lead to a focusing of attention on text segments containing information from the category that the questions are about" (Reynolds & Anderson, 1982, p. 624). College students retained more knowledge from text information that was relevant to questions than from text information irrelevant to questions.


This evidence supports the notion that readers selectively allocate more attention to question-relevant information and learn this information better (Reynolds & Anderson, 1982). Van den Broek et al. (2001) described a "specific attention" perspective (p. 522) in relation to narrative texts. Under this perspective, readers' comprehension and memory would improve only for the story sections that were targeted by the questions asked. A "general attention" perspective (p. 522), under which questioning results in improved comprehension of the whole text, was also proposed. Under the general attentional focus, readers are motivated to give thorough answers that require integration of information across the story; thus, they focus on understanding the text as a whole (van den Broek et al., 2001). All three explanations are feasible reasons for the association between questioning and reading comprehension. However, few of these reasons have been empirically investigated in past research.

Questioning and the Conceptual Level Hypothesis

We propose a fourth plausible explanation for the contribution of questioning to reading comprehension: that the conceptual levels of questions enable students to build knowledge structures from text. When the text is expository or informational, reading comprehension can be characterized by the conceptual knowledge constructed from text (Alao & Guthrie, 1999; Guthrie & Scafiddi, 2004). Conceptual knowledge consists of content information that can be structurally organized within a knowledge domain or a particular topic in that domain. Central to this structural organization are the interrelationships among the main concepts in the knowledge domain (e.g., Alao & Guthrie, 1999; Champagne, Klopfer, Desena, & Squires, 1981; Chi, de Leeuw, & Chiu, 1994; Guthrie & Scafiddi, 2004). Student questions, then, may support expository text comprehension to the extent that they support building a conceptual knowledge structure that includes the main concepts and essential relationships among the concepts in the text (Taboada & Guthrie, 2004).

Most theories of comprehension view successful understanding of a text as the identification of the elements in the text and the relationships among those elements to form a coherent structure, a mental representation of the text (e.g., Graesser & Clark, 1985; Kintsch, 1998; Trabasso, Secco, & van den Broek, 1984; van den Broek & Kremer, 2000). Students' questions may enhance reading comprehension by creating a preliminary structure for the different elements and relationships of a text representation. Questions may benefit comprehension of narrative texts to the extent that they support the text representation of a causal network (van den Broek et al., 2001). Similarly, questions may increase expository reading comprehension to the extent that they support the conceptual knowledge structure of the text (Taboada & Guthrie, 2004). We call this process the conceptual level hypothesis.


To investigate the hypothesis that questions increase comprehension by creating a preliminary expectation for the conceptual knowledge structure of the text, it is necessary to build a framework that characterizes the structural qualities of questions. In the past, this has been done by describing types of questions. The majority of previous studies have proposed binary levels of question types, such as literal and inferential (e.g., Cohen, 1983; Davey & McBride, 1986; Ezell et al., 1992), definitional versus clarification (MacGregor, 1988), main idea questions versus detail questions (Dreher & Gambrell, 1985; Palincsar & Brown, 1984), and so on. A few studies have described question hierarchies, which categorize questions along a continuum of types or levels. In some of these hierarchies, higher level questions tend to subsume lower level ones, with higher level questions being more inclusive in their requests for information than lower level questions. For example, Cuccio-Schirripa and Steiner (2000) described a four-level question hierarchy in which low-level questions required yes/no or factual answers, whereas high-level questions required cause–effect explanations of science phenomena. High-level questions have also been described as eliciting responses such as explanations of concepts, relationships, inferences, and application of information to new situations (King & Rosenshine, 1993); requesting causal explanations (Costa, Caldeira, Gallastegui, & Otero, 2000; Graesser, Langston, & Bagget, 1993); and requesting the integration of complex information from multiple sources (Scardamalia & Bereiter, 1992).

We suggest that, to understand the association between questioning and reading comprehension, it is necessary to construct types or levels of questions that allow examining questioning as a variable. If students' questions in relation to text are examined in terms of the characteristics of their requests for information, the conceptual complexity of these questions can be described. When questions are categorized in terms of the conceptual complexity of the information requested to answer them, they can then be related to reading comprehension.

Conceptual Questions, Prior Knowledge, and Reading Comprehension

Our view of the roles of questioning and prior knowledge in reading comprehension is based on Kintsch's (1998) theory of the construction–integration process. In that view, prior knowledge is used by the reader in conjunction with the text base to construct a situation model that fuses the two. The situation model is new knowledge gained from text. The more prior knowledge possessed by the reader, the fuller the situation model that can be constructed. In this process, prior knowledge contributes declarative information (content) to which the text base can be connected.


If the reader poses conceptual questions prior to reading, the reader brings a new cognitive process to the constructive reading task. Conceptual questions enable the reader to connect his or her prior knowledge to the text base more easily for several reasons. First, the questions anticipate a possible macrostructure of the situation model. The reader with high-level questions preconstructs a framework into which the text base can be integrated. Not only does this reader have the content for a new situation model based on his or her prior knowledge, but the reader has also established part of the structure of the situation model before reading by posing questions. Second, questioning is likely to facilitate the construction of a full situation model by setting a high standard of coherence for understanding. That is, a reader who asks highly conceptual questions expects a large number of links among propositions. This expectation leads the reader to construct a relatively large number of causal relationships among words, concepts, and propositions that enable the situation model to be rich, multilayered, and memorable. A reader with low-level questions does not anticipate an elaborate macrostructure, but may only anticipate a list of factual information, which does not facilitate the interconnections that foster reading comprehension.

Questioning in Ecological Science

In this study, we examined the association of question levels with reading comprehension, characterized as conceptual knowledge built from expository science texts. However, these relationships could be examined in any other content domain, such as geography or history. We had three reasons for choosing ecological science texts. First, conceptual knowledge structures are often represented in short amounts of text in this domain, thus minimizing the total volume of reading for young students. Second, concepts are readily identifiable in ecological science texts, facilitating the differentiation of students' newly constructed knowledge from prior knowledge. Third, science texts derived from trade books often have topographical markers, such as headings, captions, and indentation, that afford the construction of conceptual knowledge more readily than other genres.

Specifically, we hypothesized that levels of students' self-generated questions in the content domain of ecology would be associated with degrees of conceptual knowledge built from texts in that domain. Students' self-generated questions were categorized according to their requests for factual information, simple descriptions, complex explanations, or patterns of relationships among ideas or concepts (see Appendix A for a description of the questioning hierarchy). The structure of this questioning hierarchy varies as a function of the complexity of the knowledge the question elicits. A description of each level is included in the Method section.


Conceptual Knowledge in Ecological Science

Conceptual knowledge for ecological science in this study was categorized into degrees or levels of knowledge built from text. The six-level hierarchy used in this study was constructed by using students' statements of their knowledge about ecology (Guthrie & Scafiddi, 2004). This hierarchy is comparable to the rubric constructed by Chi et al. (1994), which represented conceptual knowledge of the circulatory system. Like Chi et al.'s categorization, the higher levels in this hierarchy represent levels of conceptual knowledge characterized by qualitative and quantitative shifts with respect to lower knowledge levels (see Appendix B for a description of the knowledge hierarchy). For instance, qualitative changes are evident in knowledge statements that represent a few major concepts from the text with supporting facts, as opposed to statements containing facts only. Higher complexity is also noticeable in knowledge statements in which concepts are coherently organized and related to each other, rather than explained in isolation from each other. In addition, qualitative shifts reflect that more elaborate and higher knowledge statements do not necessarily include more propositions but rather require a substantive integration of information (Guthrie & Scafiddi, 2004). Similar to Chi et al.'s knowledge hierarchy, higher knowledge in this hierarchy is represented by explanations of the essential relationships among concepts in the domain, supported by subordinate information (e.g., facts) in a structured network of knowledge.

Questions as Contributors to Knowledge Building

If student questioning is to be related to reading comprehension, measured as conceptual knowledge built from text, the relevant question is: How do different question levels contribute to knowledge? Or, more precisely, how does a student asking a higher level question (e.g., Level 4) differ from a student asking a lower level question (e.g., Level 2)? In our theoretical perspective, a student who asks a Level 4 question has understood and managed individual concepts and can focus on a higher organizational level, which entails relationships among concepts. What is presupposed by a higher level question is the ability to anticipate a knowledge structure that includes conceptual relations. For example, a Level 4 question would be "How do tadpoles develop lungs when they become toads, and how do these help them in adjusting to their habitats?" A student asking a question such as this is seeking information on (a) the concept of respiration, by asking about toads' lungs; (b) specific animal features that contrast toads and tadpoles; and (c) the concept of adaptation to habitat (explicitly stated in the question). This last piece of the question captures the request for an answer that connects both concepts.


The three components of this question reveal the complexity of the knowledge necessary to answer it. Essentially, the student asking a Level 4 question forecasts that the type of information the text contains will be comprehensive and will provide an explanation that relates these ecological concepts. In summary, our focus on student questioning has to do with the organization of information in the questioner's mind, with the knowledge that the reader/questioner brings to the text, and with how this knowledge is expressed through questions.

Hypotheses

Three hypotheses are proposed in this study:

1. Students' question levels on the questioning hierarchy will be positively associated with students' levels of reading comprehension measured by a multiple text comprehension task.

2. Students' questions will account for a significant amount of variance in reading comprehension, measured by a multiple text comprehension task, when the contribution of prior knowledge to reading comprehension is accounted for.

3. Students' questions at the lowest level of the questioning hierarchy (Level 1) will be associated with reading comprehension in the form of factual knowledge and simple associations. Students' questions at higher levels of the questioning hierarchy (Levels 2, 3, and 4) will be associated with reading comprehension consisting of conceptual knowledge supported by factual evidence.

METHOD

Participants

This study included 360 students from Grades 3 and 4. The 125 third-grade students and 235 fourth-grade students were from four schools in a small city in a mid-Atlantic state. Students participated with parental permission. Eighty-one percent of the Grade 4 students in the sample were returning students who had been at the same schools in Grade 3; 19% were newly enrolled. Demographic characteristics of the sample are included in Table 1. On the indicator of socioeconomic status (SES), approximately 20% of students in the sample qualified for free and reduced-price meals, whereas the district had 13%, showing comparability between the sample and the district population. Both third- and fourth-grade classrooms in all schools were self-contained, with the teacher providing the instruction for approximately 25 children. The students' reading achievement was indicated by the Gates–MacGinitie Reading Test mean grade-equivalent score (M = 4.08, SD = 1.78 for Grade 3, and M = 5.34, SD = 2.72 for Grade 4).

TABLE 1
Demographic Characteristics of Students in Grades 3 and 4

                              Grade 3           Grade 4          District
  Characteristic             n      %          n      %             %
  Gender
    Male                    57    45.6        118    50.2           50
    Female                  68    54.4        117    49.8           50
    Total                  125   100.0        235   100.0          100
  Ethnicity
    African American        32    25.8         48    20.7            8
    Asian                    7     5.6          9     3.9            2
    Caucasian               69    55.6        147    63.4           87
    Hispanic                 4     3.2         17     7.3            2
    Other                   12     9.7         11     4.7            1
    Total                  124   100.0        232   100.0          100

Materials

A multiple text packet containing topics on two specific biomes within the field of ecology was the core text for three of the administered tasks. Texts in this packet simulated a variety of information texts in ecology and were extracted from multiple published trade books on Reading Levels 2 to 5 in the domain of ecology. Texts were relevant to the school district science requirements. Each packet consisted of one of three alternative forms: Oceans and Forests (Form A), Ponds and Deserts (Form B), and Rivers and Grasslands (Form C). The three alternative forms were parallel in content difficulty and text structure. Students received alternative forms of the packet in both years. Each packet comprised approximately 75 pages and a total of 22 chapter-like sections. Each section was three to four pages long. Sixteen of these sections were relevant to the packet biome, and six sections were nonrelevant (i.e., distracters). Biome and animal/plant life information was emphasized equally across sections. Distribution of sections was the same across all three forms (i.e., an equal number of sections on plants, animals, and biome characteristics). Each packet had a glossary and an index.

Text difficulty was equally distributed throughout the packet. Eight sections were easy text, and eight sections were more difficult text. Text difficulty varied mainly in terms of sentence and paragraph length. Easy text had approximately two to four sentences (3–13 words in length) per paragraph and five to six paragraphs per section. Difficult text had longer sentences (14–28 words per sentence), with an average of 6 to 10 sentences per paragraph and 13 to 16 paragraphs per section. Font size was generally bigger for the easy text than for the difficult text, and the ratio of illustrations to paragraphs was similar for both text types, with approximately one or two illustrations per paragraph. In addition, difficult texts had twice as many captions (per illustration) as easy texts.


According to teachers' ratings, 40% of the texts were appropriate for a Grade 3 reading level and 60% were appropriate for a Grade 5 reading level. Packets had an average of two to three illustrations per page, with approximately 100 pictures in black and white and 11 pictures in color. The pictures in these texts generally illustrated a concept in the text (e.g., reproduction) or depicted factual and detailed text information (e.g., number and size of water lilies in a river). The majority of these illustrations had accompanying captions explaining the major features depicted. Most illustrations were real-life photographs; the others were diagrams with captions explaining their components.

Due to the specificity of the content domain of the text materials used in this study (i.e., ecological science), the results are limited to expository texts in ecological science. Therefore, generalizability of the results is limited to this content domain and this genre.

Measures

A total of four tasks were administered to students in Grades 3 and 4 over three school days: prior knowledge, questioning, and multiple text comprehension, as well as the comprehension subtest of the Gates–MacGinitie Reading Test (Form S). The Gates–MacGinitie, a standardized measure of reading comprehension, was used to provide a measure of concurrent validity for the multiple text comprehension task. All measures used in the analyses for Grade 3 were administered in September and December 2002. Measures used in the analyses for Grade 4 were administered in September and December 2003.

Prior knowledge. Prior knowledge activation consists of students' recall of what they know about the topic of a text before and during reading for the purpose of learning the content as fully as possible and linking new content to prior understanding. In this study, this task measured the breadth and depth of students' prior knowledge on an assigned topic in ecology. Students were randomly assigned to one of the three alternative forms of the task: Oceans and Forests (Form A), Ponds and Deserts (Form B), and Rivers and Grasslands (Form C). Students wrote what they knew about their assigned biomes during a 20-minute task, 5 minutes of which were devoted to directions. This task measured prior knowledge about the topic before students read about it in the multiple text comprehension task. Students were prompted to activate their prior knowledge by recalling what they knew about the topics described in the multiple text packet. Prompts for prior knowledge activation consisted of five questions that focused on similarities and differences between the two biomes described in the reading packet. The directions read:


In the space below, write what you know about [ponds and deserts]. When writing your answer, think about the following questions. How are [ponds and deserts] different? What animals and plants live in a [pond]? What animals and plants live in a [desert]? How do these animals and plants live? How do the plants and animals help each other live? Write what you know. Write in complete sentences. You have 15 minutes to write your answer.

After 7 minutes, the teacher provided the following prompt: "You are doing well. Keep writing if you can. You can turn over the page if you need more room." After 15 minutes, forms were collected.

Students' responses to the prior knowledge task consisted of written essays. The following is an example of a third grader's prior knowledge essay on the topic of Ponds and Deserts:

Deserts are very dry. Ponds are very wet. Deserts and ponds are opposites. At a desert animals don't need a lot of water. They do eat but don't drink as much. There are lots of plants that are in the desert. For example, there are cactuses, and flowers and much, much more. Ponds have lots of animals. For example, there are ducks, and fish. There are lots of plants like lily pads that frogs jump on and reeds that ducks lay their eggs. Deserts have animals like coyotes, rabbits, snakes, birds, owls and lizards (reptiles). There are many other things about deserts and ponds. Well that's all I have to say about deserts and ponds.

Parallel form across time reliability for this task was r(118) = .44, p < .001 for Grade 3, and r(151) = .31, p < .001 for Grade 4. Parallel form across time reliability was established by correlating students' scores on one of three forms of the prior knowledge task in September with scores on an alternative form of the task in December for each grade. Exact interrater agreement for 20 responses for this task in Grade 3 was 80%; adjacent agreement was 100%. Exact interrater agreement for 20 responses for this task in Grade 4 was 77%; adjacent agreement was 100%.

The procedure for establishing interrater reliability was very similar for all three tasks for which interrater reliability was indicated. Two independent raters coded students' responses into the corresponding hierarchy for the task. Exact agreement was computed to report whether raters concurred on the identical number (coding) for a given response. Adjacent agreement was computed to report whether raters disagreed by one or less on the coding of a response. If exact agreement was below 70%, discrepancies in final scores were resolved by a third independent rater.

Concurrent validity for this measure was indicated by the correlation between prior knowledge and multiple text reading comprehension using the three alternative forms for both of these tasks in December 2002 for Grade 3 and December 2003 for Grade 4. These correlations were r(116) = .45, p < .001 for Grade 3, and r(159) = .35, p < .001 for Grade 4.
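The exact and adjacent agreement percentages reported above follow directly from comparing the two raters' codes response by response. The sketch below is ours, not the authors' code; the rater lists are hypothetical stand-ins for the 20 coded essays.

```python
# A minimal sketch of the exact/adjacent agreement computation described above.
# rater_a and rater_b are hypothetical lists of hierarchy codes (e.g., 1-6) assigned
# by two independent raters to the same 20 essays; they are not the study's data.
def interrater_agreement(rater_a: list[int], rater_b: list[int]) -> tuple[float, float]:
    pairs = list(zip(rater_a, rater_b))
    exact = 100 * sum(a == b for a, b in pairs) / len(pairs)              # identical codes
    adjacent = 100 * sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)  # differ by one or less
    return exact, adjacent

# Example: if the raters agree exactly on 16 of 20 essays and never differ by more than
# one level, the function returns (80.0, 100.0), matching the Grade 3 figures above.
```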


Students' performance on prior knowledge was rated on the same knowledge hierarchy as the multiple text comprehension task. Hierarchy scores ranged from 1 to 6. A score of 1 (Level 1) corresponds to low prior knowledge and is evident in essays consisting of briefly stated simple facts. A score of 6 (Level 6) corresponds to high prior knowledge and is evident in essays in which students describe complex patterns of relationships among several organisms and their habitats. These types of essays are characterized by concepts and science principles that are thoroughly supported by appropriate examples and statements. The essay example presented previously for this task corresponds to Level 2 in this hierarchy. At this level, students can correctly classify several organisms, often in lists, with limited definitions. These classifications are present in the preceding example (see Appendix B).

Questioning. Questioning refers to students asking or writing self-initiated questions about the content of the text before or during reading to help them understand the text and topic. In this task, students generated questions about life in the two biomes described in the multiple text packet. Students were given directions to browse the text for 2 minutes: "Look at your packets for a few minutes to remind yourself of the important ideas you have been learning about [ponds and deserts]." After browsing, students received the following directions:
You have been learning about [ponds and deserts]. What questions do you have about [ponds and deserts]? These questions should be important and they should help you learn more about [ponds and deserts]. You should write as many good questions as you can. You have 20 minutes.

Packets were collected before students started generating their questions, so texts were not available to students during question generation. Students were provided enough space on the forms to write a maximum of 10 questions. Very few students wrote more than 10 questions; questions beyond the tenth were neither coded nor used for data analyses. A large majority of the students completed the task in 20 minutes. We do not believe the questioning task was affected negatively or positively by the prior knowledge activation task.

Coding students' questions: Developing a questioning hierarchy. Students' questions were coded into the four levels of the questioning hierarchy presented in Appendix A. The hierarchy is a valuable tool because it characterizes a wide range of question levels in both qualitative and quantitative ways. Qualitatively, questions are described in terms of their requests for information in a way that is transparent for multiple users and applicable to various knowledge domains (e.g., factual versus conceptual questions can be described in geography as well as in history). In addition, questions are also quantifiable because levels are ascribed values that correspond with objective characteristics of a question, allowing quantitative analyses and multiple uses of the hierarchy.


The questioning hierarchy was developed by the two authors of this study. Based on students' written questions, we constructed a hierarchy characterizing the types of questions students asked. During a pilot phase, we started by examining third-grade students' questions at the beginning of the school year. Students' questions were examined in two stages: (a) questions about animals, and (b) questions about biomes. We sorted 65 questions from a sample of 25 students holistically into six relatively lower and higher categories. We then identified the critical qualities of each category and discussed them. To test our prior classifications, we sorted another set of 40 questions into the same categories. We discussed the categories again and reduced them to four categories, based on redundant characteristics across the six original ones. After reasonable agreement on the four categories, we identified two question prototypes for each category.

At the basic level of the hierarchy, Level 1, questions are simple in form and ask for a factual proposition or a yes/no answer. At Level 2, questions request a global statement about an ecological concept or an aspect of survival of an organism. The qualitative difference between questions at Level 1 and Level 2 rests on the conceptual (rather than factual) focus of Level 2 questions. A concept is an abstraction that refers to a class of objects, events, or interactions (Guthrie & Scafiddi, 2004). For example, defense is a concept because it refers to a series of behaviors or a class of interactions that takes place for several organisms and species. At the same time, concepts are characterized by their abstractness because they are transferable from organism to organism (i.e., both owls and bears defend themselves and protect their young from predators, yet they do so using different behaviors and different features). Alternatively, "paws" cannot be characterized as an ecological concept because, although it can be related to defense, it is limited to particular species or organisms. Therefore, a question such as "How do owls defend themselves from predators in the woodlands?" elicits a request for conceptual information that is not captured by a question such as "How big are grizzly bears' paws?" The concepts used in ecological science in this study are reproduction, communication, defense, competition, predation, feeding, locomotion, respiration, adjustment to habitat, and niche (see Appendix C for ecological concept definitions).

Despite the conceptual focus of questions at Level 2, these questions are still global in their requests for information. Level 2 questions are not specific about aspects of the ecological concept, a feature that Level 3 questions have. A second characteristic of Level 2 questions is that they may also ask about a set of distinctions necessary to account for all the forms of a species, or to distinguish a species' habitat or biome. For example, in the question "What kinds of sharks are in the ocean?" rather than a request for a mere grouping or quantification of organisms, the notion of class or group is evident.


Level 3 questions are requests for elaborate explanations about a specific aspect of an ecological concept with accompanying evidence. The higher conceptual complexity of Level 3 questions is evident within the questions themselves because they probe the ecological concept by using knowledge about survival or animal characteristics. These questions show clear evidence of specific prior knowledge about an ecological concept that is contained in the question itself (e.g., "Why do elf owls make homes in cactuses?"). Level 3 questions require information about ecological concepts (i.e., knowledge about the concept of adaptation to habitat is expressed in the previous question) by specifying a particular aspect of that concept (i.e., that elf owls use cacti to make their homes). Lastly, questions at the highest level, Level 4, aim at the interrelationships of ecological concepts or the interdependencies of organisms within or across biomes (e.g., "Why do salmon go to the sea to mate and lay eggs in the river?"). Questions at Level 4 are differentiated from the other three levels because they constitute a request for principled understanding, with evidence for complex interactions among multiple concepts and possibly across biomes. At Level 4, interactions between two or more concepts are central to the requests for information.

In summary, the progression from Level 1 to Level 4 questions is based on the complexity of the question as expressed in its requests for knowledge, with Level 1 questions requesting factual knowledge and Levels 2 to 4 asking about conceptual knowledge with increasing degrees of specificity and complexity within the question.

Students wrote from 0 to 10 questions and were given a hierarchy score of 1 to 4 for each question, with a score of 0 if they wrote no questions. A score of 0 was also given if a question was categorized as noncodable. Noncodable questions included statements (rather than questions), requests for semantic definitions, questions containing misconceptions in their formulation (e.g., "Why is the forest surrounded by water?"), questions including ethical or religious notions (e.g., "Why did God make grasslands?"), anthropomorphic questions (e.g., "Why are bats sad?"), and nonreadable questions due either to very poor spelling or poor grammar. A student's score could range from 0 to 40. The sum of the question levels was calculated by adding the codes assigned to the questions. The questioning mean was computed by dividing the sum by the number of questions asked. The number of questions asked included the noncodable questions (coded as 0). The questioning mean was used in all analyses as the indicator of the average level of questions asked.

Exact interrater agreement for coding students' questions to the questioning hierarchy in Grade 3 was 90%; adjacent agreement was 100%. Exact interrater agreement for coding students' questions to the questioning hierarchy in Grade 4 was also 90%; adjacent agreement was 100% (100 questions for 25 students). Parallel form across time reliability coefficients were calculated for each grade. Parallel form across time reliability was r(116) = .43, p < .001 for Grade 3, and r(173) = .23, p < .003 for Grade 4, indicating adequate reliability. Internal consistency reliability for this task yielded a Cronbach's alpha coefficient of .83 (10 items).
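As a concrete illustration of the scoring arithmetic described above, the following sketch (ours, not the authors' code; the sample codes are invented) computes the sum of question levels and the questioning mean, counting noncodable questions as 0 in both the sum and the denominator.

```python
# A minimal sketch of the questioning score described above: each written question is
# coded 0 (noncodable) or 1-4 on the questioning hierarchy.
def questioning_scores(question_codes: list[int]) -> tuple[int, float]:
    total = sum(question_codes)   # sum of question levels (0-40 possible across 10 questions)
    # noncodable questions (coded 0) remain in the denominator
    mean = total / len(question_codes) if question_codes else 0.0
    return total, mean

# Example: a student who wrote four questions coded [2, 1, 0, 3] receives a sum of 6
# and a questioning mean of 1.5, the value used in all analyses.
```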

Multiple text comprehension. Multiple text comprehension refers to students' competence in identifying text-relevant information, reading to obtain question-relevant information, taking notes, and writing an open-ended statement expressing conceptual knowledge gained from performing this task. Like the other two tasks, the content domain for this task was ecological science. This task was administered in three sessions over 2 days. On the first day, students spent approximately 20 minutes searching for information. On the second day, students spent a total of approximately 40 minutes searching for information and an additional 30 minutes writing what they had learned from the text.

During the first two sessions, students spent time searching for information, reading, and taking notes about the two biomes described in the multiple text packets. The searching activity consisted of identifying text-relevant information by choosing sections that helped them explain how animals and plants live in two biomes (e.g., ponds and deserts). As part of the searching activity, students were explicitly taught how to use the table of contents, how to select relevant sections, and how to take notes in the spaces provided on the given forms. In the third session, students were asked to write about what they had learned during their interaction with text in the two previous sessions. Prompts consisted of the same questions posed for the prior knowledge task (e.g., "How are [oceans and forests] different? What animals and plants live in a [forest]?"). Students had 30 minutes to express their knowledge and were prompted to write in full sentences. They were encouraged to keep writing after 7 minutes and again after 20 minutes into the task.

Students' essays were coded into the categories of the hierarchy for conceptual knowledge (Appendix B); the same knowledge hierarchy was used to score students' responses to the prior knowledge task. Interrater agreement for 20 responses for Grade 3 was 100% for adjacent coding and 70% for exact coding; interrater agreement for 20 responses for Grade 4 was 95% for adjacent coding and 60% for exact coding. For Grade 3, discrepancies in final scores were resolved by a third independent rater. Parallel form across time reliability was r(108) = .38, p < .001 for Grade 3, and r(151) = .46, p < .001 for Grade 4, indicating adequate reliability. Concurrent validity was indicated by correlations with the Gates–MacGinitie Reading Test of r(114) = .30, p < .001 for Grade 3, and r(160) = .35, p < .001 for Grade 4. An example of a third grader's Level 6 essay follows:

Grassland and rivers are different because grasslands are dry and have few water and rivers are a channel with water in it. Water lilys, trouts, salmon, sea wasp, lotuses, water weed, otters, piranhas, and platypus all live in a river. Elephants, cheetahs, deers, birds, rinos, grass, flowers, trees, butterflies, hyenas, and puff adder all live in grassland. Animals drink, eat, and sleep to live, plants also drink, eat, sleep, and also need sunlight. Plants help animals by making oxygen and when animals die they can fetalize the soil and that is good for plants.

Gates–MacGinitie Reading Test. The comprehension tests of Levels 3 and 4 (Form S) of this standardized measure of reading comprehension were used in this study. These tests consist of approximately 12 paragraphs on varied subjects, with a range of two to six questions on each paragraph for students to answer. The extended scale score was used for all statistical analyses.
RESULTS

The means and standard deviations for all variables are presented in Table 2, and the correlations are presented in Table 3.

TABLE 2
Means and Standard Deviations for All Variables for Grades 3 and 4

  Cognitive Variable                 Grade 3      Grade 4
  Prior knowledge
    M                                   1.95         2.35
    SD                                  0.69         0.86
    n                                    128          221
  Questioning
    M                                   1.30         1.28
    SD                                  0.52         0.61
    n                                    125          235
  Multiple text comprehension
    M                                   2.44         3.29
    SD                                  0.98         1.22
    n                                    119          211
  Gates–MacGinitie
    M                                 469.90       494.64
    SD                                 37.44        42.17
    n                                    164          218

TABLE 3
Correlations Among Prior Knowledge, Questioning, and Reading Comprehension for Grades 3 and 4

  Cognitive Variable                 1        2        3        4
  1. Prior knowledge                 —      .31***   .45***   .41***
  2. Questioning                   .21**      —      .38***   .34***
  3. Multiple text comprehension   .40***   .19**      —      .30***
  4. Gates–MacGinitie              .48***   .31***   .34***     —

  Note. Correlations for Grade 3 are above the diagonal; those for Grade 4 are below the diagonal.
  **p < .01. ***p < .001.

The first hypothesis was that students' question levels on the questioning hierarchy would be positively associated with students' level of text comprehension measured by a multiple text comprehension task. For both grades, this hypothesis was addressed by examining the correlations of questioning and multiple text comprehension. For Grade 3, questioning correlated with multiple text reading comprehension, r(116) = .38, p < .001. Prior knowledge correlated with questioning, r(125) = .31, p < .001, and prior knowledge correlated with multiple text comprehension, r(116) = .45, p < .001. The Gates–MacGinitie test correlated significantly with the multiple text reading comprehension task, r(114) = .30, p < .001. For Grade 4, questioning correlated with multiple text reading comprehension, r(211) = .19, p < .01. Prior knowledge correlated with questioning, r(221) = .21, p < .01, and prior knowledge correlated with multiple text comprehension, r(204) = .40, p < .001. The Gates–MacGinitie test correlated significantly with the multiple text reading comprehension task, r(197) = .34, p < .001.

The second hypothesis of this study was that students' questions would account for a significant amount of variance in reading comprehension, measured by a multiple text comprehension task, when the contribution of prior knowledge to reading comprehension was accounted for. This was tested in multiple regression analyses for Grades 3 and 4. In each analysis, multiple text reading comprehension was the dependent variable, with prior knowledge entered first and questioning entered second as independent variables. This order of entry was intended to examine the contribution of student questioning when prior knowledge was statistically controlled. Missing data were handled with listwise deletion.

Results for Grade 3 (Table 4) indicated that questioning accounted for a significant amount of variance in multiple text reading comprehension and the Gates–MacGinitie Reading Test over and above that accounted for by prior knowledge. After prior knowledge was accounted for, questioning explained 7% of the variance in multiple text reading comprehension, which was significant, F(1, 113) = 10.43, p < .01. The multiple R was .52, and the final beta for questioning was .27 (p < .01). When the Gates–MacGinitie was entered as the criterion, questioning accounted for 6% of the variance on this standardized test after prior knowledge was accounted for, F(1, 121) = 7.89, p < .01. The multiple R was .47, and the final beta for questioning was .23 (p < .01).

Results for Grade 4 (Table 5) indicated that, after prior knowledge was accounted for, questioning explained 2% of the variance in multiple text comprehension, which was significant, F(1, 201) = 3.99, p < .05. The multiple R was .42, and the final beta for questioning was .13 (p < .05). In addition, questioning also accounted for 4% of the variance over and above prior knowledge when the Gates–MacGinitie test was the criterion variable, F(1, 202) = 11.69, p < .001. The multiple R was .52, and the final beta for questioning was .21 (p < .001).

TABLE 4
Regression Analyses of Prior Knowledge and Questioning on Reading Comprehension for Grade 3 Students

  Dependent and Independent Variables     R      R²     ΔR²        F        Final β
  Multiple text comprehension
    Prior knowledge                      .45    .20     .20     28.34***     .36***
    Questioning                          .52    .27     .07     10.43**      .27**
  Gates–MacGinitie
    Prior knowledge                      .41    .16     .16     23.98***     .33***
    Questioning                          .47    .22     .06      7.89**      .23**

  **p < .01. ***p < .001.

TABLE 5
Regression Analyses of Prior Knowledge and Questioning on Reading Comprehension for Grade 4 Students

  Dependent and Independent Variables     R      R²     ΔR²        F        Final β
  Multiple text comprehension
    Prior knowledge                      .40    .16     .16     38.93***     .38***
    Questioning                          .42    .18     .02      3.99*       .13*
  Gates–MacGinitie
    Prior knowledge                      .48    .23     .23     59.43***     .43***
    Questioning                          .52    .27     .04     11.69***     .21***

  *p < .05. ***p < .001.

We tested for the interaction effects of prior knowledge and questioning on multiple text comprehension for each grade. Results from regression analyses showed that the interaction between these two variables was not significant for Grade 3, F(1, 112) = 1.879, p = .173, or for Grade 4, F(1, 200) = 0.959, p = .329. Figures 1 and 2 show multiple text comprehension as a function of questioning levels and prior knowledge levels for each grade.

FIGURE 1 Mean proportion of multiple text comprehension scores as a function of prior knowledge levels and questioning levels for Grade 3 students.

FIGURE 2 Mean proportion of multiple text comprehension scores as a function of prior knowledge levels and questioning levels for Grade 4 students.

For both grades, main effects were observed. As shown in the regression analyses, questioning improved comprehension significantly for students with high prior knowledge (Grade 3, ES = 1.04; Grade 4, ES = .57) and low prior knowledge (Grade 3, ES = .45; Grade 4, ES = .20). Similarly, prior knowledge had benefits on comprehension for students with high questioning levels (Grade 3, ES = .97; Grade 4, ES = .80), as well as for students with low questioning levels (Grade 3, ES = .51; Grade 4, ES = .35).
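For readers who want to run this kind of two-step hierarchical analysis on their own data, the sketch below is ours, not the authors' code: prior knowledge is entered first, questioning second, and the R² increment for questioning is tested with an F statistic, as reported in Tables 4 and 5. The column names comprehension, prior_knowledge, and questioning are hypothetical stand-ins for the scored variables, and the use of pandas and statsmodels is an assumption.

```python
# A minimal sketch of a two-step hierarchical regression with an F test of the
# R-squared increment for the predictor added at the second step.
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_regression(df: pd.DataFrame) -> None:
    data = df.dropna()                           # listwise deletion, as in the study
    z = (data - data.mean()) / data.std(ddof=1)  # z-score so coefficients read as betas

    step1 = smf.ols("comprehension ~ prior_knowledge", data=z).fit()
    step2 = smf.ols("comprehension ~ prior_knowledge + questioning", data=z).fit()

    f_change, p_change, _ = step2.compare_f_test(step1)  # F test of the added predictor
    print(f"Step 1: R = {step1.rsquared ** 0.5:.2f}, R^2 = {step1.rsquared:.2f}")
    print(f"Step 2: R = {step2.rsquared ** 0.5:.2f}, R^2 = {step2.rsquared:.2f}, "
          f"delta R^2 = {step2.rsquared - step1.rsquared:.2f}")
    print(f"F(1, {int(step2.df_resid)}) = {f_change:.2f}, p = {p_change:.4f}")
    print("Final betas:", step2.params[["prior_knowledge", "questioning"]].round(2).to_dict())
```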

would have been dependent on each other for their impact on reading comprehension, with one variable (e.g., questioning) making a difference at one level of the other variable (e.g., high prior knowledge), but not making a difference at the other level of that variable (e.g., low prior knowledge). The absence of an interaction, or the independence of these variables from each other, is evidenced by the fact that either one of the two variables has an impact on reading comprehension, irrespective of the levels of the other variable. The third hypothesis was that studentsquestions at the lowest levels of the questioning hierarchy (Level 1) would be associated with reading comprehension levels in the form of factual knowledge and simple associations, whereas students questions at higher levels in the questioning hierarchy (Levels 2, 3, and 4) would be associated with reading comprehension levels consisting of factual and conceptual knowledge. A chi-square test for independence was used to address this hypothesis. Frequencies of high and low scores were computed for the variables of questioning and multiple text reading comprehension for each grade. Low-level questions reflected factual knowledge (defined as Level 1 in the questioning hierarchy). High-level questions reflected conceptual and factual knowledge (defined as Levels 2, 3, and 4 in the questioning hierarchy). Scores for the multiple text com-

FIGURE 1 Mean proportion of multiple text comprehension scores as a function of prior knowledge levels and questioning levels for Grade 3 students.

FIGURE 2 Mean proportion of multiple text comprehension scores as a function of prior knowledge levels and questioning levels for Grade 4 students.
The third hypothesis was that students' questions at the lowest level of the questioning hierarchy (Level 1) would be associated with reading comprehension levels in the form of factual knowledge and simple associations, whereas students' questions at higher levels in the questioning hierarchy (Levels 2, 3, and 4) would be associated with reading comprehension levels consisting of factual and conceptual knowledge. A chi-square test for independence was used to address this hypothesis. Frequencies of high and low scores were computed for the variables of questioning and multiple text reading comprehension for each grade. Low-level questions reflected factual knowledge (defined as Level 1 in the questioning hierarchy). High-level questions reflected conceptual and factual knowledge (defined as Levels 2, 3, and 4 in the questioning hierarchy). Scores for the multiple text comprehension task were also categorized into high and low levels. Scores for multiple text comprehension were low if they equaled 2 or below on the knowledge hierarchy. Scores were high if they equaled 3 or above on the knowledge hierarchy. This partitioning of high and low for both variables was done to make the subgroups as equivalent as possible in size, to enable a chi-square to be computed, and to meet the requirement that expected frequencies in each cell should be at least 5. The chi-square tested whether question levels were independent of the levels of conceptual knowledge. For both grades, Tables 6 and 7 show the observed frequencies in the form of 2 × 2 matrices, where the rows correspond to the two categories of the multiple text comprehension variable and the columns correspond to the two categories of the questioning variable.

For Grade 3, the Pearson chi-square was statistically significant, χ²(1, N = 116) = 12.23, p < .001, indicating that the hypothesis of independence between the two variables is rejected. It should be noted (see Table 6) that the majority of the students (67%) were located in the low questioning/low multiple text comprehension group (n = 49) and in the high questioning/high multiple text comprehension group (n = 29). The concentration of students in these two cells produced the significant association between the variables.
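For reference, the statistic used here is the standard Pearson chi-square for a 2 × 2 contingency table, and the expected frequencies mentioned above (the quantities required to be at least 5) are computed from the row and column totals:

\[
\chi^2 \;=\; \sum_{i=1}^{2}\sum_{j=1}^{2}\frac{(O_{ij} - E_{ij})^2}{E_{ij}},
\qquad
E_{ij} \;=\; \frac{R_i\,C_j}{N},
\]

where \(O_{ij}\) is the observed frequency in cell \((i, j)\), \(R_i\) and \(C_j\) are the corresponding row and column totals, \(N\) is the number of students, and the test has \((2 - 1)(2 - 1) = 1\) degree of freedom.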

TABLE 6
Questioning Levels According to Levels of Multiple Text Comprehension for Grade 3 Students

                                      Questioning
Multiple Text Comprehension     Low      High     Total
Low                             49       19       68
High                            19       29       48
Total                           68       48       116

Note. The values represent frequencies of questioning categories (high/low).

TABLE 7
Questioning Levels According to Levels of Multiple Text Comprehension for Grade 4 Students

                                      Questioning
Multiple Text Comprehension     Low      High     Total
Low                             42       15       57
High                            19       24       43
Total                           61       39       100

Note. The values represent frequencies of questioning categories (high/low).

For Grade 4, the Pearson chi-square statistic was also statistically significant, χ²(1, N = 100) = 8.96, p < .01. Again, the higher proportion of cases fell in the cells of low questioning/low multiple text comprehension (n = 42) and high questioning/high multiple text comprehension (n = 24), which indicates a significant association between these two variables for this sample (see Table 7). These results support a specific alignment between questioning levels and levels of conceptual knowledge built from text, measured by the multiple text comprehension task, for Grade 3 and Grade 4 students.

The two grade groups were also compared for descriptive purposes. A multivariate analysis of variance was used to determine whether there were significant differences between the two age groups on the outcome variables of prior knowledge, multiple text comprehension, and questioning. Results from this analysis showed significant differences between Grades 3 and 4 on the three variables collectively. A follow-up analysis of variance showed significant differences between the two groups on two of the three variables. Statistically significant differences between the two grades were found for prior knowledge, F(1, 303) = 19.01, p < .001, with Grade 4 (M = 2.35) higher than Grade 3 (M = 1.95). Multiple text comprehension was also statistically significantly different, F(1, 303) = 37.50, p < .001, with Grade 4 (M = 3.29) higher than Grade 3 (M = 2.44). Questioning was not statistically significantly different across grades (M = 1.28 for Grade 4 and M = 1.30 for Grade 3).
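For readers who wish to check the arithmetic, the chi-square values reported above can be recovered directly from the observed frequencies in Tables 6 and 7. The sketch below (Python with SciPy) is ours and not part of the original analysis; Yates' continuity correction is turned off so that the uncorrected Pearson statistic is computed.

```python
from scipy.stats import chi2_contingency

# Observed frequencies from Tables 6 and 7:
# rows = multiple text comprehension (low, high); columns = questioning (low, high)
tables = {
    "Grade 3": [[49, 19], [19, 29]],
    "Grade 4": [[42, 15], [19, 24]],
}

for grade, observed in tables.items():
    # correction=False requests the uncorrected Pearson chi-square,
    # which is what the values reported in the text correspond to
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    n = sum(map(sum, observed))
    print(f"{grade}: chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.4f}")
    print(f"  smallest expected cell frequency: {expected.min():.1f}")
```

Running this reproduces the reported values (approximately 12.23 for Grade 3 and 8.96 for Grade 4) and confirms that the smallest expected cell frequency in each table exceeds the minimum of 5 noted earlier.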

DISCUSSION

The findings in this investigation showed that students' questions were positively associated with their reading comprehension. This association was shown in the correlations between student questioning and reading comprehension for students in Grades 3 and 4. These findings are consistent with suggestions from previous investigators that there is a positive relationship between students' generated questions and their reading comprehension (e.g., Davey & McBride, 1986; Ezell et al., 1992; King & Rosenshine, 1993; Rosenshine et al., 1996; Scardamalia & Bereiter, 1992). However, this study extends the previous literature through its distinctive measure of student self-generated questions, which allowed us to relate these questions to reading comprehension and prior knowledge. In this study, student questions were described as requests for conceptual knowledge from text. Categorizing questions on the basis of their requests for content, rather than by question form (e.g., question words such as what, when, who; question stems), is consistent with previous suggestions in the literature: "Defining categories on the basis of content of the information requested rather than form is consistent with theories of question answering in the cognitive sciences" (Graesser et al., 1994, p. 209). Thus, our results contribute to the extant literature on student questioning by specifying a measure of question quality and presenting empirical evidence for the association of student questioning and reading comprehension.

To investigate the relationship between student questioning and reading comprehension, we examined the relationship of questioning with reading comprehension when taking into account the influence of prior knowledge. Regression analyses showed that third and fourth graders' self-generated questions contributed a significant amount of variance to reading comprehension in the domain of ecology when the contribution of prior knowledge was statistically controlled. Furthermore, questioning still explained a significant amount of variance over and above prior knowledge in reading comprehension when the Gates-MacGinitie was the dependent variable in the regression analyses for both grades. These findings indicate that the contribution of questioning to reading comprehension is not constrained to the topic or content domain of the text. Rather, they show that questioning, understood as a strategy that serves to seek conceptual information, is a process that benefits skills involved in standardized reading tests such as the Gates-MacGinitie.

Both of these findings contribute to the literature in two main ways. First, previous research has indicated that students who possess higher prior knowledge in a given domain tend to ask a higher proportion of questions or

higher level questions than students who have lower prior knowledge in the domain (Miyake & Norman, 1979; Van der Meij, 1990). Although we observed similar findings, our results provide evidence that students' spontaneous question generation, in reference to authentic school texts, accounts for variance in reading comprehension above and beyond the variance accounted for by prior knowledge in the domain of ecological science. Furthermore, as discussed, these findings do not seem to be constrained to the specific domain of ecological science. Indeed, questioning accounted for variance in reading comprehension when comprehension was measured with an experimenter-designed test and with a standardized test of reading comprehension. This last finding verifies the unique contribution of questioning to reading comprehension through replication of results across different measures.

Second, we found no evidence of an interaction between prior knowledge and questioning for either grade. The absence of this interaction indicates that both of these variables had benefits for students' reading comprehension independently of one another. As shown in Figures 1 and 2, questioning contributed to reading comprehension for students with low prior knowledge, as well as for students with high prior knowledge, in both grades. These results appear to contradict the findings of Scardamalia and Bereiter (1992), who indicated that fifth and sixth graders tended to ask more definitional types of questions when they did not know enough about a topic but asked more high-level questions when they had some prior knowledge of the topic. Similarly, middle-school students tended to ask more questions about word definitions than high-level/causal questions when they had difficulty understanding the terminology in the text (Costa et al., 2000). However, in these studies, this apparent interaction between types of questions and prior knowledge was not tested empirically. In this sense, our analyses permit a discussion of the contributions of each of these variables to reading comprehension. Specifically, not only did significant regression weights indicate that prior knowledge and questioning contributed to reading comprehension independently of each other, but the absence of an interaction lent further support to their separate benefits on reading comprehension when levels of each variable were examined. Had the interaction between these two variables been significant for either grade, questioning would have been dependent on prior knowledge for its contribution to reading comprehension. In other words, questioning would show benefits on reading comprehension for students with high prior knowledge, but not for students with low prior knowledge. Our results do not support this notion. Thus, in our view, questioning contributes to comprehension in parallel (concurrently) with prior knowledge. Questioning facilitates the use of prior knowledge but does not itself require prior knowledge beyond the extremely minimal level that any student would bring to the text. Likewise, prior knowledge does not require questioning beyond a minimal level. Therefore, these two processes are parallel, rather than interdependent, in their action during the meaning construction process that takes place during reading comprehension.
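To make the analytic logic concrete, the sketch below illustrates the kind of two-step (hierarchical) regression and interaction test described above, using Python with statsmodels. The data frame and the column names (comprehension, prior_knowledge, questioning) are hypothetical placeholders, not the authors' variables or code; this is a schematic of the procedure, not the original analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_regression(df: pd.DataFrame):
    """Two-step (hierarchical) regression plus an interaction check.

    Expects one row per student, with hypothetical columns
    'comprehension', 'prior_knowledge', and 'questioning'.
    """
    # Step 1: prior knowledge alone
    step1 = smf.ols("comprehension ~ prior_knowledge", data=df).fit()
    # Step 2: add questioning; the change in R-squared is the variance
    # explained by questioning over and above prior knowledge
    step2 = smf.ols("comprehension ~ prior_knowledge + questioning", data=df).fit()
    delta_r2 = step2.rsquared - step1.rsquared
    # F test for the R-squared change (nested-model comparison)
    f_change, p_change, _ = step2.compare_f_test(step1)
    # Interaction check: the product term tests whether the contribution of
    # questioning depends on the level of prior knowledge
    step3 = smf.ols("comprehension ~ prior_knowledge * questioning", data=df).fit()
    interaction_p = step3.pvalues["prior_knowledge:questioning"]
    return delta_r2, f_change, p_change, interaction_p
```

In such a setup, a significant R-squared change at the second step corresponds to the unique contribution of questioning of the kind reported in Tables 4 and 5, and a nonsignificant product term corresponds to the absence of an interaction described above.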

Our third finding was that students' question levels were associated with levels of reading comprehension measured as conceptual knowledge built from text. Specifically, questions that requested simple facts were associated with reading comprehension levels consisting of factual knowledge and simple associations, whereas questions requesting information about concepts were associated with higher levels of reading comprehension consisting of conceptual knowledge supported by factual evidence and examples. The majority of the students asking Level 1 questions, as defined by the questioning hierarchy used in this study, tended to have low levels of reading comprehension, whereas the majority of the students asking conceptual questions, as expressed in Levels 2, 3, and 4, had levels of conceptual knowledge commensurate with those levels.

For example, students who asked questions such as "Are sharks scary?" (Level 1) tended to gain knowledge from text consisting of statements such as "I know that most sharks are terrifying. Some of them are less terrifying like the carpet shark." Statements such as these denote the absence of ecological concepts and biome definitions and include only a few characteristics of a biome or an organism.

Students who asked questions requesting a global statement about an ecological concept, such as "What do grasslands animals eat?" (Level 2), tended to gain simple concepts from text. Such knowledge is expressed in statements like this one:

Rivers and grasslands are different. I will tell you the difference is. I will tell you the animals and plants of a river and grassland. Hear are the animals and plants of a river salmon, hippo, crocodile, sea plants, otters, and polar bears. They all live by water, plants or meat. Some live by water, some don't. Hear are the animals and plants in a grassland lion, coyote, eagle, elephant, prairie dog, zebra, and orangutan. They all live by water, most of them eat meat and only some of them eat plants. Some of them live in trees one of them live in a hole some of them live on the ground.

Knowledge built from text at this level is characterized by the identification of one or more biomes (e.g., rivers and grasslands), in which the information is minimal, factual, and may appear as a list, as in the previous statement. Yet, these statements are not characterized by full definitions of biomes or descriptions of organisms' adaptations to biomes. In addition, weakly stated concepts may be included, such as the concept of feeding in this statement.

Students asking Level 3 questions requested an elaborated explanation about a specific aspect of an ecological concept. The specificity of the concept was generally expressed by using prior knowledge within the question. Students who, on average, asked questions at Level 3 had knowledge representations at Levels 3 and 4 in the knowledge hierarchy.

For example, a third-grade student's question at Level 3 was "What kinds of birds eat river animals?" The following is a knowledge statement commensurate with this question level:

One thing I know about rivers and grasslands are the animals that live there. Some animals that live in grasslands are grasshoppers, crickets and vultures. Some types of grasslands are savannahs, prairies, and plains. Prairies and plains have large openings and a lot of grass but very little trees. The big difference between a river and a grassland is the main natural resource. The main natural resource for a river is water. The main natural resource for a grassland is grass. Some animals in a river are otters, hippos, and fish. It is not a regular type of hippo, it is called a River Hippo. Otters like to eat snakes. One way all plants and animals help each is for food.

In this example, the student expressed conceptual knowledge (Level 3) by presenting conceptual, defining characteristics typical of each biome (e.g., "The big difference between a river and a grassland is the main natural resource"). The student also included types of grasslands with characteristics for each type, as well as a few correct classifications of organisms to each biome (e.g., grasshoppers, crickets and vultures). Survival concepts, such as feeding and interdependence between animals, are also briefly stated.

Lastly, students who asked questions requesting a pattern of relationships between concepts (Level 4) tended to show patterns of organized conceptual knowledge (Level 5). For instance, a question such as "How do animals in the deserts get water and protect themselves from heat if there is not water and it doesn't rain a lot?" (Level 4) requests information about the interaction of the organism with the biome. Students who were able to ask questions at this level of complexity tended to write essays that expressed similar complexity (essays at Levels 5 and 6), such as the following:

Ponds and deserts are different, deserts have little or no freshwater and ponds have a lot of water. The animals that live in the desert are jack rabbits, snakes, insects, donkeys, spiders, scorpions, elf owls, road runners, and vultures. The plants in the desert are cactuses, flowers, trees, and bushes. The animals that live in a pond are fish, frogs, shrimp, great blue herons, green herons, tadpoles, birds, insects, spiders and raccoons. The plants that live in a pond are duckweed, lily-pads, algae, bushes, trees, and flowers. Animals in the desert rely on plants and animals for food and water. Animals in ponds rely on other animals. Some on water. Some on both. Animals in ponds rely on plants for food, oxygen, and shelter. Scorpions kill their prey using their stinger in their tail. Jack rabbits usually feast at night. They eat desert grasses, prickly pears, and other plants. Insects in ponds eat algae and plants. Bigger insects eat small fish.

The student who wrote this (Level 5) essay showed command of several ecological concepts such as predation, feeding, and protection, with supporting information for each of them. The student also showed several correct classifications of animals and plants to their corresponding biomes (e.g., scorpions and jack rabbits in deserts). In addition, comparisons across the two biomes and interdependencies between organisms were also included (e.g., "Animals in the desert rely on plants and animals for food and water"). Knowledge statements at these levels show higher organization by emphasizing knowledge principles that subsume relationships between ecological concepts and of the organisms with their biomes.

We propose that the association between question levels and reading comprehension levels, as described here, serves to inform theoretical views of the contribution of questioning to comprehension. First, previous investigators have speculated that the benefits of generating and answering higher level, inferential questions could be due to the active processing of text (Davey & McBride, 1986). In other words, question asking and answering mobilizes attention for learning broadly from text (Wittrock, 1981). However, if this view were fully accurate, then questioning of any form would increase comprehension. Our data suggest that it is not the presence or absence of questions in general, but the presence or absence of higher level questioning, that facilitates higher comprehension. By comparing high- and low-level questions, we substantially weaken the explanatory power of the active processing hypothesis. If high-level conceptual questions have greater benefits for reading comprehension than low-level questions, the benefit is due to questioning levels. Our chi-square analyses showed that lower level questions were associated with lower than average comprehension, and high-level questioning was associated with high levels of multiple text comprehension. Consequently, we doubt that questioning improves comprehension by increasing generalized cognitive activation.

Another explanation found in the literature for the relationship between questioning and reading comprehension is that attentional processes are elicited by asking questions. Van den Broek et al. (2001) found that questions induce a selective enhancement of memory because the reader focuses attention only on the text information needed to answer the questions. Our findings differ from van den Broek et al.'s in two main ways. First, we investigated students' self-generated questions, whereas they studied experimenter-posed questions. Second, we examined cognitive characteristics of questions in general, and comprehension of diverse texts whose content was broader than the questions. In other words, we did not attempt to examine whether the content of the questions predicted or related to the content of knowledge built from text. Our interest focused on the relationship between levels of questions and levels of conceptual knowledge built from text. Thus, we propose that this relationship is explained by a conceptual level hypothesis.

In sum, based on our evidence and our measurement of questioning, we did not attempt to distinguish between the attention hypothesis proposed by van den Broek et al. and the conceptual level hypothesis. Therefore, we cannot rule out the possibility that students' questions had an attentional effect of enhancing recall and/or comprehension of the sections of text that pertained solely to their questions.

In conclusion, we suggest that students who tend to ask lower level questions struggle to identify the overall hierarchical structure and the major interrelationships among the concepts within texts in a knowledge domain. Conversely, students who generally ask higher level, conceptual questions tend to represent knowledge built from text in a conceptually organized, hierarchical structure. Readers asking high-level, conceptual questions can anticipate and bring to the text an elaborate text macrostructure. Consequently, these readers would tend to build fuller text representations and richer situation models (Kintsch, 1998), characterized by a larger number of connections and relationships among the major concepts in the text. Our findings, then, are consistent with an attentional hypothesis, as well as with our proposed conceptual level hypothesis. However, the findings raise doubts about whether a full explanation of the effects of questioning on reading comprehension can be based on either prior knowledge or general activation alone. Although these processes facilitate text comprehension, they are not central to the explanation of the effects of questioning on comprehension.

LIMITATIONS

This study has three main limitations. First, generalizability of the results is constrained to questions about information texts within the domain of ecology. Therefore, it is not known how questioning about narrative texts, for example, would relate to reading comprehension of stories. Furthermore, the texts used to elicit questioning in this study were authentic information texts for the elementary grades, characterized by rich informational content and vivid pictures. Therefore, conclusions regarding student questions are applicable to these particular types of texts. Second, we studied third and fourth graders' questions only. Perhaps the questioning hierarchy can be applied to questions generated by students in later elementary grades, but its scope may be too limited to describe questions formulated by middle and high school students. Third, this study provides only a cognitive view of the relationship between questioning and reading comprehension. The complexity of question generation calls for exploring the motivational aspects that may be involved. What makes some students more avid questioners than others? What questions do highly motivated students ask? Examining the role that motivational variables play in student questioning may be the next step toward understanding the interplay of questioning and reading comprehension.

ACKNOWLEDGMENTS

This work was supported by Interagency Educational Research Initiative (IERI) Award 0089225 as administered by the National Science Foundation (NSF). The findings and opinions expressed here do not necessarily reflect the position or policies of the IERI, the NSF, or the University of Maryland. We thank Ellen Kaplan and Eileen Kramer for help in preparation of this article.

REFERENCES
Alao, S., & Guthrie, J. T. (1999). Predicting conceptual understanding with cognitive and motivational variables. The Journal of Educational Research, 92, 243–253.
Champagne, A. B., Klopfer, L. E., Desena, A. T., & Squires, D. A. (1981). Structural representations of students' knowledge before and after science instruction. Journal of Research in Science Teaching, 18, 97–111.
Chi, M. T. H., de Leeuw, N., & Chiu, M. H. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439–477.
Cohen, R. (1983). Self-generated questions as an aid to reading comprehension. The Reading Teacher, 36, 770–775.
Collins, A., Brown, J. S., & Newman, S. E. (1990). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Costa, J., Caldeira, H., Gallastegui, J. R., & Otero, J. (2000). An analysis of question asking on scientific texts explaining natural phenomena. Journal of Research in Science Teaching, 37, 602–614.
Cuccio-Schirripa, S., & Steiner, E. H. (2000). Enhancement and analysis of science question level for middle school students. Journal of Research in Science Teaching, 37, 210–224.
Davey, B., & McBride, S. (1986). Effects of question-generation training on reading comprehension. Journal of Educational Psychology, 78, 256–262.
Dreher, M. J., & Gambrell, L. B. (1985). Teaching children to use a self-questioning strategy for studying expository prose. Reading Improvement, 22, 2–7.
Ezell, H. K., Kohler, F. W., Jarzynka, M., & Strain, P. S. (1992). Use of peer-assisted procedures to teach QAR reading comprehension strategies to third-grade children. Education and Treatment of Children, 15, 205–227.
Graesser, A. C., & Clark, L. F. (1985). Structures and procedures of implicit knowledge. Norwood, NJ: Ablex.
Graesser, A. C., Langston, M. C., & Bagget, W. B. (1993). Exploring information about concepts by asking questions. In G. V. Nakamura & D. Medin (Eds.), Categorization by humans and machines (pp. 411–436). Orlando, FL: Academic.
Graesser, A. C., McMahen, C. L., & Johnson, B. K. (1994). Question asking and answering. In M. A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 517–538). San Diego, CA: Academic.
Guthrie, J. T., & Scafiddi, N. T. (2004). Reading comprehension for information text: Theoretical meanings, developmental patterns, and benchmarks for instruction. In J. T. Guthrie, A. Wigfield, & K. C. Perencevich (Eds.), Motivating reading comprehension: Concept-oriented reading instruction (pp. 225–248). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
King, A. (1994). Autonomy and question asking: The role of personal control in guided student-generated questioning. Learning and Individual Differences, 6, 163–185.

King, A., & Rosenshine, B. (1993). Effects of guided cooperative questioning on children's knowledge construction. Journal of Experimental Education, 61, 127–148.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, England: Cambridge University Press.
MacGregor, K. (1988). Use of self-questioning with a computer-mediated text system and measures of reading performance. Journal of Reading Behavior, 20, 131–148.
Miyake, N., & Norman, D. A. (1979). To ask a question one must know enough to know what is not known. Journal of Verbal Learning and Verbal Behavior, 18, 357–364.
National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Jessup, MD: National Institute for Literacy.
Nolte, R. Y., & Singer, H. (1985). Active comprehension: Teaching a process of reading comprehension and its effects on reading achievement. The Reading Teacher, 39, 24–31.
Olson, G. M., Duffy, S. A., & Mack, R. L. (1985). Question asking as a component of text comprehension. In A. C. Graesser & J. B. Black (Eds.), The psychology of questions (pp. 219–226). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension monitoring strategies. Cognition and Instruction, 1, 117–175.
Raphael, T. E., & Pearson, P. D. (1985). Increasing students' awareness of sources of information for answering questions. American Educational Research Journal, 22, 217–236.
Resnick, L. B., & Klopfer, L. E. (1989). Toward the thinking curriculum: Concluding remarks. In L. B. Resnick & L. E. Klopfer (Eds.), Toward the thinking curriculum: Current cognitive research (Yearbook for the Association for Supervision and Curriculum Development, pp. 206–211). Alexandria, VA: Association for Supervision and Curriculum Development.
Reynolds, R. E., & Anderson, R. C. (1982). Influence of questions on the allocation of attention during reading. Journal of Educational Psychology, 74, 623–632.
Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to generate questions: A review of the intervention studies. Review of Educational Research, 66, 181–221.
Scardamalia, M., & Bereiter, C. (1992). Text-based and knowledge-based questioning by children. Cognition and Instruction, 9, 177–199.
Singer, H. (1978). Active comprehension: From answering to asking questions. The Reading Teacher, 31, 901–908.
Singer, H., & Donlan, D. (1982). Active comprehension: Problem-solving schema with question generation for comprehension of complex short stories. Reading Research Quarterly, 2, 166–185.
Taboada, A., & Guthrie, J. T. (2004). Growth of cognitive strategies for reading comprehension. In J. T. Guthrie, A. Wigfield, & K. C. Perencevich (Eds.), Motivating reading comprehension: Concept-oriented reading instruction (pp. 273–306). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Taylor, B. M., & Frye, B. J. (1992). Comprehension strategy instruction in the intermediate grades. Reading Research and Instruction, 32, 39–48.
Trabasso, T., Secco, T., & van den Broek, P. W. (1984). Causal cohesion and story coherence. In H. Mandl, N. L. Stein, & T. Trabasso (Eds.), Learning and comprehension of text (pp. 83–111). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
van den Broek, P., & Kremer, K. E. (2000). The mind in action: What it means to comprehend during reading. In B. M. Taylor, M. F. Graves, & P. van den Broek (Eds.), Reading for meaning: Fostering comprehension in the middle grades (pp. 1–31). New York: Teachers College Press.
van den Broek, P., Tzeng, Y., Risden, K., Trabasso, T., & Basche, P. (2001). Inferential questioning: Effects on comprehension of narrative texts as a function of grade and timing. Journal of Educational Psychology, 93, 521–529.

Van der Meij, H. (1990). Question asking: To know that you do not know is not enough. Journal of Educational Psychology, 82, 505–512.
Wittrock, M. C. (1981). Reading comprehension. In F. J. Pirozzolo & M. C. Wittrock (Eds.), Neuropsychological and cognitive processes in reading (pp. 229–259). New York: Academic.

APPENDIX A
Questioning Hierarchy

Level 1: Factual Information
Questions are simple in form and request a simple answer, such as a single fact. Questions are a request for a factual proposition or yes/no answers. They are based on naive concepts about the world rather than a disciplined understanding of the subject matter. Questions refer to relatively trivial, nondefining characteristics of organisms (plants and animals), ecological concepts, or biomes.
Examples for text about animals: How big are bats? Do sharks eat trash? How much do bears weigh?
Examples for text about biomes and organisms: Are there crabs in a river? How old do orangutans get? How big do rivers get? How big are grasslands? How many grasslands are there? How many rivers are there in the world? How many plants live in ponds?

Level 2: Simple Description
Questions are a request for a global statement about an ecological concept or an important aspect of survival. Questions may also request general information that denotes a link between the biome and organisms that live in it. The question may be simple, yet the answer may contain multiple facts and generalizations. The answer may be a moderately complex description or an explanation of an animal's behavior or physical characteristics. An answer may also be a set of distinctions necessary to account for all the forms of species. Questions also include classifications or general taxonomies of species.
Examples for text about animals: How do sharks have babies? How do birds fly? How do bats protect themselves? What kinds of sharks are in the ocean? What types of places can polar bears live? What kind of water do sharks live in? How many eggs does a shark lay? How fast can a bat fly? How far do polar bears swim in the ocean?
Examples for text about biomes and organisms: What bugs live in the desert? What kind of algae are in the ocean? How do desert animals live? How do grasslands get flowers and trees? How come it almost never rains in the desert? How long do sandstorms last? Why do rivers start at a hilltop?

Level 3: Complex Explanation
Questions are a request for an elaborated explanation about a specific aspect of an ecological concept, with accompanying evidence. The question probes the ecological concept by using knowledge about survival or animal biological characteristics. Questions may also request information that denotes a link between the biome and organisms that live in it. Questions use defining features of biomes to probe for the influence those attributes have on life in the biome. The question is complex, and the expected answer requires elaborated propositions, general principles, and supporting evidence about ecological concepts.
Examples for text about animals: Why do sharks sink when they stop swimming? Why do sharks eat things that bleed? How do polar bears keep warm in their dens? Why do sharks have three rows of teeth? Why is the polar bear's summer coat a different color? Do fruit-eating bats have really good eyes? Do owls that live in the desert hunt at night?

Examples for text about biomes and organisms: What kinds of meat-eating animals live in the forest? Why do elf owls make their homes in cactuses? What makes the river fast and flowing? How do animals in the desert survive long periods without water? If the desert is hot, how can animals be so active?

Level 4: Pattern of Relationships
Questions display science knowledge coherently expressed to probe the interrelationship of concepts, the interaction with the biome, or interdependencies of organisms. Questions are a request for principled understanding with evidence for complex interactions among multiple concepts and possibly across biomes. Knowledge is used to form a focused inquiry into specific aspects of biological concepts and an organism's interaction with its biome. Answers may consist of a complex network of two or more concepts.
Examples for text about animals: Do snakes use their fangs to kill their enemies as well as poison their prey? Do polar bears hunt seals to eat or feed their babies?
Examples for text about biomes and organisms: Why do salmon go to the sea to mate and lay eggs in the river? How do animals and plants in the desert help each other? How does the grassland help the animals in the river? How are grassland animals and river animals the same and different? Is the polar bear at the top of the food chain?

APPENDIX B
Knowledge Hierarchy

Level 1: Facts and Associations – Simple
Students present a few characteristics of a biome or an organism.
Example: In grasslands are lions, tigers, zebras.

Level 2: Facts and Associations – Extended
Students correctly classify several organisms, often in lists, with limited definitions.
Example: Animals live in a desert. They like to live there because it's nice and warm. Ducks like to drink water in the pond. They are different because one of them is wet and the other dry. Snake and bears, birds, live in the deserts. They help each other live by giving the animals water and some food that's what the mothers do.

Level 3: Concepts and Evidence – Simple
Students present well-formed definitions of biomes with many organisms correctly classified, accompanied by one or two simple concepts with minimal supporting evidence.
Example: Deserts are different than ponds because deserts have a little bit of water and ponds have a lot of water. The animals that live in a pond are snakes, fish, bugs, ducks, and plants. The plants that live in a pond are grass and seaweed. The animals and plants that live in a desert are rattlesnakes, foxes, rabbits, owls, woodpeckers. The plants that live in a desert are cactus, little grass, small trees. Some of the animals eat plants. The plants eat the food in the soil and the little rain. The animals help the plants live by when the animals step on the ground it makes it a little soft and it is easy for the plants to grow. The plants help the animals by bringing some animals close so other animals can catch them and eat them. The animals also help the plant when some of the bugs that drink the plants nectar carry things from one plant to another.

Level 4: Concepts and Evidence – Extended
Students display several concepts of survival illustrated by specific organisms with their physical characteristics and behavioral patterns.
Example: Some snakes, which live in the desert, squeeze their prey to death and then eat them. This is called a deadly hug. Bright markings on some snakes are warnings to stay away. In the desert two male jackrabbits fight for a female. Some deserts are actually cold and rocky. Both deserts hot or cold, it barely ever rain and if it does it comes down so fast and so hard it just runs off and does not sink into the ground.

Level 5: Patterns of Relationships – Simple
Students convey knowledge of relationships among concepts of survival supported by descriptions of multiple organisms and their habitats.
Example: A river is different from grassland because a river is body of water and grassland is land. A river is fast flowing. Grasshoppers live in grasslands. A grasshopper called a locust lays its egg in a thin case. One case could carry 100 eggs. The largest herbivores in the grassland are an elephant. In the African savanna meat-eats prey on grazing animals, such as zebra. Many animals live in grasslands. The river is a home to many animals. In just a drop of river water millions of animals can be living in it. Many fish live in the river. Many birds fly above the grasslands and rivers. A river is called freshwater because it has no salt in it.

Level 6: Patterns of Relationships – Extended
Students show complex relationships among concepts of survival, emphasizing interdependence among organisms.
Example: River and grassland are alike and different. Rivers have lots of aquatic animals. Grasslands have mammals and birds. Rivers don't have many plants but grassland have trees and lots of grass. Rivers have lots of animal like fish trout and stickle backs. They also have insects and mammals, like the giant water bug and river otters. Grasslands usually have lions, zebras, giraffes, antelope, gazelles, and birds. In rivers the food chain starts with a snail. Insects and small animals eat the snail. Then fish eat the small animals and insects. Then bigger animals like the heron and bears eat the fish. Snails also eat algae with grows form the sun. In the grass lands the sun grown the grass. Animals like gazelle, antelope, and zebra eat the grass. Then animals like lions eat them. This is called a food chain of what eats what. In a way the animals are helping each other live. Animals have special things for uses. Otters have closable noses and ears. Gills let fish breath under water. Some fish lay thousands of egg because lot of animals like eating fish eggs. Some animals have camouflage. Swallow tail butter fly larva look like bird droppings. That is what I know and about grasslands rivers.

APPENDIX C
Ecological Concepts

Reproduction: All plants and animals have behaviors, traits, and adaptations designed to ensure reproduction of their species.
  Traits, behaviors, or features encompassed by the concept: egg laying, mating, sexual communication.

Communication: Critical to all aspects of the life of plants and animals.
  Traits, behaviors, or features encompassed by the concept: songs, chirps, odors, chemicals, patterns, colors, shape, behavior.

Defense: All plants and animals must have adaptations for defense from predators, enemies, and the environment in order to survive.
  Traits, behaviors, or features encompassed by the concept: types of bodies, types of appendages, camouflage, warning colors, mimicry, where in the habitat they live, how they move, scales, shell, teeth, movement in groups, eyes.

Competition: Because most critical resources are shared and in limited supply, competition in plants and animals is often observed.
  Traits, behaviors, or features encompassed by the concept: conflict, amount of available food, size of organisms, feeding preference (specialization on food type or general feeder), morphological or behavioral adaptations.

Predation: Although feeding on plants is very common, predation is a frequently observed interaction among animals.
  Traits, behaviors, or features encompassed by the concept: chasing or seeking other animals, running or hiding, behavioral adaptations for chasing, seeking other animals, types of mouths and feeding, types of bodies, types of appendages, camouflage, warning colors, mimicry, where in the habitat they live, how they move, teeth.

Feeding: The search for food and the interactions involved in feeding are critical if animals and plants are to acquire the nutrition needed for growth and development.
  Traits, behaviors, or features encompassed by the concept: teeth, location in habitat, response to other animals, eyes.

Locomotion: Locomotion allows organisms to undertake all needed requirements of life and usually reflects a close adaptation to their habitat.
  Traits, behaviors, or features encompassed by the concept: feet, fins, tail, ways of swimming, suction cups, webbed feet.

Respiration: Respiration is an essential process for the acquisition of oxygen, without which most life cannot proceed.
  Traits, behaviors, or features encompassed by the concept: gills, lungs, skin.

Adjustment to habitat: Physical and behavioral characteristics of plants and animals enable them to survive in a specific habitat.
  Traits, behaviors, or features encompassed by the concept: animals' and plants' physical adaptation to habitat (e.g., penguins have webbed feet, polar bears have thick fur, camels can store water).

Niche*: Function of a species in a habitat described by the use of resources and its contribution to other species' survival.
  Traits, behaviors, or features encompassed by the concept: dam building, recycling, scavenging, population control, habitat conservation.

*The concept of niche was used only in Grade 4.
