
EDR 626 Final Exam Essay Question 1

Reading Assessment

Winter 2012

Bridget Rieth

Scenario: A colleague expresses concern that the source of Josh's poor academic performance in 3rd grade social studies is poor understanding of text. The colleague asks, "How can I understand what is going on with my student's understanding of text?"

Response: As a starting point, we can examine Josh's scores on the NWEA Measure of Academic Progress (MAP) Reading test to see where he falls in relation to a normed group of third graders around the country, as well as within his own school community. This is a standardized, norm-referenced test that is administered on a computer. As he will do two or three times over the course of a school year, Josh will sit for the untimed test, reading on-screen text samples and answering multiple choice questions; the test will attempt to gauge Josh's comprehension of the text. At the close of the test, we will have immediate access to a Lexile range score and a RIT score for Josh. The Lexile score can give us an idea of Josh's ability to respond to text at a particular level of word frequency and sentence length. The RIT (Rasch unIT) score is an NWEA reporting device that uses the Rasch model to analyze a student's performance in relation to item difficulty. Within 48 hours we will also have a class/grade level composite report against which we can place Josh's score to compare his performance to that of his peers. The advantages of the MAP test are that it is simple to administer, it lets us collect data consistently over time to measure Josh's growth, it is administered impartially with objective, computer-based scoring, and it returns Josh's results quickly. As a standardized test choice, the MAP test holds an advantage over the state-administered Michigan Educational Assessment Program (MEAP) test in that the MAP test is adaptive: it adjusts itself in response to the student's performance on each individual question. (The MEAP test is also currently abysmally slow in returning data, making it highly ineffective for informing instruction for a particular student.)
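This adaptive behavior can be sketched as a simple up/down ("staircase") rule. The sketch below is a toy illustration only: MAP's actual item selection is driven by a Rasch-calibrated item bank, and the starting value, step size, and item count here are invented.

```python
# Toy sketch of adaptive item selection: difficulty rises after a correct
# answer and falls after an incorrect one. This is a simplification for
# illustration, not NWEA's actual Rasch-based algorithm; the numbers
# (200, 10, 5) are invented.

def run_adaptive_test(answer_item, start_difficulty=200, step=10, n_items=5):
    """Present n_items questions, adjusting difficulty after each response;
    return a list of (difficulty, was_correct) pairs."""
    difficulty = start_difficulty
    history = []
    for _ in range(n_items):
        correct = answer_item(difficulty)  # True if the student answers correctly
        history.append((difficulty, correct))
        difficulty += step if correct else -step
    return history

# Simulated student who reliably answers items below difficulty 220:
print(run_adaptive_test(lambda d: d < 220))
```

With the simulated student above, the difficulty climbs from 200 until items at 220 prove too hard, then oscillates around the student's threshold, which is exactly the "dialing in" behavior described.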
As Josh answers items correctly, the test offers increasingly challenging questions, dialing in on Josh's individual skill level rather than presenting a predetermined set of questions to all students. Likewise, when Josh answers a question incorrectly, the test offers a less challenging question, again to find the text and task range in which Josh is able to perform successfully. When this testing program is implemented on a two- or three-times-per-year administration schedule, we will also be able to examine Josh's data across the last several testing cycles to see whether his scores show growth and development or remain static over time. The information we receive from this test gives us an understanding of where Josh's reading performance falls in relation to his peers as well as over time. In interpreting this data, we may be able to detect a gap between the level at which Josh is able to respond to test items and the level at which the typical third grade student is performing, and begin to see if reading comprehension is indeed an issue for this student. Using the Lexile range, we may get a general sense of the text level that Josh is able to understand successfully. We can then compare that with the level of text he is asked to process in social studies class to see if there is a mismatch between the current grade level materials and his skill level, which would suggest the need to offer alternate materials that present the concepts to Josh at his level.

The Lexile range, however, may not take into consideration Josh's comprehension of the content of the text. The score will not show whether it is the particular subject concepts that Josh is struggling with, due to a lack of background or vocabulary knowledge, or perhaps inexperience with the types of texts presented in social studies class (i.e., nonfiction, biography, primary source documents, etc.). Likewise, Josh's RIT score will reflect his performance on the test-selected items, none of which may be related to the type of reading that Josh is being asked to do in social studies. Reading text samples and answering multiple choice questions may not be the best task comparison for the work Josh is expected to do in class. He is more likely required to read, discuss, and respond in writing to class content, so if Josh is actually having difficulties with oral language, processing, or written response, we should not expect the MAP test to show it. And finally, while the results of the MAP test may indicate that Josh has difficulties with reading comprehension, they will not give us the full diagnostic information we need to determine an instructional path to help Josh grow and develop in this area. I recommend checking those scores as an available and fairly reliable entry point to Josh's reading profile, but the scores will likely only provide some evidence to support your suspicion that reading comprehension is getting in the way of Josh's performance in social studies.

The next level of assessment I would suggest with Josh is the Fountas and Pinnell Benchmark Assessment, which will allow a closer look at Josh's reading skills. This assessment is a modified Informal Reading Inventory, which attempts to offer a look at multiple facets of a student's reading process.
Josh will sit with the instructor and read aloud text selected to home in on his independent and instructional reading levels. If there is no previous data concerning these levels, we can begin with a word list assessment, wherein Josh will read a series of word lists until he reads five or more words incorrectly; this will help determine at which level to begin the full assessment task. After selecting the approximate text level, the teacher will gather a running record of Josh's oral reading performance, timing Josh's reading to obtain a words-per-minute rate and listening for reading fluency, all in an attempt to gain a better picture of Josh's reading process. About halfway through the text, the teacher will ask Josh to finish reading silently, to probe for Josh's ability to process text both aloud and internally. When he is done, the assessor will lead a comprehension conversation, using prescribed questions and listening for specific answers to determine Josh's understanding of the text he has just read. The questions will fall into the categories "Within the Text" for surface-level details, "Beyond the Text" for inferential thought, and "About the Text" for a look at the written constructs of the text, i.e., looking at the text like a writer. After gathering sufficient information from the comprehension conversation, the teacher will ask Josh to respond in writing to a prompt about the text. There are multiple data points that can be gathered and examined when using this assessment, and in some ways we can look at it as two different levels of testing. On one level, we can look at Josh's ability to process text smoothly and successfully, and his ability to engage in an informed conversation about the material afterwards. We can make note of his reading rate, which often, though not always, can give us a picture of his ability to process text fluently. An overly slow reading rate may get in the way of capturing and holding onto information.
An overly fast reading rate may suggest that the reader is a rapid "word caller" who is not attending to meaning. These issues might be revealed in the student's level of success in the comprehension conversation. In another indicator of fluency, we can listen to Josh's phrasing as he reads: is he reading smoothly, with expression and appropriate attention to punctuation and other text features? Or is his reading choppy and broken up by many stops, repetitions, or errors? The running record and an attentive assessor can help reveal issues in these areas. Included in this first level of information is the opportunity to converse with Josh to probe for his understanding of the text. Although we are asked to look for some fairly specific information in his responses that shows his comprehension, we have some latitude in talking with him to get at whether or not he does indeed understand. While careful not to lead Josh to a particular answer, we can prompt for more information or for expansion on the ideas he shares. We will ask about the text in multiple ways and from multiple angles (within, beyond, and about the text), which in my opinion serves well to dig more deeply into the student's ability to think at different levels about a given passage.

After we are satisfied that the comprehension conversation has revealed as much as possible about Josh's understanding, we'll ask him for a written and possibly illustrated response to the text, using a prescribed prompt. A reader's writing sometimes reveals additional information about his or her understanding or application of the information provided by the text, and can be an excellent artifact to gather. (As a bonus, it also gives us a look at Josh's writing abilities.) The Benchmark Assessment is designed so that we can look at multiple levels of achievement by asking Josh to read and respond (orally and in writing) to as many text levels as we deem necessary to get an accurate picture of where he is able to process text independently, and where he is able to process text instructionally, i.e., with the support of a teacher.
This can help us determine whether the text that Josh is being asked to use in social studies is a good fit for his ability levels, or whether he might need additional support or alternative materials to gain information about the third grade curriculum concepts. Using the within, beyond, and about the text framework, we might also gain information about Josh's ability to process details, to think more deeply to reach more sophisticated understandings, or perhaps to process certain types of text structures. (We will likely want to use the nonfiction text materials in the assessment to see how Josh does with text that is more closely related to what he might be working with in class.) Again, we may be able to determine areas of comprehension where Josh could use some extra support as well as direct instruction. One caveat of this first level of information from the Benchmark Assessment is that, although it offers us a fiction and a nonfiction text at each instructional level, these may or may not directly reflect the sort of text that Josh is using in social studies class. Likewise, while the assessment makes an attempt at activating the student's thinking before reading with an introductory sentence or two, it does not provide for an assessment of the student's prior knowledge of, or interest in, the text content, and will not help us to know whether that is also a stumbling block with the social studies materials he uses in class. It does offer an additional Vocabulary in Context assessment, which might help detect a difficulty with word-level comprehension, but again this relates only to the assessment's own text content, not Josh's social studies texts. Another possible limitation is that Josh's responses on the Benchmark Assessment will be evaluated by the teacher, which can lead to subjective assessment.
A teacher with more experience using this assessment who is also knowledgeable about the process of reading development can help mitigate some of the subjectivity, and careful adherence to the test guidelines will also reduce some of the limitations this sort of assessment can contain. A collaborative evaluation of the data with other grade-level or consulting professionals could also help to raise the validity of the analysis.

The third type of assessment I would recommend is a word-level analysis, to try to get at what exactly is going on when Josh processes text. The trouble he is having with his social studies text is likely an issue he is having with many types of text. So in order to provide appropriate instruction to help him grow his reading skills, it would be helpful to take a close-up look at what Josh is able to do and what Josh still needs to develop. Fortunately, the Benchmark Assessment provides us a sample of his word-level processing, in the form of the running record that we will have gathered. One of the affordances of this system by Fountas and Pinnell is that it allows us to gather multiple types of information within the same testing session. It is worth noting that this type of assessment can also be done using a text excerpt directly from the student's classwork, and this might be advisable, especially if the text structure of the curriculum materials is significantly different from the Benchmark testing materials. Administering a running record on an additional text sample is also a good idea, in order to gain sufficient information about Josh's text processing, i.e., enough data to constitute a robust sampling of the types of errors he makes while he reads. Through word-level analysis, we might gain a better understanding of multiple aspects of Josh's reading process. We can quantify his overall accuracy rate and his typical rates of repetition, omission, insertion, and self-correction. This may help with an understanding of Josh's fluency and how it is affecting his comprehension.
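The arithmetic behind these tallies can be sketched as follows. The formulas follow common running record conventions (accuracy counts errors against total running words; the self-correction ratio counts self-corrections against errors plus self-corrections), but the sample counts below are invented for illustration.

```python
# Sketch of common running record arithmetic; the sample tallies used here
# (200 words, 8 errors, 4 self-corrections, 150 seconds) are invented.

def accuracy_rate(running_words, errors):
    """Percent of words read correctly; self-corrections are not errors."""
    return 100.0 * (running_words - errors) / running_words

def self_correction_ratio(errors, self_corrections):
    """Returns n, meaning the reader self-corrects about 1 in every n chances."""
    return (errors + self_corrections) / self_corrections

def words_per_minute(running_words, seconds):
    """Oral reading rate from the timed portion of the record."""
    return 60.0 * running_words / seconds

print(accuracy_rate(200, 8))        # 96.0 (percent)
print(self_correction_ratio(8, 4))  # 3.0, i.e., about 1 in 3
print(words_per_minute(200, 150))   # 80.0 words per minute
```

Numbers like these make it easy to compare records across testing cycles, though deciding what counts as independent or instructional accuracy is left to the assessment's own guidelines.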
Even more helpful in determining instructional support for Josh's development, we can analyze the miscues that he makes while reading to determine which systems of text processing he uses most often, and any that he seems to be neglecting at this stage. For example, if Josh is using only grapho-phonic cues (sound-letter information) when attempting to decode an unknown word, and not considering meaning or syntax within the text, we can work with him on expecting and looking for the semantic component of a word and how it fits in with the overall text meaning. If he is using only semantic cues and not grapho-phonic information, we can work with Josh to practice looking at the whole word when reading. Word-level assessment truly gives us some of the best information to inform effective instruction with a student. It is definitely time-consuming, and could be difficult to gather and finely analyze for each and every student in a typical class; however, it is extremely helpful when looking at a reader who is in need of support or instruction to move him toward reading on grade level. Again, one affordance of the Benchmark system is that it provides the opportunity to take a snapshot of every assessed reader's processing at the word level, but more complete results will require a larger text sample than the testing typically provides. After gathering these various pieces of evidence about Josh's reading skills, you should be able to determine whether or not reading comprehension is impeding his success in social studies, and you will also have a large amount of information to help you create instruction designed to support Josh's learning needs.

Essay Question 2

We began this course with Snow and Sweet's model of factors (reader, text, activity, and socio-cultural context) that affect text comprehension.
a. Explain that model in your own words. Refer to Snow and Sweet's work to help you do so, using appropriate citations. Their chapter is available on the BB site under Readings; a more extensive monograph is available online from Rand Education: http://www.rand.org/pubs/monograph_reports/2005/MR1465.pdf
b. Demonstrate how the interaction of reader, text, activity, and context could be seen to affect learners' performance on the assessments that you conducted. Provide three examples for the comprehension assessment and one example for the word-level skills assessment.
c. For each example that you provided in b above, describe the implications for the construction, administration, interpretation, and application of information from your assessments.

Snow and Sweet's model lists reader, text, and activity as the three factors that interact to affect comprehension and engagement within a reading encounter, surrounded by the fourth aspect, sociocultural context, which encompasses all three factors, informing or altering each. Within the task of "extracting and constructing meaning from text" (Snow, 2002), each of the initial three factors presents a range of affordances and limitations to a successful outcome of the reading task. The reader brings his or her own processing skills, intellectual abilities, level of motivation, and background knowledge of the text content. The text offers a wide range of supports or challenges depending on its genre, level, structure, graphic elements, literary and linguistic structure, and content. The activity informs the reader's purpose for engaging with the text, the process the reader will use to do so, and the desired or actual results of the reader's efforts. The fourth consideration, socio-cultural context, is woven throughout the entire configuration. In relation to the reader, it informs the way he or she processes and interprets the content; in relation to the text, it can vary depending on the content, writer, publication locus, or historical circumstance; concerning the activity, the context presents a particular set of expectations predicated on social constructs or community customs and beliefs.

For example, in my work with Will, I was able to see several examples of the above interactions, most notably in the Think Aloud classroom-based assessment. The interaction between Will (reader) and the Think Aloud process (activity) brought a surprising result. As observed in multiple other reading instances prior to the Think Aloud assessment, and again since that time, Will is a reader who frequently comments during his reading in an instructional setting.
He usually seems comfortable sharing his thoughts on the content, his own comprehension, and even his own decoding process with me as his teacher, with or without being prompted to do so. Examples of Will's unsolicited comments from a Benchmark Assessment session include: "I know this word but don't know how to pronounce it," "I'm going to read this little caption now," and, inserted in a description of snakes having no eyelids, "I wouldn't want to be a snake. They lick their eyes." Upon explaining the Think Aloud procedure to Will, I expected to hear abundant comments concerning both comprehension and decoding. This was not the case, however, as Will interpreted the task in a very limited way, which caused him to limit his oral conversation as well. He commented only when I asked him to do so, and his responses reflected only the storyline of the book, not his own interpretations, connections, or processing. This interaction between reader and activity is important to keep in mind for future assessments, knowing that Will is likely to respond in a much more complete way when he does not feel the constraints of a formal expectation on his oral sharing. In subsequent sessions or follow-up assessments, it would be best to allow Will to follow his natural inclination to share orally, making sure to record these observations for later analysis. If another Think Aloud task is introduced, the same construct might apply, with teacher stops to prompt Will only at junctures where specific information is desired but not forthcoming. It might also be a good idea to explain to Will that a more open response spectrum would be wonderful, just as he is used to at other times when he reads with his teacher. In interpreting the data from the initial Think Aloud activity, it was important not to assume that Will's processing was limited to the plotline, and having some written commentary from other assessment situations (Benchmark Testing) was helpful in gaining a broader understanding of how Will approaches text.

A rather basic illustration of the interaction between text and reader was evident when Will was asked about the job of a manager. As this was an unfamiliar concept to him, he was not able to satisfactorily address it as one of the important ideas in the text, in either the retelling or the follow-up questions. In this example, the content of the text interacted with Will's lack of prior knowledge about this particular idea.
Knowing this, one cannot conclude a lack of text comprehension simply from the omission of this particular idea, but rather that the text contained a concept beyond his experience. Will would certainly benefit from vocabulary support when a text or a single concept is outside his range of schema. A glossary or a pre-reading discussion could help fill in the missing information. Another interesting example from the retelling portion of the Think Aloud assessment was Will's move to dialogue as a way to communicate the storyline. He told much of the story with phrases such as "and then they were like," "and then he said," and "and then she said," even though there was no dialogue in the book, just a straight narrative telling. I believe this is due to the interaction between text and activity. The book is about people, actual historical figures in fact, and so Will interpreted their story as a series of negotiated encounters between people, and that would call for dialogue. His strategy for meeting the demands of the retelling activity, which at first he thought would be very difficult, was to turn the narrative text into something new and effective for his thinking. Figuring out who will do what, how they will do it, and when it will happen involves communication between people, so Will quite naturally inferred what they might have said to one another. This is particularly interesting in light of the fact that I have been designing instruction to develop Will's ability to infer. Transferring this to further instruction with Will, I will continue to work with text that involves people, as he seems to have an easier time extending his thinking about how people might think and feel than inferring about concepts or other more abstract pieces of information. We are now focusing on longer biographical texts to link into that interaction.

In the word-level assessment work with Will's reading samples, I observed the effect of the interaction between reader and text. As the text (Queen of the Falls) was somewhat above Will's independent reading level and held multiple levels of challenge (historical context, vocabulary, and complex syntax, for example), Will relied frequently on graphic cues to support his understanding. Where the text was less complicated, he successfully negotiated it using multiple processing approaches in a more balanced combination: syntax, semantics, and grapho-phonic cues. When the text was more challenging, Will would search the illustrations for clues. If none were present, he would often miscall a word based on an initial letter sound or chunk, rather than thinking about the word's meaning or its place in the sentence. The instructional implications of this interaction are twofold. First, to support comprehension where the meaning of the text takes precedence in a lesson, it would be best to provide Will with text that has strong graphic support. Second, it would be worthwhile to work with Will directly on accessing all three cueing systems consistently, especially as he has shown himself capable of using them all efficiently.

Essay Question 3
Page 2 of the course syllabus lists the course objectives for EDR 626 with respect to assessment, instruction, and professional dispositions, as articulated by our accrediting agency, the International Reading Association, in the Standards for Reading Professionals. Choose two of these objectives/standards and, for each one, state the ways in which you have developed your knowledge base, analytical or critical thinking, professional practices, and/or professional relationships, as well as your ability to articulate your ideas in the course of the study that you have engaged in for EDR 626. Provide examples to support your assertions.

Objective: Candidates will select from a range of assessment tools, or develop tools, appropriate to purpose; administer assessments; and state the strengths and limitations of those tools. Also, compare and recommend assessment tools and demonstrate administration. (IRA 3.1)

I have a much deeper, more complete knowledge and understanding of several different types of assessment tools as a result of this class. Much of our discussion around standardized tests supported the ideas I already had about the MEAP, the Stanford, and similar testing programs. As the federal and state governments continually move toward using such data to evaluate teachers, schools, and districts, I have seen these tests and the data gathered from them become less about examining individual student progress and informing instruction for that student in a way that is timely and effective. As our district adopts a new standardized assessment program, NWEA's MAP test (referenced in Essay Question 1), this class was helpful in facilitating some thought about how best to gather and analyze this data in a way that will serve individual students as well as develop my own teaching over time. It caused me to look more deeply into the structure and research behind this test and to understand its reporting structure. Finally, the class helped me put this test into perspective as one small snapshot among the many opportunities I have for gathering information about my students' reading development.

Our district has spent the last three years implementing the Fountas and Pinnell Benchmark System throughout all elementary grades, and while we have received a fair amount of training in the administration of the test and analysis of the data, this class helped me gain confidence with the system, understand the affordances and limitations of the testing, and gain a clearer picture of the path between assessment and instruction. I find myself appreciating all that is built into the system: I tried a few of the alternative tests and now see the application for some of these tools within my work with students. Having been working to master the miscue analysis tools built into the Benchmark System, I appreciated the guidance and time to gain confidence in that skill. Our Think Aloud activity opened up a new avenue of assessment possibilities, and I am looking forward to implementing more of this style of authentic task assessment in my classroom. While it is helpful to know of a catalog of prepared assessment tools (I enjoyed looking at other IRI forms, word-level assessments, and oral language tests), the ability to formulate, administer, record, analyze, interpret, and implement results in instruction is the key to responsive, effective, student-centered instruction. This new knowledge, paired with my newly acquired ability to take a spontaneous running record on a third grade reader more easily, helps me feel empowered to be a more skilled educator.

Objective: Candidates will communicate purposes, procedures, results, implications of assessments to student, parents, administrator; provide artifacts of student work as evidence. Also, interpret assessments in the context of school data. (IRA 3.4)

Presenting our ongoing assessment, analysis, rationale, and recommendations in a formal report format was a new experience for me, one that will enhance my ability to communicate professionally, concisely, and effectively with the stakeholders in a student's learning. Although it was sometimes challenging work, I appreciated the chance to work through a reporting process with models and guidance to follow. This will be especially helpful if I find myself in a reading consultant position one day, but it is also an excellent skill to have at a time when classroom teachers are asked to take over more and more of the intervention-level work with students. Whether reporting to a child study team, an administrator, a colleague, or the ultimate stakeholders (the student and his or her parents), I feel more prepared to assess, analyze, and share my findings.
