
ACCESS for ELLs Final Paper

TESL 628
Dr. Koubek
Kristi Fletcher

The Assessing Comprehension and Communication in English State-to-State for English Language Learners (ACCESS for ELLs) standardized test is a practical, high-stakes, standards-based assessment developed by the World-Class Instructional Design and Assessment (WIDA) Consortium. WIDA is a private institution initially funded through federal grants to develop a standards and assessment system that would help states meet No Child Left Behind legislation criteria regarding English Language Learners (ELLs) (Fox & Fairbairn, 2011, p. 425). States are required to measure students annually with a composite English comprehension score in addition to individual scores in listening, reading, writing, and speaking (Kenyon, MacGregor, Li, & Cook, 2011, p. 383). States that have adopted this test pay twenty-three dollars per student to have their ELL students assessed via the WIDA-developed standards and assessment process. All states are required to report data on three measures: the number or percentage of students making progress, the number or percentage of students who attain English proficiency by the end of each school year, and proof of adequate yearly progress in language arts and mathematics (Freeman & Freeman, 2011). Students who have not passed the ACCESS for ELLs test's measure of English proficiency are required by law to have access to additional instructional support, and all students who have not met proficient levels for at least five years are required to take an English language proficiency test annually (Fox & Fairbairn, 2011, p. 426). In this way the government is able to hold schools accountable to federal law regarding ELL students' equal access to education.
Individual states that choose to adopt the ACCESS for ELLs test also have some leeway in what they deem proficient English levels, apart from the WIDA Consortium's recommendations (Fox & Fairbairn, 2011, p. 426). States use students' scores to hold public schools in the state accountable for overall ELL proficiency levels and growth. In some states teachers may use alternative SOL assessments for students who have demonstrated low English proficiency levels on the ACCESS for ELLs test. State governments expect that individual schools and teachers will use the ACCESS for ELLs standards, in addition to the Common Core Standards or Standards of Learning, to guide instructional content and assessment (Fox & Fairbairn, 2011, p. 426). States that have adopted the ACCESS for ELLs test have done so because they believe it lines up with their existing state-mandated curriculum and educational objectives. This dovetailing ensures that the ACCESS for ELLs assessment has content validity for the population of students it is assessing.
Schools use WIDA performance standards and ACCESS for ELLs assessment scores to determine instructional placement, instructional strategies, and overall goals for ELL students. Many schools use ACCESS for ELLs scores to determine which ELL students will have preferential access to support (Kenyon, MacGregor, Li, & Cook, 2011, p. 384). Although federal legislation requires that ELL students be given support in order to ensure fair access to education, states are able to set their own standards of proficiency, which creates a loophole through which schools can exit ELL students from support early. Schools often use ACCESS for ELLs assessment scores as a gateway test despite WIDA's caution against such use. Regardless of their proven validity and reliability, ACCESS for ELLs test scores reflect a single sample collected amidst factors such as fatigue, disinterest, and unfamiliar testing procedures (Peregoy & Boyle, 2008). Because of the high level of importance socially placed on the ACCESS for ELLs test and its single-snapshot approach, teachers' knowledge of the assessment and ability to interpret its results become crucial in allowing it to shape student goals and instructional modifications (Peregoy & Boyle, 2008).
WIDA English Language Development Standards
WIDA English Language Development Standards drive the focus of the ACCESS for
ELLs standardized test. Researchers created a set of English language proficiency standards
organized according to students' grade level. Certain grade levels were clustered together to
account for the wide variety of maturational and linguistic competencies that exist in normal
classrooms. The five grade level clusters are kindergarten, grades 1-2, grades 3-5, grades 6-8,
and grades 9-12 (Fox & Fairbairn, 2011, p.427). Standards are organized according to
content/subject areas and English language proficiency levels. All of the content and English
proficiency standards are geared towards assessing academic language. Academic language is
broken down into five areas: Social and Instructional Language (SIL), Language of Language Arts
(LoLA), Language of Math (LoMA), Language of Science (LoSC), and the Language of Social
Studies (LoSS) (Kenyon, MacGregor, Li, & Cook, 2011, p.386). The WIDA English Language
Development Standards also include benchmarks for expected knowledge and skills, in addition
to progress indicators to help teachers assess how students are meeting a certain standard
(Peregoy & Boyle, 2008). The national organization of Teachers of English to Speakers of Other
Languages (TESOL) has adopted these standards as meeting the criteria of No Child Left Behind
legislation (Fox & Fairbairn, 2011, p.425). WIDA standards were published in 2007 and 2014.
The organization is currently collecting feedback on standards for future modifications.


ACCESS for ELLs is a criterion-referenced, formal, summative proficiency assessment.
The test is administered annually in February. The ACCESS for ELLs standardized test is broken
down into five different folders based on content area. Within each of these content areas each
of the four language domains (listening, speaking, reading, and writing) is assessed. Writing tasks
combine content areas for practical assessment purposes. Test researchers (for example, the
Wisconsin Center for Education Research at the University of Wisconsin-Madison) develop new
questions annually in order to ensure test validity from year to year (Fox & Fairbairn, 2011,
p.427). Each year one-third to one-half of the test items are retired (Fox & Fairbairn, 2011,
p.430). WIDA standards that integrate content with academic language features are used to create assessment tasks. This design reflects a belief that students' development in academic language and academic content knowledge are interrelated processes. Students' listening and reading levels are assessed via selected response, while the writing and speaking domains are assessed through extended constructed response (Kenyon, MacGregor, Li, & Cook, 2011, p. 387).
ACCESS for ELLs scores are meant to provide a measure of students' English language progress across grades on the same scale (Kenyon, MacGregor, Li, & Cook, 2011, p. 383). The standardized test's overall goal is to determine to what extent English language proficiency may interfere with the validity of the state content test (for example, the Virginia Standards of Learning). When ELL students are determined to be proficient according to the ACCESS for ELLs test, it means that their ability to perform on content-knowledge test items is not affected by interference from their language level (Kenyon, MacGregor, Li, & Cook, 2011, p. 398). ACCESS for ELLs scores thus measure the extent to which state content scores are valid and reliable for individual ELL students. The WIDA-directed standards and assessments are designed to guide students to an instructional level where they are able to take the state content assessments along with mainstream students. ACCESS for ELLs washback allows various levels of the education system to alter policies and instruction based on student performance.
The ACCESS for ELLs assessment uses Winsteps, a computer program, to score and analyze results data (Kenyon, MacGregor, Li, & Cook, 2011, p. 392). Students' English proficiency levels are assessed on a vertical scale: students are given a consistent core set of questions regardless of their proficiency level so that all students (regardless of age) can be assessed on the same scale (Fox & Fairbairn, 2011; Kenyon, MacGregor, Li, & Cook, 2011). This set of common items creates a connection between all forms of the test regardless of age level (Kenyon, MacGregor, Li, & Cook, 2011, p. 385). In total, 25-29% of the items are common between tests (Kenyon, MacGregor, Li, & Cook, 2011, p. 392). This system was tested by WIDA in 2004 and found to be sufficiently reliable (Kenyon, MacGregor, Li, & Cook, 2011, p. 386). Students' listening and reading scores form the basis of this common item set, since both are assessed through selected response.
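Winsteps implements Rasch-family measurement models, which place every item and examinee on a single logit scale; that shared scale is what makes a common-item vertical scale possible. As an illustrative sketch (not WIDA's published scoring procedure), the dichotomous Rasch model gives the probability of a correct response as a function of the gap between student ability and item difficulty:

```python
import math

def rasch_p_correct(ability, difficulty):
    """Probability that a student with the given ability (in logits)
    answers an item of the given difficulty correctly, under the
    dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student whose ability matches an item's difficulty answers it
# correctly half the time; a student one logit above the item
# succeeds about 73% of the time. Shared (anchor) items let every
# test form be expressed on this one logit scale.
p_even = rasch_p_correct(ability=0.0, difficulty=0.0)   # -> 0.5
p_ahead = rasch_p_correct(ability=1.0, difficulty=0.0)  # -> ~0.731
```

Because every item's difficulty sits on the same logit scale, responses to the common items are enough to link the forms given to different grade-level clusters.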
Writing scores are not selected response and are therefore linked to reading scores based on a smaller range of proficiency levels (Kenyon, MacGregor, Li, & Cook, 2011, pp. 386, 394). Students' writing scores are thus adjusted based on the level determined by their reading scores. The ACCESS for ELLs writing portion contains three shorter tasks (in the content areas of SIL, LoMA, and LoSC) and one extensive task (in the content areas of LoLA, LoSS, or SIL) (Kenyon, MacGregor, Li, & Cook, 2011, p. 389). The extensive writing task is weighted three times as heavily as the shorter writing tasks (Kenyon, MacGregor, Li, & Cook, 2011, p. 390). Writing items that have a high degree of distance (three logits) from the student's reading assessment level are thrown out (Kenyon, MacGregor, Li, & Cook, 2011, pp. 387, 392). Writing tasks are scored using rubrics developed from the WIDA English Language Development Standards (Kenyon, MacGregor, Li, & Cook, 2011, p. 391).
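The three-to-one weighting described above can be sketched as a simple weighted average. This is an illustration only: the function name and the averaging form are assumptions, since the published procedure works on rubric-derived raw scores rather than this exact formula.

```python
def weighted_writing_score(short_tasks, extensive_task):
    """Combine three shorter writing task scores with one extensive
    task score, counting the extensive task three times as heavily
    as each shorter task."""
    total = sum(short_tasks) + 3 * extensive_task
    return total / (len(short_tasks) + 3)

# Three short tasks scored 3, 4, and 3 plus an extensive task
# scored 4 yield (3 + 4 + 3 + 3*4) / 6:
score = weighted_writing_score([3, 4, 3], 4)  # -> ~3.67
```

The effect of the weighting is that the extensive task contributes half of the writing total, so a weak performance there pulls the writing score down sharply.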
Speaking scores are tied to reading assessment scores in the same way: the test uses the student's performance on reading to specify a level for that student, and speaking items with a significant distance from that level are thrown out (Kenyon, MacGregor, Li, & Cook, 2011, p. 387). The speaking portion of the test is unique in that it is adaptive. Students start with basic questions that increase in difficulty with each item. If students reach a level where the question is too challenging, they stop, and that level is recorded as the student's ceiling (Kenyon, MacGregor, Li, & Cook, 2011, p. 388). At a minimum, students are given three speaking questions, which is a relatively small sample. Students at the highest proficiency level are asked to perform on all thirteen questions, which take approximately fifteen minutes (Kenyon, MacGregor, Li, & Cook, 2011, p. 391).
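The adaptive routing just described can be sketched as a simple stopping rule. The parameter names and the exact handling of the three-item minimum are assumptions; the operational test's routing is more nuanced.

```python
def speaking_ceiling(responses, minimum_items=3, maximum_items=13):
    """Simulate the adaptive speaking section: items are given in
    order of increasing difficulty, and administration stops at the
    first item the student cannot handle. `responses` is a list of
    booleans (True = adequate response). Returns the number of items
    administered, bounded by the test's three-item minimum and
    thirteen-item maximum."""
    items_given = 0
    for ok in responses[:maximum_items]:
        items_given += 1
        if not ok:
            break  # this item is the student's ceiling
    return max(items_given, minimum_items)

# A student who handles five items but misses the sixth stops after
# six items; the highest-level students answer all thirteen:
mid = speaking_ceiling([True] * 5 + [False] + [True] * 7)  # -> 6
top = speaking_ceiling([True] * 13)                        # -> 13
```

The stopping rule is also why the speaking sample can be as small as three items, the reliability concern noted above.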
Test scores are reported in a variety of formats, including raw scores (number of questions answered correctly divided by total number of questions answered), scale scores (English proficiency levels placed on a single scale), and proficiency level scores (three-digit scores that indicate the student's overall level and relative place in that level's continuum) (Fox & Fairbairn, 2011, pp. 427-428). Research has shown that composite scores have a high rate of reliability (Fox & Fairbairn, 2011, p. 427). Students' overall scores are also reported with confidence levels to give interpreters a measure of reliability (Fox & Fairbairn, 2011, p. 428).
The ACCESS for ELLs assessment has been criticized for its lack of test preparation tools, its minimal research and incorporation of feedback, and its unexamined reliability across multicultural student backgrounds (Fox & Fairbairn, 2011; Kenyon, MacGregor, Li, & Cook, 2011). WIDA has chosen not to publish sample tests in order to discourage teachers from teaching to the test in their classrooms. However, this choice means that ELL students are not given ample opportunities to practice taking the test prior to being expected to perform their best on it. Some sample items and tasks are available on the WIDA website, but those are limited and do not always reflect the way the test itself changes from year to year (Fox & Fairbairn, 2011, p. 427). The inability of students to prepare for the actual process of being tested may negatively impact the face validity of the ACCESS for ELLs assessment. These decisions may also significantly impact the accuracy of students' scores, particularly among students who have no previous experience with standardized tests.
Data generated from existing ACCESS for ELLs test performance has not been used to study the assessment's fairness toward different student groups based on demographic indicators. The test has not attempted to assess the extent to which school readiness and previous schooling may impact performance on the standardized test. Students who struggle with similar language domains have also not been analyzed in terms of trends and specific needs (Fox & Fairbairn, 2011, p. 429). Considering the known factors that affect English language growth, the number of students currently taking the test, and the high number of states that have adopted the ACCESS for ELLs test, these seem to be major oversights. General feedback has been limited thus far to the writing and speaking areas of the test, which tend to require more outside involvement and a higher degree of scrutiny due to their assessment methods.
When developing test questions, the WIDA Consortium used staff and educators to develop items that would take into account students' cultural perspectives and background knowledge (Fox & Fairbairn, 2011, p. 428). However, research has shown that most educational researchers and teachers come from a white cultural background and have demonstrated limited multicultural competencies and skills, even with students in their own classrooms. Further data needs to be collected and analyzed to ensure that students of particular ethnic or linguistic backgrounds are not at a disadvantage due to test design.
Two case study students provide a rich description of the ACCESS for ELLs test, examples of interpretation complexities, and connections between test results and instructional modifications.
Student #1: E.V.
E.V. is a third-grade student in a small city in Virginia. The ACCESS for ELLs assessment gives raw scores for each student in the areas of comprehension (listening and reading), speaking, and writing, broken down by the content area with which each language task was integrated.

E.V.'s raw scores (reported as number correct / total number of items; "—" marks entries not legible in the original report):

Comprehension (Listening and Reading)
    Social & Instructional Language     —
    Language of Language Arts           —
    Language of Mathematics             4/12
    Language of Science                 —
    Language of Social Studies          5/9
Speaking Tasks
    Social & Instructional Language     2/3
    Language Arts/Social Studies        2/5
    Mathematics/Science                 2/5
Writing Tasks (each rated out of 6 on three rubric dimensions)
    Social & Instructional Language     Linguistic Complexity 3, Vocabulary Usage 3, Language Control 3
    —                                   Linguistic Complexity 3, Vocabulary Usage 3, Language Control 2
    Language Arts & Social Studies      Linguistic Complexity 3, Vocabulary Usage 2, Language Control 3
The table above suggests E.V. demonstrates a relatively balanced English proficiency level across language domains and diverse content areas. In general, her social and instructional language is higher than her academic language, which suggests normative English language proficiency development; for ELL students, everyday language is often further advanced than the language used in school. E.V.'s comprehension scores in the areas of LoMA (4/12) and LoSS (5/9) indicate that vocabulary or grammatical features specific to those content areas are current weaknesses. Both Mathematics and Social Studies use abstract concepts and ideas, in addition to language that is not often used to describe daily life. These low scores also indicate normative ELL growth, since most ELL students struggle with abstract concepts that are disconnected from their daily lives.
Scale scores are given on a 100-600 range. E.V.'s scale scores are reported in the following areas: listening (340), speaking (307), reading (328), writing (337), oral language (324), literacy (333), comprehension (332), and composite/overall score (330). These scale scores represent the student's performance after being converted using the vertical scaling method (WIDA, 2011, p. 6). The score itself reflects where the student is in a specific language domain relative to all other ELLs (regardless of age or current level). These scores are also useful in providing detailed comparisons of year-to-year change in specific language areas (WIDA, 2011, p. 6). In E.V.'s case the standard deviation of her scale scores is .16, again supporting that her language proficiency skills are well balanced across language domains and content areas. Her composite score of 330 suggests that she is firmly in the middle of WIDA's overall English proficiency scale.
Confidence band graphs allow assessment interpreters to view the relative accuracy of score reports. If E.V. took the ACCESS for ELLs again, there is a 95% chance that her score would fall into the range provided in the report graph (WIDA, 2011, p. 10). According to E.V.'s scores, the listening assessment was the least reliable because it has the largest confidence band range, at 82 points. The writing assessment has the highest rate of reliability, with a point spread of only 30 points. Composite scores' levels of confidence reflect the reliability of the underlying scores that make up the total composite.
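The relationship between a confidence band and score reliability can be sketched with the standard normal-theory interval. This formula is an assumption for illustration; WIDA's reports plot the bands rather than publishing the computation.

```python
def confidence_band(scale_score, standard_error, z=1.96):
    """95% confidence band around a reported scale score; a wider
    band signals a less reliable score (sketch of the standard
    normal-theory interval, not WIDA's published procedure)."""
    half_width = z * standard_error
    return (scale_score - half_width, scale_score + half_width)

# E.V.'s listening band spans 82 scale points, which implies a
# standard error of measurement of roughly 82 / (2 * 1.96),
# i.e. about 21 scale points:
low, high = confidence_band(340, 82 / (2 * 1.96))  # -> ~(299.0, 381.0)
```

Read this way, the 82-point listening band versus the 30-point writing band translates directly into a larger standard error for the listening score.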
E.V. has a total proficiency level of 4.1 on a six-point scale. This score puts her at the very beginning of the Expanding stage. However, E.V. is a student who needs differentiated support because she has some language domains at significantly lower levels and others at higher levels; 4.1 is the composite score and does not mean that E.V. is capable of Expanding-level standards and objectives across the board. The overall score is weighted 35% reading, 35% writing, 15% listening, and 15% speaking. E.V. scored at Level 5 in both listening and reading. These language domains' receptive focus means that they develop faster in students whose language exposure is a balance of all four domains. E.V.'s progress with English language acquisition is similar to that of a native speaker and suggests a healthy amount of underlying competency. The writing score was slightly lower at 4.2, which is expected since productive skills are more difficult (Freeman & Freeman, 2011). Detailed writing task scores suggest that E.V. has weaknesses in some content-area vocabulary and language control.
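Applying the 35/35/15/15 weights to E.V.'s scale scores illustrates how the composite is assembled. This is a sketch: the weights appear on the score report, but the exact rounding rules and the conversion from composite scale score to the 4.1 proficiency level are assumptions.

```python
def composite_scale_score(listening, speaking, reading, writing):
    """Overall composite as a weighted average of the four domain
    scale scores: reading and writing count 35% each, listening and
    speaking 15% each (weights from the score report; rounding rules
    are an assumption)."""
    return (0.35 * reading + 0.35 * writing
            + 0.15 * listening + 0.15 * speaking)

# E.V.'s domain scale scores reproduce her reported composite of 330
# to within rounding:
composite = composite_scale_score(listening=340, speaking=307,
                                  reading=328, writing=337)  # -> ~329.8
```

The heavy literacy weighting explains why her low speaking score drags the composite down relatively little, while a comparable dip in reading or writing would have cost far more.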

The most interesting score on E.V.'s ACCESS for ELLs evaluation is her 2.4 proficiency level in speaking. This score is significantly lower than all other task measurements and suggests that something about the speaking assessment highlighted a language weakness of hers. Detailed charts on the lower half of the teacher report indicate that E.V. did well on the social & instructional section of the speaking items (2/3 points) but did poorly on the language arts/social studies and mathematics/science sections (2/5 points for both). It may be that E.V. had enough time during content-area writing to think about what words to use; time may have allowed her to generate multiple content-area words for the same idea when she could not remember an exact word. The speaking tasks naturally allow less time, since responses are normally given within a shorter period than writing. It seems that E.V. was not able to actively retrieve the content-area vocabulary needed when faced with immediate response demands.
E.V.'s scores suggest that in the areas of listening and reading her current instructional level is Level 5 (Bridging). Instruction and curriculum decisions for her English growth should reflect the WIDA performance definitions for Level 5. E.V.'s instructional level for writing is on the low side of Level 4 (Expanding). Within the standards provided for Level 4, E.V. should be challenged to accomplish the easier tasks before being expected to perform on the more difficult objectives. Some Level 3 (Developing) standards may need additional reinforcement or practice in order to bridge successfully into Level 4. E.V.'s instructional level for speaking, according to the ACCESS for ELLs assessment, is Level 2 (Emerging). The confidence band spectrum indicates that her scores are most likely on the lower side of Level 2. Instruction in the area of speaking should focus on meeting Level 2 WIDA performance standards in all content areas and language domains.

Student #2: A.M.

A.M. is also a third-grade student in a small city in Virginia. A.M. and E.V. attend different schools and are therefore exposed to different teaching methods and instructional techniques. A.M.'s ACCESS for ELLs raw scores are reported in the table below.

A.M.'s raw scores (reported as number correct / total number of items; "—" marks entries not legible in the original report):

Comprehension (Listening and Reading)
    Social & Instructional Language     4/10
    Language of Language Arts           —
    Language of Mathematics             —
    Language of Science                 —
    Language of Social Studies          —
Speaking Tasks
    Social & Instructional Language     2/3
    Language Arts/Social Studies        1/5
    Mathematics/Science                 1/5
Writing Tasks (each rated out of 6 on three rubric dimensions)
    Social & Instructional Language     Linguistic Complexity 2, Vocabulary Usage 2, Language Control 2
    —                                   Linguistic Complexity 1, Vocabulary Usage 2, Language Control 1
    —                                   Linguistic Complexity 2, Vocabulary Usage 2, Language Control 1
    Language Arts & Social Studies      0

These raw scores detail a snapshot of how well A.M. did on this day on this particular version of the ACCESS for ELLs assessment. WIDA cautions against using these scores as a comparative measure for tracking student growth or measuring a student's level against other students (WIDA, 2011, p. 5). A.M.'s raw scores as reported above suggest a consistent deficiency in content-area knowledge: in comprehension, speaking, and writing, the scores for language domains combined with content areas are the lowest. Social and Instructional Language scores in speaking (2/3) and writing (2/6 in all areas) are the highest scores reported. A.M.'s listening and reading score, even in the area of social and instructional language, is very low (4/10), especially considering his performance in the same content area while speaking and writing. This peculiarity will be discussed in more detail later.
A.M.'s scale scores reflect his English proficiency on the ACCESS for ELLs vertical scale. They are as follows: listening (325), speaking (285), reading (207), writing (299), oral language (305), literacy (253), comprehension (242), and composite/overall score (269). These scores have been adjusted using WIDA's system of analysis and are more reliable for determining student level and growth (WIDA, 2011, p. 6). While E.V.'s score standard deviation was .16, A.M.'s is .52. The larger standard deviation reflects less consistent performance across language domains and content areas. Comparing these two standard deviations indicates that E.V. and A.M. are at very different instructional levels, with perhaps unique backgrounds and exposure to English. This is true despite their overall composite scores being relatively close in number (A.M. = 269 and E.V. = 330) on a 600-point scale.
Confidence band graphs report the accuracy of A.M.'s performance scores relative to his actual ability. The confidence bands of individual language domain scores indicate that the reading assessment was the least reliable because it has the largest confidence band range, at 106 points. The writing assessment has the highest rate of reliability, with a point spread of only 42 points. A.M.'s wide range of scores across language domains makes face-value interpretation of composite scores more difficult, since it is not transparent which portions are skewing the total.
A.M. has a total proficiency level of 1.9 on a six-point scale. This score puts him at the end of the Level 1 (Entering) stage. However, this overall level should be viewed with caution due to some of the peculiarities in A.M.'s individual language domain proficiency scores. A.M.'s listening performance received the highest score. The 4.0 in listening suggests that this student has a good grasp of receptive English skills and was able to perform fairly well even with content-area integration. This score is not surprising, since listening is usually the most advanced language domain compared to reading, writing, and speaking. At this point, however, A.M.'s scores take on a strange pattern. His reading score, the other receptive language domain, is a low 1.9. In some beginner students, high listening scores and low reading scores reflect low literacy levels and inexperience with text. This could be the case with A.M. except for the presence of a high writing score. Although the writing task is more open-ended and enables students to choose words carefully over time, it should still reflect the student's ability to produce vocabulary, spelling, and grammar features that overlap with reading. It is normative for reading skills to be far more advanced than writing skills, since words are given during reading and students are not required to produce them. What makes these scores even more confusing is that A.M. scored a 1.9 on speaking. The detailed chart at the bottom of the teacher report suggests that the student was not able to generate content-area academic vocabulary (Language Arts/Social Studies: 1/5, Mathematics/Science: 1/5). Why was the student able to meet content-area vocabulary standards on written tasks but not on reading or speaking tasks?
The strange pattern of A.M.'s scores calls into question the validity and reliability of the ACCESS for ELLs assessment. If both the listening and reading scores were low, it might be that the student was not prepared for standardized testing, since both of these sections use selected-response formatting. However, in this case the student did well on the listening section, suggesting that the issue was not testing preparedness. Since the speaking and writing portions use different assessment styles and may vary in content area depending on the questions, this may account for the difference in productive skill scores. However, A.M. also scored very low on reading, which has content and language domain overlap with the other two areas. Sample tasks provided by WIDA seem to balance response methods of choosing pictures or words across all of the language domains, negating picture supports as a possible interference in test reliability (WIDA, 2008). It is possible that the student randomly received content-area questions only on the listening and writing sections that happened to match his knowledge, resulting in irregular scoring across the language domains; however, if this were the case, the whole WIDA assessment could be questioned, since it is supposed to balance for these types of errors. If some of the WIDA performance standards had been covered in class at length while others were ignored, there may be a discrepancy in how students perform on the test, but it is hard to believe that a teacher's instructional choices could skew test results to this extent. It may also be that, since the reading score has the largest confidence band point spread, the reading score alone falls within the 5% of error and is the only faulty score. If the reading score is not accurate and is actually higher than reported, the high listening and writing scores would be more understandable; the lower speaking score would also make sense given the type of speaking assessment used on the ACCESS for ELLs test. If this is the case, which seems to be the most likely scenario, the presence of a faulty reading score also calls into question the overall validity and reliability of the ACCESS for ELLs assessment.

E.V.'s ACCESS for ELLs scores give suggestions about content areas and language domains in which she needs additional practice and support. In her case, the subject areas of Mathematics and Social Studies have some of the lowest performance scores. As discussed earlier, these subjects can be more abstract, which may present cognitive and linguistic difficulty. Explicit content-area vocabulary instruction in both Mathematics and Social Studies, in the form of word brainstorms, picture cards, read-alouds, and tangible experiences, should be used to teach language specific to these areas (see Appendix A). Adaptations of Stauffer's (1970) Language Experience Approach may help E.V. transition from concrete knowledge to abstract knowledge. This approach activates prior knowledge, immerses students in a content-rich experience, explicitly teaches vocabulary related to that experience, teaches organizing strategies, and then engages students in reading and writing about the knowledge gained from that experience. There are various adaptations of this method which have been successful in helping ELL students transition to grasping and using academic language (Huang, 2013; Landis, Umolu, & Mancha, 2010; Oldrieve, 2012).
Additionally, E.V. should be given increased opportunities to practice English productive skills, especially content-area speaking. Being able to use academic language in conversation is a difficult skill, since it requires increased grammatical and lexical complexity. Classrooms often do not give students ample opportunities to use the vocabulary words they have learned, and this is particularly true in the area of speaking. Instruction could be altered to use content-area debates, presentations, news reports, or role-play to allow E.V. to practice academic vocabulary (see Appendix B). Teachers may also choose to place restrictions on classroom conversation in order to make it a more academic environment; by encouraging professional talk, these language domain weaknesses may be supported. The ACCESS for ELLs scores may indicate that E.V. could also benefit from more timed language production tasks. By setting time parameters on debates, news reports, or role-play activities, students will have to use greater amounts of working memory to produce language. This experience of production under pressure may increase E.V.'s performance on subsequent ACCESS for ELLs speaking items.
E.V. would benefit most from formative assessments that integrate content-area knowledge with the productive language domains. Prior to taking the test again next year, it is important that E.V. be exposed to tasks which require her to use both language and content knowledge so that she can become aware of weaknesses and grow stronger in those areas. Speaking and writing tasks which allow E.V. to discuss WIDA content-area standards would enable her to practice meeting both expectations simultaneously (see Appendix C). Curriculum should be altered to make sure that Mathematics and Social Studies in particular use productive types of assessment rather than receptive ones. This may require shifting assessment types in order to meet E.V.'s particular needs. The teacher may choose alternative assessments in content areas so students are also able to practice language proficiency standards.
Prior to making teaching or assessment recommendations for A.M., additional assessment data needs to be collected. The ACCESS for ELLs assessment results raise serious questions about A.M.'s actual abilities and instructional levels. If possible, the school or teacher should use available standardized English proficiency tests or alternative assessments to triangulate results with the ACCESS for ELLs scores (see diagram below). Teachers who work with A.M. should collect a portfolio of student work to present as an additional snapshot of his capabilities. By approaching the student from multiple angles, a clearer picture can hopefully develop of his actual needs. Based on the ACCESS for ELLs alone, it seems that A.M. could benefit from continued instruction in academic language. Content-area vocabulary and knowledge should be a targeted goal of differentiation for A.M.

[Diagram: assessment report]



Educational Program Proposals

Both A.M. and E.V. highlight the common need of ELL students for increased
exposure to and practice with academic language. This need often becomes most acute
beginning in third grade, when content-area subjects grow increasingly abstract and dependent
on nonfiction texts. ELLs may lack the nonfiction processing strategies needed to understand
this new type of language. It is important that teachers not only teach content but also teach
strategies that help ELL students break down academic language so that they can process and
apply content. The Question Answer Relationship (QAR) technique is a strategy that uses four
question types to help students break down content-related language (Raphael & Au, 2005).
Schema theory helped Keene & Zimmerman (2007) form the basis of their connection strategy,
which suggests students connect text to text, text to self, and text to world. Teaching these types
of content-area comprehension strategies can assist ELLs in handling both content and second
language demands.
Academic language is a unique register of English in itself and requires strategic instruction.
E.V. in particular demonstrates how a lack of academic vocabulary can negatively impact overall
ACCESS for ELLs English proficiency scores. In order to help students reach WIDA objectives,
academic vocabulary needs to be taught in a way that recognizes students' cognitive limitations
and connects content-area experiences to language (Freeman & Freeman, 2011). Best practices
for ELLs suggest that academic vocabulary be taught through varied experiences, in coordination
with learning strategies such as using suffixes and prefixes, and in a way that fosters students'
interest in the meaning of words themselves (Freeman & Freeman, 2011).
Additionally, both students highlight the importance of differentiation within ELL
curriculum. Although it is difficult to draw conclusions from A.M.'s ACCESS for ELLs
assessment report, it is likely that his language domain and content-area knowledge vary. E.V.'s
scores more clearly show that her language domain skills affect performance in specific content
areas and vice versa. To accommodate her various instructional levels, teachers need to use
activities which are easy to differentiate so that students stay within their zone of proximal
development. The areas in which E.V. is approaching Level 6 (Listening and Reading)
should not be ignored simply because they are already high. Best practice would advocate
advancing even those skills which are already strong while working to bring her other language
domains up to the Bridging level. Since individual students' language acquisition progress varies
depending on a wide variety of factors, differentiation is commonly considered one of the most
important best practices for ELLs.

References

Cottage on Blackbird Lane. (2008, November 24). 3rd grade biography project. Retrieved from
Diply. (2015). 11 awesome science experiments your kids will love to try. Retrieved from
Fox, J., & Fairbairn, S. (2011). ACCESS for ELLs. Language Testing, 28(3), 425-431.
Freeman, D.E., & Freeman, Y.S. (2011). Between worlds: Access to second language
acquisition. Portsmouth, NH: Heinemann.
GingerSNAPS: Titbits and Treats for Teachers. (2013, January 3). Resolutions and the
revolutionary war. Retrieved from
Hippo Hooray for 2nd Grade! (2012, November 8). What's the scoop? Current events. Retrieved
from http://www.hippohoorayforsecondgrade.com/2012/11/whats-scoop-current-events-and-public.html
Huang, J. (2013). Bridging authentic experiences and language skills through the language
experience approach. MPAEA Journal of Adult Education, 42(1), 8-15.
Keene, E.O., & Zimmerman, S. (2007). Mosaic of thought: The power of comprehension
strategy instruction. Portsmouth, NH: Heinemann.
Kenyon, D.M., MacGregor, D., Li, D., & Cook, H.G. (2011). Issues in vertical scaling of a K-12
English language proficiency test. Language Testing, 28(3), 383-400.
Landis, D., Umolu, J., & Mancha, S. (2010). The power of language experience for cross-cultural
reading and writing. The Reading Teacher, 63(7), 580-589.
Leighfreed. (2011, October 24). Language experience approach. Retrieved from
Mrs. Lee's Kindergarten. (2011, October 23). Pumpkins. Retrieved from
Mrs. Lisa's Pre-K Crew Rocks! (2011, October 26). Retrieved from
Nyla's Crafty Teaching. (2015, May 14). Classroom grocery math. Retrieved from
Oldrieve, R. (2012). Magic paper: Using a modified language experience approach to teach sight
words and vocabulary. California Reader, 46(2), 29-34.
Pinkadots Elementary. (2014, January 25). Current events, informational text, note taking, and
speaking skills! Motivation linky! Retrieved from
Raphael, T.E., & Au, K.H. (2005). QAR: Enhancing comprehension and test taking across
grades and content areas. The Reading Teacher, 59, 206-221.
Spice up your words with Cinnamon's Synonyms. (2015, January 19). Solids and liquids part 2:
Making oobleck. Retrieved from http://cinnamonssynonyms.blogspot.ca/2015/01/solids-and-liquids-part-2-making-oobleck.html
Teacherific in 2nd Grade with Mrs. Concepcion. (2013, September 6). Retrieved from
The Lemonade Stand: Turning Lemony Standards into Sweet Success. (2012, May 21). Math
methods class. Retrieved from http://thelemonadestandteachers.blogspot.com/2012/05/math-methods-class.html
The Math Maniac. (2014, November 3). Monday math literature: The difference model for
subtraction. Retrieved from
The Reading Bungalow. (2015). Vocabulary study. Retrieved from
World-Class Instructional Design and Assessment (WIDA). ACCESS for ELLs interpretive guide
for score reports spring 2011. Retrieved November 11, 2014, from
World-Class Instructional Design and Assessment (WIDA). ACCESS for ELLs listening, reading,
writing, and speaking sample items 2008, Grades 1-12. Retrieved November 11, 2014,
from https://www.wida.us/assessment/access/access_sample_items.pdf
World-Class Instructional Design and Assessment (WIDA). Grades 3-5 Can Do
Descriptors. Retrieved November 30, 2014, from
Appendix A
Content-Area Vocabulary Activities

Appendix B
Content-Area Speaking Activities

Appendix C
Content-Area Speaking and Writing Assessments