Assessing the Informal Science Learner: How Environmental Educators Can Understand
What Learners Have Gained from Park and Nature Center Programs
Emily Martin
IT 590
Fall 2013
Abstract
This paper explores the use of assessment in informal science settings, such as environmental
education programs offered at parks and nature centers. It discusses the issues faced by
environmental educators who often teach a diverse and self-selected group of learners, and asks
whether certain assessment methods tend to work better in informal learning environments. This
paper examines literature on assessment in environmental education and offers a plan for further
research to determine the most reliable, effective, and appropriate measures of tracking learning gains in informal science settings.
Table of Contents

Introduction
Literature Review
Study Rationale
Method
    Participants
    Instruments
    Procedure
References
Appendix A: Interview Questions
Appendix B: Focus Group Report
Introduction
Since the 1970s, the environmental education movement has shifted its focus from alarm
sounding to relationship building. Many state and national parks and nature centers such as those
run by the Audubon Society aim to foster a sense of love and respect between visitors and the
great outdoors. But with these visitors often playing the role of one-time learners, attending a
weekend program while on vacation or during leisure time, park and nature center educators
have a difficult time assessing the effectiveness of their programs (see Appendix B). Visitors rarely return to the same program, so it can be extremely difficult to track the progress of these learners over time. Even short-term
learning gains are hard to track because the casual setting does not lend itself to traditional forms
of assessment, such as written exams. If visitors feel criticized or bored, they may decide to leave
or, worse yet for environmental educators, they may decide to stop attending park and nature
center programs altogether. Because of the nature of informal science learning, educators often have to work with little more than anecdotal data. A standard and reliable method of assessment is not widely known or available to
environmental educators.
The desired state of affairs for assessment in environmental education would involve
having a reliable means of tracking learning gains and/or behavior changes in learners who
attend park or nature center programs. Further, it would be desirable to find a method of
assessment that could be reconstructed and shared across many environmental education
programs.
The needs gap exists where informal science educators and organizations have the desire to measure affective and cognitive outcomes, but the standard for doing so is not clear. If these educators had a reliable
and standard way to assess learning, not only would they be able to see which programs were
meeting or missing instructional goals, they might also have stronger data to back up grant
applications and other sources of funding. Without a clear form of assessment, however,
environmental educators are left to rely on post-program surveys and anecdotal information to gauge the effectiveness of their programs.
The purpose of this study is to identify current means of assessing learners in informal
science settings, determine which methods of assessment have been successful, and decide if any of these methods could be standardized and shared across environmental education programs.
Literature Review
Philip Bell (2009) and others on the National Research Council's Committee on Learning Science in Informal Environments said some environmental educators have decided to "eschew formalized outcomes altogether and to embrace learner-defined outcomes instead" (p. 3). But Bell et al. proposed an alternative to choosing purely academic or subjective goals for informal learning environments.
The authors suggested six "strands" of science learning that could guide educators and designers when thinking about learning goals, which say that learners should:
"Strand 1: Experience excitement, interest, and motivation to learn about phenomena in the natural and physical world.
Strand 2: Come to generate, understand, remember, and use concepts, explanations, arguments, models, and facts related to science.
Strand 3: Manipulate, test, explore, predict, question, observe, and make sense of the natural and physical world.
Strand 4: Reflect on science as a way of knowing; on processes, concepts, and institutions of science; and on their own process of learning about phenomena.
Strand 5: Participate in scientific activities and learning practices with others, using scientific language and tools.
Strand 6: Think about themselves as science learners and develop an identity as someone who knows about, uses, and sometimes contributes to science" (p. 4).
How can educators measure the realization of these strands? Bell et al. (2009) said it
probably won’t be through multiple choice tests, or other methods that are “disruptive” to the
often one-time leisure experience offered by informal science settings (p. 56). They suggested
measuring each strand differently. For instance, Strand 1 could rely on self-reporting, such as
asking the target audience to describe their level of interest before and after attending the science program; other options include observing physical responses such as posture, or collecting verbal feedback from learners. But Bell et al. admitted that none of these methods are foolproof and each, especially self-reporting, has great potential for
bias.
The authors (2009) suggested that Strand 2 might be best suited for "traditional" forms of assessment, such as recording learners' answers to factual recall questions, but they warned against these methods, saying they may lead to learners feeling frustrated or incompetent and thus wanting to
avoid the setting in the future. Instead, Bell et al. suggested that Strand 2 might be measured by
using cognitive or meaning mapping as a program activity that would let learners show the
various facts and concepts they learned in relation to the topic being taught. This activity could
provide a record of learning without being obtrusive, the authors said. They also said Strand 2 learning might be documented by analyzing learners' conversations and conducting focus groups with members of the target audience.
The authors (2009) further explained assessment possibilities for each of the other
strands, many of them focusing on observing learners and recording their immediate reactions
and behaviors, or setting up long-term studies with groups that represent the target audience. Bell
et al. concluded by saying informal science environments are known to be flexible and open to
the needs of the community. However, even though they are places of recreation, the authors said informal science environments should still have standards for learning outcomes. The strands of
science learning offered by Bell et al. were meant to steer the conversation away from
“traditional” forms of assessment used in K-12 settings and toward a new way of documenting
learning in the diverse world of informal science education. The authors said more work needs to be done in this area.
Alida Kossack and Franz X. Bogner (2012) asked in their study how a one-time
environmental education program could bring about long-term learning in students. More
specifically, the authors asked how educators could track an individual’s “connectedness to
nature,” a phenomenon which they said leads to positive conservation behaviors (p. 180). Their
answer was something called the "Inclusion of Nature in Self" (INS) scale, designed by P. W. Schultz in 2001.
According to the authors (2012), the INS scale allows learners to self-report how close
they feel to nature before and after attending an environmental education program. The scale
shows a series of paired circles, one labeled "self" and the other labeled "nature" (p. 181). Learners
choose one of seven sets of these circles, from the first two that do not overlap to the last two
that overlap completely. The set is divided into three groups: low, medium, and high connectedness. Learners complete the scale before attending an environmental education program, immediately after, and again seven weeks after the program.
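To make the scale concrete, here is a minimal sketch of how INS responses might be banded and compared. The band cutoffs (1–2 low, 3–5 medium, 6–7 high) and the helper names are illustrative assumptions, not Schultz's published scoring rules.

```python
# Illustrative sketch of INS scoring; the band cutoffs are assumed for
# demonstration and are not Schultz's published procedure.

def ins_band(choice: int) -> str:
    """Map a 1-7 circle-pair choice to a connectedness band."""
    if not 1 <= choice <= 7:
        raise ValueError("INS choice must be between 1 and 7")
    if choice <= 2:
        return "low"
    if choice <= 5:
        return "medium"
    return "high"

def connectedness_change(pre: int, post: int) -> int:
    """Positive values mean the learner reports feeling closer to nature."""
    return post - pre

# Example: a learner moves from circle pair 2 (low) to pair 5 (medium)
# between the pre-program and immediate post-program administrations.
print(ins_band(2), ins_band(5), connectedness_change(2, 5))  # low medium 3
```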
Kossack and Bogner (2012) conducted a study with 123 sixth-grade students in Germany
(and 113 additional sixth-grade students from the same school in a control group). The students
attended a one-day science program outdoors about the changing of forest ecosystems over time.
They found that individuals who reported a low level of connectedness before attending a program reported higher levels of connectedness after a program if it was highly experience-based, such as through the use of games and social activities. But individuals who reported a low level of connectedness showed little change after a program if the lesson focused on building cognitive skills, such as memorization of terms. Individuals who reported high levels of connectedness before a program were more likely to report high levels of connectedness after attending a program. Overall, the intervention group reported more positive changes in connectedness both immediately after and seven weeks after attending the program, whereas the control group reported no comparable changes.
Based on their findings, Kossack and Bogner (2012) suggested that the INS scale should be administered before a program to gauge learners' existing connectedness to nature in order to customize programming to learners' needs (i.e., providing more experience-based activities for those who report low-level connectedness). The authors suggested that by using the INS scale before a program, educators could attempt to control learners' feelings of connectedness to nature after the program. They admitted, however, that this method would be most suited for school groups coming to a park or nature center for a field trip, or for other audiences that educators can reach again after the program.
Julie Ernst and Stefan Theimer (2011) also discussed individuals' perceived connectedness to nature. They conducted an exploratory quantitative study that looked at the effects seven environmental education programs had on that sense of connectedness.
The authors (2011) said that "connectedness to nature" is not something researchers have come up with; rather, this theme has been adopted by many organizations as a guiding environmental education principle. The U.S. Fish and Wildlife Service, for example, names "Connecting People with Nature" as one of its six top priorities (p. 578). Ernst and Theimer sampled seven USFWS programs, with 3rd-6th graders from the region as participants and a control group from each grade level of the same elementary schools. They pre- and post-tested participants in these programs using two instruments for construct validity: Cheng's Children's Connection to Nature Index, which asked participants to rate their agreement (from strongly agree to strongly disagree) regarding 16 statements, such as "Humans are a part of the natural environment" (p.
588); and Mayer and Frantz’s Nature Connectedness Inventory, which asked 11 questions about
participants' feelings of belonging to the natural world. Out of the seven programs evaluated, most did not show a significant increase in participants' connectedness scores.
Ernst and Theimer (2011) concluded that this outcome could have occurred for a number of reasons: participants could have hit a "ceiling" as far as how connected they feel to nature, the program content was varied and each could have affected learners differently, or participants' feelings of connectedness could have been more cognitive or emotional in nature, meaning they could have felt connected in a way the study's scales did not measure (p. 595).
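As a rough illustration of how a Likert-based index like Cheng's might be scored, the sketch below averages one participant's responses into a single index and compares the pre- and post-program values. The five-point coding and the sample numbers are assumptions for demonstration; real instruments may also reverse-score negatively worded items, which this sketch omits.

```python
# Hypothetical scoring of a 16-item Likert instrument; responses coded
# 1 (strongly disagree) to 5 (strongly agree). All values are invented.
from statistics import mean

pre_responses = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4, 3, 4, 5, 3, 4, 4]
post_responses = [4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4]

pre_index, post_index = mean(pre_responses), mean(post_responses)
print(f"pre = {pre_index:.2f}, post = {post_index:.2f}, "
      f"gain = {post_index - pre_index:+.2f}")
```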
Erin Redman (2013) described a different outcome that environmental educators want to measure: how likely learners
are to behave “sustainably” after attending a program, such as wasting fewer household
resources and eating differently (p. 1). Redman’s study involved presenting a summer program
on food and waste habits to a small group of learners (6 initially, 3 long-term) and then tracking
their behavior changes over a year’s time. Redman used a pre- and post-survey, which combined
open-ended and Likert-scale questions, to determine the types of foods participants ate (e.g.,
meat, organic or bagged vegetables), where they shopped (e.g. farmers’ markets, grocery stores),
and what kinds of household items they reused, recycled, or threw in the trash. Redman said the
questions were separated by topic (food or waste) and also by four knowledge domains:
declarative, procedural, effectiveness, and social. Her study measured changes in these domains
along with behavioral changes reported over a year. Long-term behavioral changes were
measured by qualitative methods including journals and interviews (with participants as well as their family members).
Redman (2013) found through comparing pre- and post-survey results that the biggest
knowledge change was in the food declarative knowledge domain, but the biggest change overall
was in waste behavior. Redman said the reason for this positive trend could be that social norms regarding waste, such as knowing how and why to recycle, are more easily changed, whereas food behavior, such as switching from a meat-based to a vegetable-based diet, could be more difficult to change socially, with limited vegetarian or local food options depending on where a person lives.
Cultural values also play a large role in food choices, including meat consumption.
Redman (2013) also emphasized the importance of social context. Each participant in the year-long study revealed through journals and interviews that they were more
likely to stick to a sustainable change, such as using cloth napkins instead of disposables, if the
people around them took part in the change as well. Redman said tracking these kinds of long-
term behavior changes was possible through the use of qualitative methods such as journaling,
interviewing, and other techniques that allowed participants to give open-ended answers and
relate the questions specifically to their lives. But Redman admitted that the study should be conducted again with a larger sample to get a more diverse pool of participants and more generalizable results.
Finally, Monzack and Petersen (2011) studied the effectiveness and assessment methods
involved in informal science learning. They presented a movement-based activity about the human heart, which asked participants to walk along a floor map depicting the chambers and vessels of
a cardiovascular system as if they were the blood pumping through the system. The researchers
welcomed participation from audience members in two different types of settings: a place where
people expected to learn science (a science fair) and a place where people did not necessarily
expect to learn science (a charity running event and the entrance to a Wisconsin State Fair).
Through a card sorting and labeling assessment activity, Monzack and Petersen sought to determine if informal science learning could produce similar gains in expected places, where learners are self-selected, as in unexpected places, where learners did not necessarily have the intention of learning science.
The authors' (2011) results suggested that regardless of whether people were seeking out these
experiences, individuals could learn science in informal settings and they could be assessed
through activities. At each event – in the expected setting and in both of the unexpected settings
– participants showed significant increases in post-test scores, which measured factual recall of
terminology as well as conceptual knowledge of the heart’s function within the human body. By
starting and ending the movement-based cardiovascular activity with card sorting and labeling
activities that dealt with the human heart, not only were participants learning more about the
subject, they were also being assessed on their cognitive understanding of how the heart
functions.
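As one sketch of how such pre/post gains could be checked for significance, the snippet below runs a paired t-test on invented card-sorting scores. The data and the choice of test are assumptions for illustration; the authors' actual statistical procedure is not described here.

```python
# Paired comparison of invented pre/post card-sorting scores; this is
# an illustrative analysis, not the authors' published method.
from scipy import stats

pre_scores = [4, 5, 3, 6, 4, 5, 2, 5]   # correct cards before the activity
post_scores = [7, 8, 6, 9, 7, 8, 5, 9]  # correct cards after the activity

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"mean gain = {mean_gain:.2f} cards, t = {t_stat:.2f}, p = {p_value:.4f}")
```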
Study Rationale
The literature reviewed here suggests that assessing learners in informal science settings is difficult. Part of that difficulty stems from affective learning outcomes being unclear, as Ernst and Theimer (2011) suggested with the "connectedness" theme that guides many environmental organizations but carries a different meaning for each. When behavior change is the desired outcome, long-term studies such as Erin Redman's (2013) may be the best answer. Bell et al. (2009) and the National Research Council's committee suggested that assessment can be done with representative groups of informal science learners rather than through every program offered.
When learning outcomes are cognitive in nature, planned activities like card sorting may serve as unobtrusive assessments, as suggested by Monzack and Petersen (2011). The literature suggests that educators may not need to rely on "traditional" forms of assessment, such as paper exams, to assess learners in these settings. Educators may find that assessing members of the target audience through planned research studies, or using activities such as card sorting as pre- and post-assessments, offers less intrusive forms of assessment that work well in park and nature center programs while still measuring learning gains.
This research aims to answer the following question: What reliable and repeatable
assessment tools could be used in parks and nature centers across the United States?
Method
Participants
Participants for this study will be full- or part-time environmental educators working for
state or national parks or nature centers in the United States. Income and gender are not
important for this study, but ideal participants will have used assessment tools to track learning
gains in the past twelve months. They will be recruited through the following online communities: the National Association for Interpreters e-news and listserv (http://www.interpnet.com/), and the LinkedIn Conservation & Environmental-Education-Resource-Network group (group ID 2181156).
The recruitment message will include a description of the study along with a call for
educators to improve assessment efforts in their field by sharing methods that have worked for
them. The message will also explain that chosen participants who complete the entire process of
screening and interviewing (a 2-hour time commitment) will receive a $50 gift certificate to the Acorn Naturalists online shop. Participants will be environmental educators at state or national parks, or
nature centers, who use assessment methods to track learning gains. The reason for tracking
these gains may vary by location. Educators who rely on learning data to fund their organization
may be more ideal participants in this study because they have likely tried different forms of assessment already.
Instruments
An online survey will be used to screen initial volunteers found through the online
communities mentioned above. Participants who answer that they have worked in informal
science settings (parks and nature centers) and have used assessment methods to track learning gains will be selected for the interview phase.
The interview is the first phase of this study. It is designed to gather information about
how informal science educators use assessment and to select sites for further program evaluation,
as described later in this paper. In order for participants to feel more comfortable, and in hopes of
capturing richer feedback, the interviews will be conducted in a group setting. Participants will
join the interviewer in a web conferencing room on a selected day and time. Several meeting
options will be outlined depending on availability, which will be determined through the
invitation email and subsequent messages. The interviewer will moderate the discussion and will
describe to participants how to use the microphone and text features in the web conferencing
platform. During the interview, participants will be asked most questions as a group while
everyone offers individual answers, but some questions will be asked anonymously via the web-conferencing platform's chat feature.
The next phase of this study will evaluate the effectiveness of specific assessment
methods mentioned in the interview session. Participants will be contacted from the interview
group who answered “yes” to “Are you interested in taking part in further research studies
regarding how your organization uses assessment?” Two sites will be selected for this part of the
study. As described further in the sections to follow, learner assessment and satisfaction data will
be evaluated alongside program objectives to determine which methods were most successful in
producing learning gains as well as in remaining non-invasive for the vacationing learner.
According to the Handbook of Practical Program Evaluation (Wholey, Hatry, & Newcomer, 2010), small-sample studies can be used to estimate program effectiveness and test
performance measures that will be used in evaluation. In this study, the small sample will involve
learner satisfaction and assessment data from two environmental education sites that participated in the interview phase.
The purpose of the small-sample study will be to estimate program effectiveness in terms
of assessment methods used at the site and reported learner/visitor satisfaction and learning
gains. Not only is this a low-cost way to produce findings about the effectiveness of assessment
methods used in environmental education, a small-sample study would also provide insight into how individual sites approach assessment.
Procedure
Selected participants will provide completed learner satisfaction surveys, assessment data,
and lesson plans from each informal science lesson they present. Researchers will then sample
from within these data sets. Learner/visitor assessment data (quizzes, audience polling results,
etc.) will first be grouped with satisfaction surveys and lesson plans from corresponding
educational events. Then each set will be assigned a number. The data sets will then be randomly sampled using a random number generator.
Access to the organization’s detailed lesson plans will help determine if learning
objectives are being met and if the assessment methods are designed to accurately measure the
objectives. Satisfaction surveys, which may need to be modified for this study to include
questions about assessment, will illustrate whether learners' attitudes about being assessed are positive, negative, or neutral.
At least 25 sets of data will be collected at each site over a 3-week period. The study will
last approximately 4 months, from October 1, 2014 – February 1, 2015, to allow time for data collection and analysis.
Figure 1: Timeline
- Talk with site educators and other key personnel to determine sample numbers: 2 days
- Collect data sets at each site: 3 weeks
- Analyze data: 2 months
- Follow up with site educators by phone and in person: monthly
- Write final report, edit drafts, and share with stakeholders: 3 weeks
The dates were chosen around off-seasons for many parks and nature centers (after Labor
Day). By conducting the study after summer, it is more likely that educators will have the time to participate.
The qualitative data collected from the small-sample study will be analyzed by first
coding the assessment data, visitor satisfaction survey results, and lesson plan by educational
event. For example, for an event on bird migration, each of those three sets of data will be given
a numeric ID and organized together as one unit. Once data has been collected for each of the
site’s educational events, a random number generator will be used to choose samples from the
entire selection.
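To illustrate the grouping-and-sampling step described above, the sketch below pairs each event's three data sources under one numeric ID and draws a random sample of events. The data structures, seed, and sample size are assumptions for demonstration only.

```python
# Hypothetical sketch: group each event's assessment data, satisfaction
# surveys, and lesson plan under one ID, then sample events at random.
import random

events = {
    1: {"name": "bird migration", "assessments": [...], "surveys": [...], "lesson_plan": "..."},
    2: {"name": "forest ecology", "assessments": [...], "surveys": [...], "lesson_plan": "..."},
    3: {"name": "water quality", "assessments": [...], "surveys": [...], "lesson_plan": "..."},
}

random.seed(42)  # fixed seed so the draw can be reproduced
sampled_ids = random.sample(sorted(events), k=2)  # sample size is assumed

for event_id in sampled_ids:
    print(f"Selected event {event_id}: {events[event_id]['name']}")
```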
Next, the lesson plans for each educational event sampled will be compared to
assessment data to determine if desired learning outcomes were met. Once learning outcomes are
identified and marked as either “met” or “unmet,” further analysis will attempt to find out why.
For instance, objective statements and assessment questions (if applicable) will be analyzed for
clarity and consistency. Trends will be identified in this process regarding the quality of objective
statements (measurability), as well as in the number of met and unmet objectives in each lesson plan.
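A tally along the following lines could support that trend analysis; the coded records shown are hypothetical stand-ins, since the study does not prescribe a data format.

```python
# Hypothetical tally of met vs. unmet objectives; records are invented.
from collections import Counter

coded_objectives = [
    ("bird migration", "met"), ("bird migration", "unmet"),
    ("forest ecology", "met"), ("forest ecology", "met"),
    ("water quality", "unmet"),
]

overall = Counter(status for _, status in coded_objectives)
by_lesson = Counter(coded_objectives)

print(overall)    # Counter({'met': 3, 'unmet': 2})
print(by_lesson)  # met/unmet counts per lesson plan
```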
Then visitor satisfaction data will be compared to learning data to notice any trends
involving the feelings learners have about the program and their assessment results. Open-ended
comments regarding assessment methods will also be analyzed to determine the frequency and
type of comments (positive, negative, or neutral). Observations will be compared with data from educators at each site. If certain methods have proven successful in their work, they will be collected and shared with the focus group who previously shared their experiences with assessment in environmental education (see Appendix B). The focus group's next set of observations and questions will further define an approach to assessment that may be more widely accepted and adopted across environmental education programs.
References
Ernst, J., & Theimer, S. (2011). Evaluating the effects of environmental education programming on connectedness to nature. Environmental Education Research, 17(5), 577–598. doi:10.1080/13504622.2011.565119
Goodman, E., Kuniavsky, M., & Moed, A. (2012). Observing the user experience: A practitioner's guide to user research (2nd ed.). Waltham, MA: Morgan Kaufmann.
Kossack, A., & Bogner, F. X. (2012). How does a one-day environmental education programme support individual connectedness with nature? Journal of Biological Education, 46(3), 180–187. doi:10.1080/00219266.2011.634016
Monzack, E. L., & Petersen, G. M. Z. (2011). Using an informal cardiovascular system activity to study the effectiveness of science education in unexpected places.
National Research Council. (2009). Learning science in informal environments: People, places, and pursuits. Philip Bell, Bruce Lewenstein, Andrew W. Shouse, and Michael A. Feder, editors. Board on Science Education, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2010). Handbook of practical program evaluation (3rd ed.). San Francisco, CA: Jossey-Bass.
Appendix A: Interview Questions

1. How frequently do you use these tools to assess what visitors have gained from your
programs? (poll)
Rate from 1 to 3, with 1 being not at all, 2 being occasionally, and 3 being regularly.
- Questionnaire/Survey
- Observation
- Journaling
- Polling
- Verbal questioning
- Direct testing
2. Please list any assessment tool you use regularly that is not listed above.
3. Which methods do you think give you the best learning data? Why?
5. Is there anything you would change about how your organization uses assessments?
6. Can you think of a time when an assessment went well? What happened?
7. Can you think of a time when an assessment went poorly? What happened?
8. What do you think the “perfect” type of assessment would look like for environmental
education?
9. Would you like to add anything about assessment in environmental education that wasn’t
covered here today? (Can be anonymous if participants choose to use anon. feature in
chat)
10. Are you interested in taking part in further research studies regarding how your organization uses assessment?
Appendix B: Focus Group Report

Audience: Three environmental educators from the Alton, IL area - one male and two females
Product Use: For audience polling, often used with PowerPoint presentations, presented to this
group as a means of assessing learners during informal science programs at parks and nature
centers.
Participants for this focus group were chosen by approaching several educators at a water
conservation event in Alton, IL. People who appeared to be environmental educators (those behind informational booths, or those planning programs) were asked if they would like to take part in a study. If they said yes, the following screening questions were asked:
1. Do you use assessment tools in your environmental education programming?
   i. If yes: continue to next question.
   ii. If no: Thank you for answering my questions. This study is looking for educators who use assessment.
2. Would you be available to meet the following week for a brief focus group?
Four environmental educators agreed to participate in the study, but only three came to
the hour-long focus group on Monday, September 23, 2013 at Lewis and Clark Community
College. We started with introductions, a brief ice-breaking activity where everyone shared their
favorite teaching moment, and an explanation of audience response systems (with photo of
“clicker” device and use with PowerPoint). I then began asking these questions in order:
1. Do you have a favorite type of assessment that you like to use after giving an environmental education program? (All educators who agreed to participate said they use assessment.)
5. Would you use the "clickers" I explained to you earlier?
6. Would you like to say anything else about "clickers" or audience response systems?
7. Would you like to say anything else about assessment tools in environmental education?
I was surprised that even though educators were pre-screened to be users of assessment,
they each seemed reluctant to use anything other than anecdotal/verbal assessment and post-
program surveys, which do more to find out about the success of the program itself than about what individual learners gained.
In question one, each participant stated they preferred an informal type of assessment,
such as having the audience raise hands to show what they believe is true. Participant One said it
was her favorite because it was not too much pressure for visitors and she didn’t want them to
leave her program if they did not know an answer. After she said this, the other educators echoed
her statement.
Participant Two mentioned handing out paper surveys at the end of his program, and that
he is moving toward online surveys for people who provide an email address. I asked him if he
could give me an example of survey questions and he said he has learners rate how interesting
the topic was, what other topics they would like to learn about, and what they felt they learned
from the program. He said he matches what they feel they learned to what he wants to teach in each program.
After hearing statements about not wanting to turn visitors away with public assessment,
I thought about adding an element to my next question: How do you feel about using an
anonymous audience polling system as assessment? But I thought this insertion may be an
attempt to sway participants toward answering a certain way, so I left the question as is, without the anonymous element.
I asked this question directly to Participant Three, who had not yet answered. She
said she might be interested in using audience polling systems, but probably wouldn’t have the
budget to purchase the hardware. I asked if she would use it if the hardware was given to her and
she said maybe. At this point conversation in the focus group began to run dry. The other
participants also said they weren’t sure if they’d use it. Participant One said she wasn’t sure the
audience would know what to do with the hardware and that it might be more trouble than it was worth.
I tried again with the next question specifically about the clickers I had explained to them
earlier. Again, money seemed to be a concern, so I asked if we could think hypothetically about
the hardware being a gift to their organization. Participants answered largely with “maybe” or “I
would try it and see how it worked”, but Participant Two said he liked the idea of clickers being
integrated into PowerPoint presentations and could see it being easier to keep track of assessment data that way.
At the last question, Participant Two said he thinks assessment is hard to do at his work
because most visitors don’t want to fill out surveys. Other participants agreed that it is hard to get
a response from learners. Participant One said she wouldn’t want to push assessment so much
that learners no longer have fun or want to come to their programs. Participant Three said she
could watch visitors during activities or games and see if they understand the concepts she has
taught them.
Although this was a small group of individuals and their thoughts do not represent those
of all or most environmental educators, the responses in this focus group did bring to mind the
idea of cost in assessment, as well as the idea of technology being a “worry” rather than
something helpful. I had not considered that these educators might not want to explore
assessment methods that cost money. It does make sense that most nature centers and parks are
suffering budget cuts and cannot afford anything outside of administrative costs and salaries, making new assessment tools seem out of reach.
Another focus group may want to look at other forms of assessment that do not have to cost money or be conducted in public.
Looking specifically at assessments that are cost-free and not “public” might help the next focus
group move past those concerns and share more about their feelings on how to test learning
gains.
Also, perhaps another focus group could look at free technology, like online assessment
software, while yet another could look at assessments without the use of hardware or software. I
assumed people would be more open to using tools for assessment, such as clickers, but this
group did not appear to be that interested. I’m not sure if it was the idea of technology, or just
this particular tool that inspired the comment about it being troublesome to have to teach learners
how to use the tool during the program. Perhaps more focus groups or a survey could help shed light on this question.