, 2019, Advance Article

Litvak (c) and Li Ye (*a)

(a) Department of Chemistry and Biochemistry, California State University, Northridge, Northridge, California 91330, USA. E-mail: li.ye@csun.edu
(b) Department of Chemistry, Glendale Community College, Glendale, California 91208, USA
(c) Department of Biology, California State University, Northridge, USA
Introduction
Introductory chemistry courses taken by students in science and related
majors focus on fundamental concepts, theories, and laws of chemistry.
They usually cover a broad range of topics, and these topics build upon
one another. Ideally, students learn the topics by relating and integrating
previously acquired knowledge with newly learned material. The links between
prior knowledge and new knowledge help students construct holistic
knowledge structures and learn meaningfully (Ausubel, 1960; Ambrose et al.,
2010). However, students may not make connections among chemistry
topics spontaneously. There is a concern that students treat the topics in
introductory chemistry as discrete pieces of information. As such, students
may rely heavily on memorization, which can further hinder their
learning (Francisco et al., 2002). Assessment plays an essential role in
the curriculum: it not only measures students' mastery of course content but
also signals to students what instructors consider important. Instructors
might use assessment to emphasize particular content or to promote certain
skills they want students to acquire. Depending on the nature of the
assessment in a course, students may orient themselves accordingly,
allocating time and effort to the aspects the assessment emphasizes
(Gibbs and Simpson, 2004).
Learner-centered assessment emphasizes the active role of students in the
assessment process. It promotes a sense of ownership of learning
and shifts the purpose of assessment from measuring what was
learned to getting students to learn while they complete the task at hand
(Webber, 2012; Rich et al., 2014). To promote students'
linking of chemistry concepts in introductory chemistry, we developed and
implemented a learner-centered assessment named Creative Exercises (CEs).
The purpose of CEs is to get students to think about connections among
chemistry topics and to provide opportunities for them to demonstrate the
connections they make (Lewis et al., 2010, 2011). Additionally, students'
responses to CEs can expand the rubric generated by instructors if they meet
the criteria, which brings students into the answer-generating process
of the assessment. The overarching goal of this study is to investigate the
effectiveness of CEs as an intervention for improving student learning and
academic achievement in introductory chemistry.
Theoretical frameworks
Drawing a clear distinction between different types of learning, the
assimilation theory of learning introduced by Ausubel places learning on a
continuum, with rote learning on one end and meaningful learning on the other
(Ausubel, 1960). In rote learning, learners acquire knowledge in
isolation, relying simply on memorization. Knowledge learned this way is
easily forgotten because it is not incorporated into existing
cognitive structures. In contrast, meaningful learning highlights the process
of linking new knowledge to existing cognitive structures (Mayer,
2002; Taber, 2014). To learn meaningfully, both old and new knowledge
must be slightly modified to generate more complex and coherent
cognitive structures. Meaningful learning usually leads to longer concept
retention than rote learning does (Bretz, 2001; Novak, 2010). A graphic
representation of the assimilation theory of learning is shown in Fig. 1.
Additionally, the constructivist model of knowledge provides perspectives on
how learners acquire knowledge. Learners construct understanding in their
minds as a result of making sense of experiences. New knowledge must be
constantly tested and adapted to fit into learners' existing knowledge
structures. As such, learners do not construct understanding until they
coordinate new knowledge with their existing thoughts (Bodner, 1986).
Research purposes
Evidence for the validity of CEs as a classroom assessment in chemistry, and
for students' linking of chemistry concepts through them, has been reported
in the literature, including content and structural validity, relationships
with other chemistry assessments, and generalizability across multiple
graders (Lewis et al., 2011; Ye and Lewis, 2014; Warfa and Odowa, 2015).
According to the assimilation theory of learning and the constructivist model
of knowledge, CEs have the potential to assist students in retrieving prior
knowledge and applying it to new situations, so as to foster meaningful
learning in introductory chemistry. However, the effects of CEs on students'
academic achievement in chemistry courses have not yet been investigated. In
this study, we examined how CEs affect students' academic performance and
retention in introductory chemistry courses in two different college settings.
The effectiveness of CEs can then be compared with that of other
learner-centered assessments, or of assessments designed to help
students make connections in chemistry, such as concept mapping.
Additionally, student perceptions of the use of CEs were explored to help
understand why CEs are or are not helpful for student learning in chemistry.
Another novel aspect of this study is that we show, for the first time,
students' progress in linking chemistry concepts through visual maps.
These visual maps reveal the extent to which students used current and prior
topics while completing the CEs given at different time points
throughout the semester. The study was guided by the following research
questions:
(1) To what extent do Creative Exercises impact students’ performance and
retention in introductory chemistry?
(2) How do students link chemistry topics through answering Creative
Exercises across time?
(3) What are students’ perceptions of the use of Creative Exercises?
Methods
Research design and settings
Fig. 3 Study design and timelines of the implementation of CEs at the two settings.
Data collection
Creative Exercises were used five times at the first setting and nine times at
the second setting. All the CEs used in the study can be found in Appendix 1.
To evaluate the effectiveness of Creative Exercises on student performance
and retention, quantitative data were collected from course records,
including exam scores, letter grades, and DFW rates. Additionally,
qualitative data were collected: students' written responses to CEs and an
end-of-semester free-response survey regarding student perceptions of the
use of CEs. Written responses to CEs were collected on paper, and the
free-response survey was administered online via a web-based course
management system.
Data analysis
Exam score means and standard deviations (%) at Setting 2:

             Treatment        Control
             Mean    SD       Mean    SD
Exam 1       88      8        80      22
Exam 2       80      23       57      34
Exam 3       81      19       66      28
Exam 4       52      33       41      28
Final exam   69      19       61      34
Avg. exam    75      14       61      23
The differences between the treatment and control groups ranged
from 3% to 6% at the first setting and from 8% to 23% at the second setting.
To visualize the differences in exam scores between the treatment and control
groups, student exam scores were also converted to z-scores, and the mean
z-scores of the two groups were plotted in Fig. 4. The figure shows how far
each group was from the mean score of the class (z-score = 0). To
analyze whether the effect of the intervention on exam scores was
statistically significant, independent t-tests were conducted to compare the
mean differences in average exam scores between treatment and control groups
(see Table 5). The results showed that the mean difference in average exam
scores was statistically significant at the second setting (t = 2.388,
p = 0.024) but not at the first setting (t = 1.588, p = 0.115). Because of
the relatively small sample sizes in our study, we also calculated and
reported effect sizes (Cohen's d) to quantify the size of the differences
between treatment and control groups. Effect size takes into account sample
size and the amount of variation in scores. It is independent of the
inferential statistics and allows us to move beyond "does it work?" to
"how well does it work?". Cohen suggested that d = 0.2, 0.5, and 0.8 be
considered small, medium, and large effect sizes, respectively (Cohen,
2005). As listed in Table 5, the average effect size for average exam scores
was 0.31 at the first setting and 0.75 at the second setting, considered
small and medium, respectively. More specifically, at the first setting, the
average exam score of the average student in the treatment group was 0.31
standard deviations above that of the average student in the control group,
and this difference between groups more than doubled at the second setting.
Fig. 4 Mean z-scores of treatment and control groups at the two settings.
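The z-score conversion behind Fig. 4 can be sketched as below; the scores and the subgroup split are hypothetical illustrations, not the study's data.

```python
from statistics import mean, stdev

def to_z_scores(scores):
    """Standardize each score against the class mean and standard deviation."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

# Hypothetical class scores (treatment and control pooled). A group whose
# mean z-score is above 0 scored above the class average.
class_scores = [55, 60, 70, 75, 80, 90]
z_scores = to_z_scores(class_scores)
treatment_mean_z = mean(z_scores[:3])  # hypothetical treatment subgroup
control_mean_z = mean(z_scores[3:])    # hypothetical control subgroup
```

By construction, the z-scores of the whole class average to zero, so the two group means sit on opposite sides of zero, as in Fig. 4.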
Table 5 Independent t-tests comparing average exam scores between
treatment and control groups

                          Mean difference
                          (treatment − control)   p-value   Effect size (Cohen's d)
Setting 1 (N = 144, treatment = 73, control = 71)
  Avg. exam               5%                      0.115     0.31
Setting 2 (N = 44, treatment = 25, control = 19)
  Avg. exam               14%                     0.024*    0.75

* Mean difference was significant at the 0.05 level.
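As a concrete illustration of the statistics reported above, the equal-variance independent t statistic and Cohen's d with a pooled standard deviation can be computed as in this sketch; the score lists are hypothetical, not the study's raw data.

```python
import math
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled

def independent_t(treatment, control):
    """Equal-variance independent-samples t statistic, derived from d."""
    n1, n2 = len(treatment), len(control)
    return cohens_d(treatment, control) * math.sqrt(n1 * n2 / (n1 + n2))

# Hypothetical average exam scores for two small groups
treat = [88, 75, 92, 70, 81]
ctrl = [72, 65, 80, 60, 74]
d = cohens_d(treat, ctrl)
t = independent_t(treat, ctrl)
```

Note that d is independent of sample size in a way the p-value is not, which is why it is reported alongside the t-tests for these small samples.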
Finally, the distributions of letter grade percentages (Fig. 5) and DFW rates
were compared between treatment and control groups at the two settings. As
shown in Fig. 5, the red bars are generally shifted toward the top compared
with the blue bars; that is, students in the treatment groups earned
higher grades overall than those in the control groups. At the first setting,
the difference in DFW rates between the treatment and control groups was
7%, with 37% for the control group and 30% for the treatment group.
Similarly, at the second setting, the treatment group obtained more A, B,
and C grades and fewer Ds and Fs, so that the difference in DFW rates
between the two groups was 8%, with 32% for the control group and 25% for
the treatment group. Chi-square analyses were carried out to determine
whether the differences in letter grades (DFW vs. non-DFW) between the two
groups were statistically significant. The results indicated a statistically
significant difference with a moderate effect size at the second setting
(χ2 = 4.565, p = 0.033, Cramér's V = 0.322), but no significant difference
at the first setting (χ2 = 1.350, p = 0.245, Cramér's V = 0.097).
Fig. 5 Letter grade comparison between treatment and control groups.
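The chi-square and Cramér's V computation for a 2×2 DFW vs. non-DFW table can be sketched as follows; the counts here are hypothetical, not the study's enrollment data.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # expected count under independence of group and outcome
            exp = row_totals[i] * col_totals[j] / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2

def cramers_v(chi2, n):
    """Cramér's V; for a 2x2 table min(rows, cols) - 1 = 1."""
    return math.sqrt(chi2 / n)

# Hypothetical counts: rows are treatment/control, columns are DFW/non-DFW
table = [[6, 19],
         [8, 11]]
chi2 = chi_square_2x2(table)
v = cramers_v(chi2, 44)
```

Because the table is 2×2, Cramér's V reduces to the phi coefficient, which is why a single formula suffices here.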
Comparing the effect of CEs on students' cognitive outcomes with the
medium effect size (Cohen's d = 0.57) that Hattie reported across six studies
on concept mapping (Hattie, 2009), we found a similar average effect size
(0.53) for the use of CEs on exam scores across the two settings. It is worth
noting that the impact of CEs on average exam scores at the second setting
was more than twice that at the first setting. We believe the difference
between the two settings is mainly due to the frequency and quality of the
implementation of CEs. First, the frequency of CE use was nearly doubled at
the second setting: CEs were used five times at the first setting but nine
times at the second. In terms of the assimilation theory of learning, the
higher frequency of CE use provided more opportunities for students to link
concepts and enabled them to link topics with shorter time gaps within the
course. These experiences allowed students to construct more coherent
understanding and to undergo more meaningful learning through the additional
CE activities, leading to a larger effect size. More importantly, the class
time that students in the treatment group spent on CEs at the second setting
was probably of higher quality. At the first setting, CEs were given as
in-class group activities alongside other traditional chemistry problems. At
the second setting, by contrast, the instructor pre-assigned the CEs to
students individually as homework before class, and the CEs were the only
problems students had to complete during the in-class group activities. As
such, students arrived in class with their own answers, ready to share and
discuss them with peers in groups. Students were also given sufficient time
to compile individual answers into group answers and to ask the instructor
for feedback on their correctness during class. This mechanism allows
adequate time for group discussion and student–instructor interaction, making
the implementation of CEs more efficient. The larger effect size at the
second setting could also be due to differences between the control groups
at the two settings: the control group at the second setting lacked
opportunities for group activity, whereas the control group at the first
setting completed a similar amount of group activity using traditional
assessments. In sum, the frequency and quality of implementation may be
important factors in enhancing the effectiveness of CEs on student
performance.
Conflicts of interest
The authors declare no conflicts of interest.
Acknowledgements
The authors would like to thank the students who participated in this
study. We also thank the Office of Institutional Research for providing the
demographic data for our study, and we acknowledge support from the
2018–2019 Research, Scholarship, and Creative Award at California State
University, Northridge.
References
1. Aguiar J. G. and Correia P. R. M., (2017), From representing to modelling
knowledge: proposing a two-step training for excellence in concept
mapping, Knowl. Manage. E-Learn., 9(3), 366–379.
2. Ambrose S. A., Lovett M., Bridges M. W., DiPietro M. and Norman M. K.,
(2010), How learning works: seven research-based principles for smart
teaching, San Francisco, CA: Jossey-Bass.
9. Cohen J., (2005), Statistical power analysis for the behavioral sciences,
2nd edn, Hillsdale: Lawrence Erlbaum Associates.
10. Cook E., Kennedy E. and McGuire S. Y., (2013), Effect of teaching
metacognitive learning strategies on performance in General
Chemistry courses, J. Chem. Educ., 90(8), 961–967.
13. Falk J. H., Dierking L. D., Osborne J., Wenger M., Dawson E. and
Wong B., (2015), Analyzing Science Education in the United Kingdom:
Taking a System-Wide Approach, Sci. Educ., 99, 145–173.
23. Nesbit J. C. and Adesope O. O., (2006), Learning with concept and
knowledge maps: a meta-analysis, Rev. Educ. Res., 76, 413–448.
24. Novak J. D., (1990), Concept mapping: a useful tool for science
education, J. Res. Sci. Teach., 27, 937–949.
27. Rich J. D., Colon A. N., Mines D. and Jivers K. L., (2014), Creating
learner-centered assessment strategies for promoting greater student
retention and class participation, Front. Psychol., 5, 1–3.
36. Vaitsis C., Nilsson G. and Zary N., (2014), Visual analytics in
healthcare education: exploring novel ways to analyze and represent
big data in undergraduate medical education, PeerJ, 2, e683.
39. Ye L. and Lewis S. E., (2014), Looking for links: examining student
responses in creative exercises for evidence of linking chemistry
concepts, Chem. Educ. Res. Pract., 15, 576–586.
40. Ye L., Shuniak C., Oueini R., Robert J. and Lewis S., (2016), Can they
succeed? Exploring at-risk students' study habits in college general
chemistry, Chem. Educ. Res. Pract., 17, 878–892.