
Running Head: STUDENT PERSPECTIVES ON COMPUTER-BASED TESTING

Student Perspectives for Improving Computer-Based Testing


Nathan Brandsma
Colorado State University
Fall 2014

Abstract
Computer-based assessments are becoming more and more common in K through 12 schools.
This study seeks to understand student perceptions and preferences regarding computer-based
assessments, in order to improve the quality of computer-based assessments in the classroom.
The participants studied were high school students at an affluent school where laptops are issued
to students. The results showed that the students were familiar with computers, had taken
computer-based assessments before, did not find the test confusing, felt positively about the
experience, and generally felt that the test was fair for all students and that it reflected their
knowledge of the content. The students were divided on whether they preferred a computer test
to a pen and paper test, and whether the test reduced their anxiety. The students liked the speed,
the visuals the test offered, and the type of question (multiple choice). Students disliked that
cheating could occur, and technological problems such as misclicks, battery problems and
internet difficulties. The students wanted the test changed to stop cheating, to add more
questions, and to allow writing on the test, both to annotate and to answer questions. Action
steps include stopping cheating, adjusting the format, and offering alternative assessments.

SECTION I
Introduction
Background
The public discourse about education over the last several decades has largely been
centered on reform and improvement. One of the strongest threads in the skein of educational
reform is technology. There has been a widespread belief that classrooms have lagged behind the
rest of society in the adoption of technology (General Accounting Office, 1995). As a result,
there has been a strong drive to update the technology in the classroom. While this development
has certainly been uneven, and many commentators see a digital divide between affluent and
impoverished school districts, the trend is toward increasing use of technology of various types
in the classroom (Garland, 2014).
The main focus of this interest in the application of technology in schools is the
computer. Computer skills are seen as crucial for the 21st Century economy (Trilling, 2009,
p. 169). Recently, many school districts have begun to loan each student a laptop or a tablet for
use for the duration of their time at the school. As such, the students at these schools are close
to the cutting edge of educational technology. These students are more wired than any previous
generation.
Rationale
The integration of these computers into each of the different content areas, however, has
been uneven. Some teachers use the computers in every class period, while others are more
reluctant or are unsure how to incorporate the technology (Ertmer, Ottenbreit-Leftwich, Sadik,
Sendurur, & Sendurur, 2012). This is especially true for the arena of assessment. Assessment
seems to be an area in which teachers have been slow to incorporate computer technology, aside
from the oft-used Scantron answer sheets. At the same time, efforts are underway at state levels to
move standardized tests from paper onto computers (Dillon, 2010). Over time, more and more
teachers will likely incorporate computer-based assessment into their classrooms.
The discourse and discussion about computers and computer-based assessment include
politicians, parents, researchers, teachers, budget committees, state bureaucracies, district boards,
school principals and policy-makers. There is certainly no shortage of stakeholders in this
discussion, but one very important group's voice is not explicitly found in the discussion.
Students are certainly being tested and researched frequently, and yet their voices and opinions
in the matter are strangely absent from the public debate (Spires, Lee, Turner, & Johnson, 2008,
p. 498).
Students' opinions and beliefs are important to this discussion, even if they are not
constructors of policy or researchers. In some ways it seems that in much policy deliberation,
students are seen less as individuals with unique opinions, and more as instruments of
international educational competition (Bidwell, 2013). Certainly students need to possess the skills necessary
for the 21st Century economy and citizenship in our democratic system. They also have opinions
and thoughts of their own, and we should include their voices in the discourse. Cook-Sather
(2006, pp. 359-360) argues "that young people have unique perspectives on learning, teaching,
and schooling; that their insights warrant not only the attention but also the responses of adults;
and that they should be afforded opportunities to actively shape their education."

The inclusion of student voices has been lacking in the discourse and research
surrounding the integration of technology in the classroom (Spires, Lee, Turner, & Johnson,
2008). That certainly does not mean that the voices of students are not important. Indeed,
listening to students can have a positive effect on the students and their academic achievement.
According to Brooks, Brooks and Goldstein, "When students feel their voice is being heard, they
are more likely to be engaged in academic requirements, work more cooperatively with teachers,
and demonstrate greater motivation to meet academic challenges" (2012, p. 19).
This study will both allow students to share their opinions and analyze their responses
to see what we can learn from them. As argued by Noguera (2007, p. 210), "Soliciting
and responding to the perspectives of students can serve as another means of insuring quality
control, and unlike so many other reform strategies-this one costs nothing." Giving students a
voice in their computer-based assessment can help improve student engagement and
performance, while at the same time their feedback can help us to craft better assessments.
Purpose
The purpose of this study is to examine student opinions on computerized testing in an
affluent high school social studies classroom in Fort Collins, Colorado, where each student is
issued a laptop for their time in high school. The teacher of the students who are the subjects of
this study has introduced computer-based assessments for the first time this year. The research
question that this study seeks to answer is: what can we learn from students in order to improve
computer-based assessments?

This study incorporates both quantitative and qualitative methods, in order to glean as
much and as varied information as possible from students. The instrument used includes survey
questions with Likert scale responses as well as open-ended questions that allow students to
describe what they experience, how the tests can be improved, and what they feel is important.
Hypothesis
As this research project has both quantitative and qualitative methods, my hypothesis is
limited to the quantitative questions. These are Likert scale questions of
opinion. Given that the students being researched attend a school in which all students are issued
a laptop, the hypothesis of this study is that students who are familiar with computers are likely
to have positive reactions to computer-based examinations and feel that the computer-based test
accurately reflects their learning.
Central Phenomenon
While the quantitative portion of this study will demonstrate the reactions of students to
computer-based assessment, the qualitative portion is an opportunity for students to describe in
their own words what they like and dislike about computer-based assessment, what they would
like changed, and why. Through these responses, the study will give insight into
students' beliefs about how computer-based assessments can be improved, especially for those
students who do not feel that the tests accurately reflect their learning.

SECTION II
Literature Review
Introduction
As described above, it is not only the public discourse that lacks the inclusion of student
opinions. The research on computer-based assessment has generally involved scores and
effectiveness, not student perspectives, opinions, or thoughts. The body of research
about student voices in regard to computer-based assessment is rather scant, but of the limited
extant research, one theme seemed to dominate, namely that students generally have positive
responses to computer-based assessment. This positivity seems to be correlated to the students'
previous familiarity with computers. Finally, there is a strong belief among researchers that
students' voices and opinions are important, and can positively contribute to the design of
assessments.
Paucity of Research
Extensive searches of ERIC, Google Scholar, JSTOR and EBSCO databases yielded few
results on student perspectives and voices on computerized testing. Searches included "student
voices", "student perspectives", "student opinions", and "student outlooks" combined with
"computer test", computerized test", "computer-based test", "computer assessment",
"computerized assessment", and "computer-based assessment." The paucity of research further
demonstrates the necessity of adding student voices to the discussion of computerized
assessment. As argued by Tierney and Charland (2007, p. 24) "Student voices, which are not
strongly heard overall in this body of work, could play a more significant role." This opinion is
echoed by Ogilvie, Trusk & Blue (1999, p. 828) "few studies have examined students' attitudes

STUDENT PERSPECTIVES ON COMPUTER-BASED TESTING

toward computerized testing." Spires, Lee, Turner, & Johnson (2008, p. 498) agree, stating,
"Noticeably absent from the dialogue are student perspectives." The view that student
perceptions, reactions, opinions and beliefs about computer-based testing are underrepresented is
widespread and reflects my difficulty in researching the subject. For the studies that have been
done, the theme of students' positive reactions to computer-based assessments is striking.
Positive Reactions of Students
In the few studies that explicitly address the reactions of test takers, many have found
positive reactions among students to computer administered assessments, though not without
exception. This variation appears to be based on computer experience and learning style
(Charman & Elmes, 1998; Ozden, Erturk, & Sanli, 2004; Spires, Lee, Turner, & Johnson, 2008;
Williams, 2007; Wilson, Boyd, Chen, & Jamal, 2011).
Over the academic years of 2008 and 2009, researchers Wilson, Boyd, Chen and Jamal
studied undergraduate students in a first-year geography course. The students were given
multiple choice formative assessments on computer, followed by feedback questionnaires to
assess student response to the experience. According to Wilson, Boyd, Chen and Jamal (2011, p.
1493), "Feedback questionnaires from both academic years reveals that students are
overwhelmingly positive with over 95 percent indicating that the computer-assisted practice tests
assist them in identifying their strengths and weaknesses and help them prepare for in-class
midterms and final exam." This study is striking in its results, but how generalizable are the
results? As the students were college students, the likelihood of familiarity with computers is
high, which, as another study shows, is correlated with positive views of computer-based
assessment.

In 1998, Charman and Elmes studied the introduction of a computer-based assessment for
a geography course for first-year students at the University of Plymouth. The focus of the study
was two-fold. First, the study examined whether the introduction of a computer-based
assessment improved scores. Second, and more important to this study, the researchers examined
student responses to the computer-based assessment. According to Charman and Elmes (1998),
"student response was generally positive, with 64 percent agreeing that the CBAs (Computer
Based Assessments) were a good way of learning, 68 percent agreeing that the feedback was
adequate and relevant and 56 percent agreeing that the CBAs were an improvement over other
forms of assessment." Here the results are less striking, but positive reactions are still in the
majority.
In 2004, researchers undertook a study to gain an understanding of students' perceptions
of computerized assessment and "to investigate the potential for using student feedback in the
validation of assessment" (Ozden, Erturk & Sanli, 2004, p. 79). The data was collected from 46
third-year students in the department of computer education at Kocaeli University. Participation
was voluntary. The students were given a questionnaire using a Likert scale. A follow-up
interview was conducted with randomly selected students from the study. The results of the study were that
"computer and assessment tool familiarity are the most fundamental key factors in the perception
of online assessment, especially for unfamiliar content and/or for low-attaining examinees"
(Ozden, Erturk & Sanli, 2004, p. 90). As this study demonstrates, student perspectives on
computer-based assessments are not universally positive. According to the study, there is a
spectrum of student response to computer-based testing. The students most familiar with
computers had the most positive responses to the computer-based assessments.

In 2008, Hiller A. Spires, John K. Lee, and Kimberly A. Turner, working at North
Carolina State University, undertook a massive research project involving 4000 sixth, seventh
and eighth grade students who were involved in afterschool programs. Sixty-three percent of the
students received free or reduced lunch, and 85% of the students scored at or above grade level
on standardized math and reading tests (Spires, Lee, Turner, & Johnson, 2008, p. 499). As such,
the students were not a representative sample of North Carolina, but could certainly offer
insights into how some middle-school-aged students felt about various subjects. The purpose of their
study was "to learn from middle grades students, through surveys and focus groups, what
engages them to achieve in school" (Spires, Lee, Turner, & Johnson, 2008, p. 497). The study
covers a wide range of pedagogical topics related to engagement, but technology was certainly a
central subject. Computers were very popular with the subjects of this study, to the point that,
"using computers is the one activity that all ethnic groups referred to as their favorite activity in
school" (Spires, Lee, Turner, & Johnson, 2008, p. 511). Likely a certain amount of this
excitement carries over to computer-based assessment.
Sansgiry and Bhosle's study of pharmaceutical students looked at a very specific type of
assessment, namely timed PowerPoint quizzes. The participants were first- and second-year
students. The results showed that students slightly preferred the computer-based assessment,
though not by a large margin. Also, the students were generally positive about their experience,
though not strongly so. The students found the technique "somewhat exciting" and "slightly
more interesting" (Sansgiry & Bhosle, 2004, p. 3). The study was limited in scope, both in terms
of the number of participants and the part of the overall population they represent. That
said, this study does show the same theme of positive responses to computer-based assessment.

In 1999, a study was conducted of medical students in a first-year cell biology and
histology course at the Medical University of South Carolina. These students were given
computer-based extra credit formative assessments. The students then filled out an evaluation
form, and the results were very positive. Two hundred of 202 students reported the computer
exam experience as either highly or moderately useful (Ogilvie, Trusk & Blue, 1999, p. 829).
This study again shows a strong positive opinion among students, though the students were
likely quite familiar with computers.
Conclusion
The theme of positive reactions to computer-based assessments, particularly among
populations familiar and comfortable with computers, runs through the research. Much of that
research has been conducted on populations that are arguably more likely to be familiar with
computers, such as university and medical students. There has been far less research studying the
perceptions of students in K through 12 classes, where populations are more varied in regard to
their familiarity and comfort with computers. This study will be conducted at an affluent school
in which students are issued individual laptops for their entire high school experience. As such, I
hypothesize that the students in this study are familiar with computers and thus will likely have
quite positive perceptions of computer-based assessments. The literature on student responses to
computer-based assessment contains very little work in K through 12 schools. This study will
contribute the voices of high school students where they are missing, both at the high school in which
the study takes place as well as in the larger discourse about computer-based assessment.

SECTION III
Methodology
Context and Setting
The subjects of this study are the students in two high school history classes at Fossil
Ridge High School in Fort Collins, Colorado, in Poudre School District. Poudre School District
had 26,085 pre-kindergarten through 12th grade students enrolled in the 2012-13 school year,
with 1436 teachers (Poudre School District, 2013). In 2012-2013, the students in this district
were 74.31% white, 17.93% Hispanic, 3.06% Asian, 1.37% Black/African American, 0.53%
Native American, 0.13% Native Hawaiian/Pacific Islander, and 3.15% two or more races (Poudre
School District, 2013).
Fossil Ridge High School had 2094 students in the 2013-2014 school year (Colorado
Department of Education, 2014). These students are served by 124 teachers (Poudre School
District, 2013). The student population in the 2011-2012 school year consisted of 1% American
Indian/Alaskan, 2% Asian/Pacific Islander, 1% Black/African American, 7% Hispanic, 86%
White, and 3% Two or more Races (National Center for Education Statistics, 2013).
The study took place in two different classes, one a ninth grade world history course and
one an eleventh grade United States history course. The ninth grade world history course
comprised seventeen girls and fourteen boys. The class was 97% white and 3% Asian. The
eleventh grade US history course had thirteen girls and seventeen boys. Demographically, the
class was 6.5% Asian, 6.5% Hispanic, 6.5% two or more races, and 80.5% white.

Fossil Ridge High School is an appropriate setting for this action research project. All of
the students at Fossil Ridge are issued laptops through the high school. This means that teachers
can easily utilize these laptop computers in assessment. The teacher of the classes described
above has in fact introduced computer-based assessments this year in lieu of pen and paper
assessments. This research project is an opportunity to improve that process.
Participants
The participants in this study were the students in the US history and world history
classes at Fossil Ridge. The intention of the researcher was to collect data from every student in
both classes, but due to absences only 28 students in the world history class and 25 students in
the US history class filled out the survey and questionnaire. Due to the interest in anonymity to
allow for more candid responses, the surveys and questionnaires do not contain demographic
information. The sample is nearly all of the students in the two classes. Each student who did fill
out a survey completed it in its entirety, both quantitative and qualitative sections.
Study Design
This study is a mixed methods study, including both qualitative and quantitative
elements. The quantitative Likert survey, attached below as Appendix A, allows for comparison
between students' reactions to different statements. The results of the students' responses in both
classes are analyzed as percentages that can give us strong numerical evidence of how the
students feel. These Likert scale responses are not designed to correlate variables, but rather to
give us an overall sense of how the classes feel about the subjects in the questions.
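
To illustrate the kind of tally involved, the following is a minimal sketch in Python of how a percentage breakdown can be computed from raw Likert responses. It is a hypothetical illustration only; the actual tallies were made by hand from the paper surveys. The example data mirrors the breakdown reported below for the first statement in the US history class.

```python
from collections import Counter

# The five-point Likert scale used on the survey instrument.
SCALE = ["Strongly Agree", "Agree", "Neither", "Disagree", "Strongly Disagree"]

def percentage_breakdown(responses):
    """Return the percentage of respondents who chose each scale point."""
    counts = Counter(responses)
    total = len(responses)
    return {point: round(100 * counts[point] / total) for point in SCALE}

# Hypothetical raw responses matching the reported breakdown for
# "I consider myself computer literate" in the US history class (n = 25).
responses = (["Strongly Agree"] * 11 + ["Agree"] * 11 +
             ["Neither"] * 2 + ["Disagree"] * 1)
print(percentage_breakdown(responses))
# {'Strongly Agree': 44, 'Agree': 44, 'Neither': 8, 'Disagree': 4, 'Strongly Disagree': 0}
```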

The qualitative portion of the survey was designed with open-ended questions to allow the
students to share their own ideas about what they liked, disliked, and what they would like
changed about the computer-based assessment. This is important, because the students are able
to share a wide range of opinions without being constrained by a quantitative survey. The use of
both quantitative and qualitative elements in this study provides the greatest range of information
to researchers for the purpose of improving their computer-based assessments.
Data Sources
The instrument used to record student perceptions was a survey on paper with two
sections, filled out with a pen or pencil to circle the Likert scale responses and written responses
to the qualitative questions. The first section contains a series of eight statements with five Likert
scale responses from Strongly Agree to Strongly Disagree. The statements were: "I consider
myself computer literate"; "I have taken tests on the computer before"; "I prefer to take tests on a
computer, rather than with pen and paper"; "The computer-based exam was a positive
experience"; "The computer-based exam was confusing"; "I think the computer-based exam was
fair for all students"; "I am less anxious with a computer-based exam"; and "I felt that the
computer-based exam accurately reflected my knowledge of the content" (see Appendix A).
The second section of the survey contained open-ended questions for the students to
answer. The questions were: "What did you like about the computer-based exam? Why?"; "What
did you dislike about the computer-based exam? Why?"; "What would you like changed about
the computer-based exam? Why?"; and "How could the computer-based exam be improved to
more closely reflect your learning?" (see Appendix A).

Researcher's Role
The researcher is a practicum teacher in the classes in which the study was undertaken.
As such, the researcher spent much of the time in the classroom as an observer, with an
occasional lesson taught. The researcher was not the designer of the computer-based assessment,
but did observe the assessment. The researcher designed the study and the instrument, and
analyzed the data.
Validity, Reliability, and Credibility
The surveys and questionnaires were distributed directly after the completion of the
computer-based assessment, so that the students had the experience of the test fresh in their
minds. This was done to try to get the most accurate responses possible from the student
participants. The researcher felt that keeping the temporal proximity close helped to ensure that
participants' thinking and judgment were closest to the reactions felt during the computer-based
assessment.
The Likert scale, created in 1932, allows research participants to choose from a series of
possible responses to a given statement, thus allowing for a range of opinions that can at the
same time be analyzed statistically (Likert, 1932). Widely used, the Likert scale is intuitive and
easy for participants to understand, increasing its validity. The statements were
crafted to be accessible, simple, and easily read and understood. The reliability of the survey is
also bolstered by this simplicity of statement and responses. Students can easily transfer their
opinions into the Likert scale of five possible responses.

The surveys were filled out anonymously, because the researcher felt that the results
would more accurately reflect the students' beliefs. If students put their names down, they may
have felt some pressure to give responses that they thought the researcher desired to hear.
This possibility may also have been mitigated by the fact that the researcher does not grade the
students' work.
One difficulty that affected the research process was a formatting problem on the test
given to the world history class. In that test, the images used for many of the questions on the test
were located together at the top of the test, which led to students needing to scroll back and forth.
This may have skewed the results in that class compared to the US history class.
Another limitation was the choice of the word "neither" for the middle of the Likert scale.
One of the quantitative statements was "I prefer to take tests on a computer, rather than with pen
and paper." The intended meaning of "neither" was that the student had no preference between
the two types of test. Instead, students may have circled "neither" because they wanted to take
neither a computer nor a pen and paper test. Since the response could be read either way, the
results of this question cannot be interpreted with certainty.
Ethical Issues
The ethical issues surrounding this study were slight. The study did not contain any
alterations in teaching, aside from taking a few minutes for the survey to be filled out by
students. The researcher had taught several lessons in the class, so the notion of consent to
participate in the study may be slightly fraught due to the potential power dynamic between the
researcher and the participants. As a novice teacher, the researcher had some influence in the
classroom and this may have caused students to feel that they needed to fill out the survey as
though it were class work.
The researcher worked to mitigate this potential ethical problem by making it clear to the
class that the survey was voluntary and that their grade was not in any way affected by their
participation in the study.
Procedures and Timeline
The instrument was administered after the completion of a computer-based unit
assessment in both classes. Students were told what the purpose of the study was, namely to work
to improve computer-based assessments through their voices and opinions. Students who wished
to participate were instructed to fill out both sections of the survey instrument and were given as
much time as they needed to
finish. The surveys were then handed back to the researcher. The process only took about ten
minutes in each class.

SECTION IV
Data Analysis
Quantitative
US History Class.
The following chart breaks down the student responses in the US history class by
percentages:

| Statement | Strongly Agree | Agree | Neither | Disagree | Strongly Disagree |
|---|---|---|---|---|---|
| I consider myself computer literate. | 44% | 44% | 8% | 4% | 0% |
| I have taken tests on the computer before. | 60% | 40% | 0% | 0% | 0% |
| I prefer to take tests on a computer, rather than with pen and paper. | 16% | 24% | 36% | 12% | 12% |
| The computer-based exam was a positive experience. | 16% | 52% | 32% | 0% | 0% |
| The computer-based exam was confusing. | 0% | 4% | 12% | 40% | 44% |
| I think the computer-based exam was fair for all students. | 16% | 56% | 16% | 12% | 0% |
| I am less anxious with a computer-based exam. | 8% | 32% | 48% | 12% | 0% |
| I felt that the computer-based exam accurately reflected my knowledge of the content. | 12% | 44% | 12% | 32% | 0% |

Familiarity with computers was high in the US history class, which was not surprising
given that all students have laptops. For these students, 88% either agreed or strongly agreed
with the statement, "I consider myself computer literate." Two students circled "Neither," and a
single student disagreed. None of the students in this class strongly disagreed.
In response to the statement "I have taken tests on the computer before," the US history
students all either agreed or strongly agreed, with the majority, 60%, strongly agreeing. This
class is thus experienced with computer-based assessments.
The next statement, " I prefer to take tests on a computer, rather than with pen and
paper," the responses in the US history class were fairly evenly split, with 40% agreeing or
strongly agreeing. Twenty four percent of the students disagreed or strongly disagreed. Neither
was the choice of 36% of the students. It seems that the US history class is ambivalent about how
the tests are taken, though the 36% who answered neither may have been expressing that they
would prefer to take neither test, rather than that they do not have a preference about what format
the test is taken in (as described in the methodology section).
These results are slightly in conflict with the results of the following statement, "the
computer-based exam was a positive experience." In the US history class, 68% of students
agreed or strongly agreed. None of the students disagreed or strongly disagreed, so it would seem
that none of the students felt that it was a negative experience. Thirty-two percent felt it was
neither positive nor negative. It seems that though many of the students would prefer to take the
assessment with pen and paper, they still felt positively about the computer-based assessment.
Confusion was certainly not a problem for the US history class. No one strongly agreed
with the statement "the computer-based test was confusing." Only four percent, or one
student, agreed, while "neither" was the response for 12%. Eighty-four percent of the US history
class disagreed that the test was confusing, with 44% strongly disagreeing.
Students in the US history class also tended to believe that the computer-based exam was
fair for all students. Fifty-six percent agreed with the statement, "I think the computer-based
exam was fair for all students," but only 16% strongly agreed, for a total of 72%. Sixteen percent
said neither, while 12% disagreed and none strongly disagreed. Overall the US history students
felt that the exam was fair for all, but not strongly.
The next statement, "I am less anxious with a computer-based exam," showed that while
overall the US history class felt less anxious, the most often chosen response was neither, with
48%. In terms of student agreement, 8% strongly agreed and 32% agreed. As for disagreement,
12% disagreed but none strongly disagreed.
The final statement, " I felt that the computer based exam accurately reflected my
knowledge of the content," was generally agreed with. In the US history class, 44% agreed and
12% strongly agreed. Twelve percent said neither, 32% disagreed but none of the students
strongly disagreed. Thus approximately a third felt slightly that the test did not reflect their
knowledge of the content. This could be due to the questions on the test, not the computer-based
assessment itself.

World History Class.


The following chart breaks down the student responses in the world history class by
percentages:
| Statement | Strongly Agree | Agree | Neither | Disagree | Strongly Disagree |
|---|---|---|---|---|---|
| I consider myself computer literate. | 36% | 50% | 11% | 0% | 4% |
| I have taken tests on the computer before. | 75% | 25% | 0% | 0% | 0% |
| I prefer to take tests on a computer, rather than with pen and paper. | 21% | 18% | 14% | 36% | 11% |
| The computer-based exam was a positive experience. | 25% | 32% | 29% | 14% | 0% |
| The computer-based exam was confusing. | 0% | 11% | 18% | 39% | 32% |
| I think the computer-based exam was fair for all students. | 29% | 32% | 21% | 14% | 4% |
| I am less anxious with a computer-based exam. | 14% | 21% | 46% | 14% | 4% |
| I felt that the computer-based exam accurately reflected my knowledge of the content. | 21% | 46% | 21% | 11% | 0% |

In the world history class, 86% of students either agreed or strongly agreed with the
statement, "I consider myself computer literate," while eleven percent reported neither, and no
students disagreed. One student strongly disagreed, though given that none of the students
disagreed, this appears to be an outlier.
The world history class certainly has experience with computer-based assessments, as
75% of the students strongly agreed with the statement "I have taken tests on the computer
before," and the other 25% agreed. None of the students said neither or disagreed.

The world history class overall tended toward disagreement with the statement "I prefer
to take tests on a computer, rather than with pen and paper." Here, 39% of the students agreed or
strongly agreed that they prefer to take tests on a computer. Forty-seven percent disagreed or
strongly disagreed, which was a surprisingly high number. Only 14% of the world history class
circled "neither" in response to this statement. This seems to indicate that almost half of these
students would rather take a test with pen and paper than on the computer, while about one in
seven did not care either way. More students preferred pen and paper to the computer-based assessment.
The world history class felt generally positive about the computer-based test, where 57%
of the students felt that the test was a positive experience. Twenty-nine percent circled neither,
and 14% disagreed. None of the students strongly disagreed. Thus the world history class tended
toward positive reactions, though many did not have an opinion. This positive response is
surprising given the preference for pen and paper tests.
The world history class did not find the test confusing. Seventy-one percent of the
students disagreed or strongly disagreed with the statement, "The computer-based exam was
confusing." Eighteen percent said neither, while 11% agreed. None of these students strongly
agreed.

The world history class also tended to see the exam as fair: 61% of the students agreed or
strongly agreed with the statement, "I think the computer-based exam was fair for all students,"
while 21% circled neither and 18% disagreed or strongly disagreed.
The world history class had a bit more anxiety with the computer-based exam, with 14%
disagreeing and 4% strongly disagreeing with the statement, "I am less anxious with a
computer-based exam." The most popular answer was neither, which garnered 46% of the responses.
Thirty-five percent agreed or strongly agreed that they were less anxious with the computer-based exam.

Finally, the world history class generally agreed with the statement, "I felt that the
computer-based exam accurately reflected my knowledge of the content." Sixty-seven percent of
the students either agreed or strongly agreed. Twenty-one percent circled neither, and 11%
disagreed. None of the world history students strongly disagreed.
Comparisons and Contrasts Between the Classes.
In response to the statement "I have taken tests on the computer before," all students in
both classes either agreed or strongly agreed, with the majority in both strongly agreeing. The
percentage was higher in the ninth grade world history class, where 75% of the students strongly
agreed, compared to 60% in the US history class. From these results, it is clear that even if some
students do not consider themselves computer literate, they certainly have experience with
computer-based assessments.
In terms of the preferences for test types, both classes had a range of answers. Overall,
the world history class was more interested in pen and paper tests than the US history class. The
differences were not stark, however. The preference of test type was reflected in the feelings
about the test, where the world history class felt less positive about the computer-based
assessment than the US history class. This discrepancy may be due to the formatting problem
described in the validity, reliability and credibility section. Whatever the reason, the world
history class was slightly less positive, though the majority of students in both classes felt
positively about the test.

Neither class felt that the computer-based test was confusing, but the world history class
was very slightly more confused. The US history class had higher numbers in agreement, while
the world history class had slightly higher numbers in the neither and disagree categories.
Both classes felt that the computer-based assessment was fair for all students, but, given
that the world history class was generally less positive and more likely to prefer pen and paper, it
was a bit surprising to find that they had stronger agreement that the test was fair for all students.
The percentage breakdowns for the responses about test anxiety were remarkably similar
between the two classes. In the US class, 48% said neither, while 46% said neither in the world
history class. Twelve percent disagreed in the US class, quite close to the 14% in the world history class.
The difference was in the agreement, where the US history class was slightly less anxious.
Overall it seems computer-based assessments can help reduce test anxiety, but for many students
the type of test given does not affect their anxiety.
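
The class-to-class comparisons above repeatedly collapse the five scale points into overall agreement and disagreement rates. The following is a minimal sketch of that aggregation, using the reported breakdowns for the anxiety statement; the helper function is hypothetical and was not part of the original analysis.

```python
def agreement_summary(breakdown):
    """Collapse a five-point percentage breakdown into agree/neither/disagree rates."""
    return {
        "agree": breakdown["Strongly Agree"] + breakdown["Agree"],
        "neither": breakdown["Neither"],
        "disagree": breakdown["Disagree"] + breakdown["Strongly Disagree"],
    }

# Reported breakdowns for "I am less anxious with a computer-based exam."
us_history = {"Strongly Agree": 8, "Agree": 32, "Neither": 48,
              "Disagree": 12, "Strongly Disagree": 0}
world_history = {"Strongly Agree": 14, "Agree": 21, "Neither": 46,
                 "Disagree": 14, "Strongly Disagree": 4}

print(agreement_summary(us_history))     # {'agree': 40, 'neither': 48, 'disagree': 12}
print(agreement_summary(world_history))  # {'agree': 35, 'neither': 46, 'disagree': 18}
```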
So what can we glean from these data sets? First, we can tell that these students, who
spend so much time with the laptops distributed to them, consider themselves quite computer
literate, are experienced with computer-based examinations, and do not generally find them
confusing. Given this familiarity, this study hypothesized that the students would have positive
views toward the test, as previous research has shown. The results were far less clear-cut. While
overall the classes felt that the computer-based exams were a positive experience, the students'
responses were more ambivalent than the hypothesis predicted. This could be due to how many
computer-based exams the students have taken: as the exams become routine, familiarity may no
longer produce as positive a view. Another possibility is that high school students may not
appreciate exams in general, so the notion of a positive experience of an
exam may be less likely to occur. Whatever the cause, the results show that the correlation
between familiarity with computers and positive reactions to computer-based assessments is
more complicated than earlier research suggested.
Students also seem to be less anxious overall when taking a computer-based assessment, but the
results suggest that the difference is not enormous. These results encourage the continued use of
computer-based assessment, at least with the students in this setting. The results also raise
further questions, such as why some of the students feel negatively toward computer-based
tests and how the test can more accurately reflect their knowledge. This is why the study
included a qualitative portion; the researcher desired answers to these questions.
Qualitative
The qualitative portion of the survey contained four open-ended questions for the
participants (see Appendix A). Both classes received the same questions, and nearly all of the
participants responded in some way to each of the questions. Most of the responses were a
sentence or two in length, and none exceeded five sentences. Despite the brevity of the
responses, several themes emerged for each question.
The data from the qualitative portion of the survey was analyzed as a whole, based on
themes, rather than by class. There were two reasons for this. The first was that the data between
the two classes was very similar. The second was that there was not enough collected data to
divide the responses by class. Each theme could be more powerfully articulated using the
responses from both classes, rather than individually. Each subsection below is titled with the
open-ended question that the students answered.

What did you like about the computer-based exam? Why?


The first question asked what the students liked about the computer-based assessment and
why. Three themes recurred in the student responses: that the computer-based assessment was
fast or efficient, that it was easier and the colored maps and images were helpful and pleasant,
and that it was less stressful.
Theme 1
Students in the world history class described the computer-based assessment as
"quicker," "faster and easier," and "it is quick and easy to take," and "it is fast and easy." One of
the responses seemed to indicate that the notion of speed was subjective, such as one student
who wrote that "it felt a lot faster than a written test."
The US history class shared the sentiment. One student said, "it moves quickly and is
easy to follow," while another said "it's faster; the multiple choice is faster." The comparison
with pen and paper was made explicit by one student, who wrote, "it felt much more efficient
than bubbling or circling answers with a pen on a paper." Other responses included, "faster," "I
also think it goes quicker," and "the computer-based exam was faster, and more efficient."
Clearly many of the students in both classes appreciated that the test was either faster or felt
faster.
Theme 2
A second theme that emerged from the student responses in both classes was that the
computer-based exam was easier. "Easier" was a larger theme that encompassed such topics as
ease of reading, ease of analyzing, and ease of test-taking. Several of the world history class
students felt that the test was easier due to the fact that all the questions were multiple choice.
One student wrote that they like "that it is multiple choice because it is easier." Another agreed
with that sentiment, stating "I liked that it was multiple choice." A third wrote that they liked
"multiple choice because it is easier than writing a response," and another said "it was easier to
just click a button than write a sentence." Similarly, another stated that it "wasn't as tiring. I
didn't have to write a lot."
Along with the ease of test taking, several world history class students commented on the
ease of reading the images. One student wrote, "there are pictures. They are clear and easy to
see/interpret." Another said, "accurate images and coloring" and "I like having a picture to look
at."
These sentiments were shared by the US history class. One student in that class liked that
it "didn't have to be writing." For another, "it makes testing seem easier." The US history
students also found the presentation easier to engage with. One said, "I liked that the maps were
easy to read, whereas on paper they can come out too dark or blurry and you can't read
anything." Another said, "I like that it is always in color." A third wrote, "I can see the color in
the pictures which helps me understand and answer questions." Another had a similar thought,
explaining, "I liked the clear visuals of the exam, because I feel the maps and pictures helped me
to understand or process the questions better." Finally, one commented, "I enjoyed how pictorial
diagrams could be conveyed and interpreted much more easily."

Theme 3
Another theme is that the test was less stressful. For one US history student, "they are
less stressful." Another wrote, "it gives the illusion of being an assignment or quiz rather than a
test, so I can think clearer." A third: "computer based exams just feel less stressful for some
reason." Students in the world history class agreed: "I like how there doesn't seem to be as much
pressure. You can just sit there all relaxed."
What did you dislike about the computer-based exam? Why?
The main themes that students described in response to the question "What did you
dislike about the computer-based exam? Why?" were technological problems with the laptops,
the need to scroll and the inability to annotate, and, most worrisome, cheating.
Theme 1
Technological difficulties were the first theme that emerged in response to what students
disliked. As one student said, "It sometimes doesn't work." Another student was more specific:
"What would happen if you had computer problems or slow internet?" It is not just internet
connections that can be a problem. For one student, "Sometimes I bring my computer to school
charged and because it is 3rd period my battery is already drained."
Beyond battery draining, another common problem was the difficulty of using the mouse
pad. As the mouse pad operates both the motion of the cursor and the clicks of the mouse, many
students had trouble with errors in clicking and inadvertently changed answers. A US history
student put it thusly: "I accidentally changed my answers a few times by using the mouse pad."
Another student had similar problems: "I don't like how easy it is to click the wrong answer
without realizing it." Finally: "I don't like how much easier to make a mistake (i.e. tapping a key
and changing your answer accidentally without knowing)."
Other students had problems with the screen. For some students the exam was "harder to
read because of the screen brightness and font," "The text on the computer hurts my eyes," and
"sometimes the screen is hard to follow." The primary source documents used in the test are
given a gray hue. For one student, "the light grey font on the paragraphs under the question was
hard to read."
Theme 2
A second theme that students described that they disliked was cheating. As one student
wrote, "there is always a temptation to cheat using the internet, for everyone." Other students
shared this concern: "I disliked the possibility for all students to cheat," "it would be very easy
for someone to cheat," and "more tempted to cheat and look up a website for answers." Three
students were less speculative about the possibility, instead describing the cheating with
certainty: "I dislike that some people are cheating," "Cheating, people asking other through lync
to get answers, or just looking up answers," and "I've seen a lot of kids cheat."
Theme 3
The necessity of scrolling through the test was something that students disliked as well.
This feeling was much more acute among the world history class. This was likely due to the
formatting error that placed all of the assessment's images at the top of the page, so students had
to scroll each time a question was asked about an image.

In the US history class, several students lamented their inability to annotate the text: "the
quotes cause they couldn't be annotated on," "I don't like how you can't annotate the questions,
because it makes it harder to pick out key details and information that I need," "I like to write
notes and thoughts down when take a test, and if it is computer-based, I can't do that."
What would you like changed about the computer-based exam? Why?
The responses to the question "What would you like changed about the computer-based
exam? Why?" were related to the things that students disliked about the exams. Quite a few
students were satisfied with the current exam and would like nothing changed. The other
prominent themes in the responses were less scrolling, the ability to write on the test, stopping
cheating, and more variety in questions.
Theme 1
In the world history class, seven students responded to the formatting error that caused
the images for all questions to be at the top of the page and thus necessitated scrolling between
question and image. This sentiment extended to the US history class, where one student wrote:
"Instead of putting a map at the beginning of a section. Put it next to the actual question so that
you don't have to keep scrolling through the test to find it again." Another gave a suggestion in
conjunction with their desire to limit scrolling: "I would like it to be one question per page
instead of scrolling." For another student, "perhaps make each question required so you can not
submit with out answer them all. I would like that cause sometimes I feel as though I skipped
one."

Theme 2
Some students wanted to be able to write on their tests, an option not given by the current
format. For some, this meant the ability to annotate: "Be able to write on them! Like paint or
smartboard technology. It would improve understanding of quotes." Another: "If there was a way
to be able to highlight key information, that would help me figure out what the question in
asking." Writing on the exams is important to students for more than just annotation. One
student answered, "We should write the essays on the computer," and another said, "make an
option that the written portion can be typed in the test."
Theme 3
Several students wrote about changing the test to stop cheating: "Somehow monitor them
so no cheating could happen." "Needs to be more well monitored, cheating would be very easy."
"I would like it to be changed to where it is more like a maps test, where there is no way to cheat
using the internet."
Theme 4
Other students desired a greater variety of questions or question types. Some examples of
this request include: "Less multiple choice." "Add more questions because there weren't
questions that covered all the content." "perhaps some short answer questions on the computer
test would be nice." "More questions." "Add more variety to the questions."

How could the computer-based test be improved to more closely reflect your learning?
The final qualitative question was "How could the computer-based test be improved to
more closely reflect your learning?" The student responses were similar to the answers to the
previous question about what they would like changed about the test. The most prominent
themes for this section were the addition of more questions, adding other types of questions, and
written response questions.
Theme 1
The students asked for more questions in a number of ways: "The computer-based test
could have more questions so that if one simple small thing is forgotten it won't have as big of an
impact." "Add more questions." "I think more questions would have been nice." "more multiple
choice questions." and "more questions."
Theme 2
Several students felt that their learning could be better reflected through the use of a
greater variety of questions. The students wrote: "Have more than just multiple choice like
matching, true false" "There should be multiple answer question, like 'choose the reasons'" "It
should show more diagrams and be more creative, rather than just multiple choice questions" and
finally, "have fill in the blank questions."
Theme 3
Other students felt that writing should be part of the computer-based assessment. These
students wrote: "Involve writing about the topics." "There could be more open ended questions."

STUDENT PERSPECTIVES ON COMPUTER-BASED TESTING

33

"Some short answer questions would help improve my learning more." "Short answer questions
should be asked."
Theme 4
Finally, one student described how a test must be individualized to truly reflect a
student's learning. The student wrote, "I think that in order to truly reflect one's learning, tests
have to be more tailored to the individual, because everyone learns differently/ has different
strong suits/etc. but that's kind of an impossible dream considering the number of students here,
or anywhere, really." This response, not shared by other students, shows that at least one student
feels that the uniqueness of each individual makes it difficult for any standard exam to reflect an
individual's learning.
The students in these two classes seem to have some divergent opinions. Some like
multiple choice, while others would like a greater variety of question types. Some like not being
required to write on their computer-based test, while others feel that their learning could be better
reflected through writing. Commonly held beliefs are that anti-cheating measures need to be
implemented, and that fixing problems with technology and formatting would improve the test.
SECTION V
Conclusions and Recommendations
Dissemination Plan
The research found in this paper is of use to anyone who administers or creates computer-based
assessments for use in the classroom. The first person to receive this research will be the
teacher of the classes in which these tests were administered. This research will also be shared
with the master's committee and with any member of the department or school who is planning
on or currently using computer-based assessments.
Action Plan
Learning
So what can we learn from students in order to improve computer-based assessments? It
turns out that we can learn quite a bit from surveys and open-ended questionnaires. The research
showed that familiarity with computers does not immediately correlate with positivity toward
and preference for computer-based assessments. Nor do computer-based assessments necessarily
relieve students of test anxiety.
The research also showed that students are concerned about cheating, have formatting
preferences, and would like more and varied questions to show their learning, including writing.
That said, these beliefs were not universally shared. Some students wanted to write, while others
were glad not to write.
Actions
The computer-based assessment needs to be scrutinized with the intent of stopping any
possible cheating. The formatting of the test could be improved, according to several students,
and this might be done without too much difficulty. Finally, the desire of students for more
questions and for a larger variety of questions is certainly worth considering.

Cheating
In response to the data collected by this research study, the first step must be to address
the problem of cheating. The computer-based assessment is administered through Google, so the
test is online. This complicates the process, as it requires an internet connection to function. This
potentially opens up the possibility of using the internet in a different tab or browser to cheat.
Research into if and how other instructors have dealt with this problem is a good place to start.
If this does not yield any solutions, it is worth looking into other ways of creating computer-based
assessments in order to find a way to administer the tests that prevents students from
accessing the internet to seek out answers to the questions. Consultation with other teachers to
see how they manage to stop cheating using the internet is another method of addressing the
problem. One student referred to the "maps tests" as an example of a computer-based test that
could not be cheated on. Finding out what this test is and how it works is another avenue of
addressing this problem.
Formatting
Given that students have expressed concerns about the font, size, and color of the text,
the teacher could see if it is possible to darken the text of the quotes so it is easier to read.
Increasing the size of the font might be possible on an individual basis, so students could tailor
the size of the text to their preference on their own laptop. This might also be done with the font,
depending on the nature of the program that hosts the tests.
The formatting problem that caused all of the images to be at the top of the page should
be corrected, so that images are found next to the question that addresses them. This should be an
easy fix, and one that will make the assessment more pleasant for students with minimal effort
from the instructor or test designer.
Annotation is another function that may be possible within the current computer-based
assessment format. This will require some research on the test to see if there are ways to go
about this. Students could also use paper in conjunction with the computer-based assessment,
allowing them to write and annotate quotes. This is also a problem with a number of possible
solutions.
Question types
The inclusion of test questions beyond multiple choice seems like a good way to improve
the assessment of students. True/false, matching, fill-in-the-blank, and short answer questions
could be added to the test in order to widen the range of knowledge demonstrated by the
assessment. The addition of these types of questions to the computer-based assessment could
help students share their learning.
As shown by the research above, some students like to have multiple choice and do not
like to write. Perhaps a better option would be to increase writing, through short answers as
suggested by some, or further essay questions. Several students argued that writing hurts their
hands. One solution would be to allow students to type their answers into the computer rather
than write them by hand. Adding this option would require another investigation of how to
prevent potential cheating using the computer.
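If the assessment stays in Google Forms, most of the requested question types map onto
built-in item types, so the change may be less work than it sounds. The sketch below, again
using Apps Script with placeholder prompts, adds true/false, fill-in-the-blank or short answer,
and typed essay items; matching has no exact equivalent in Forms, so a grid item is used here as
a rough stand-in.

function addVariedQuestions(form) {
  // True/false as a two-choice multiple choice item.
  const tf = form.addMultipleChoiceItem();
  tf.setTitle('Placeholder true/false statement');
  tf.setChoices([tf.createChoice('True'), tf.createChoice('False')]);

  // Fill in the blank or short answer as a one-line text item.
  form.addTextItem().setTitle('Placeholder fill-in-the-blank prompt: ______');

  // A typed essay response, for students whose hands hurt when
  // writing by hand.
  form.addParagraphTextItem().setTitle('Placeholder essay prompt');

  // Matching approximated with a grid (rows = terms, columns = matches).
  form.addGridItem()
      .setTitle('Placeholder matching exercise')
      .setRows(['Term 1', 'Term 2'])
      .setColumns(['Definition A', 'Definition B']);
}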

Paper Test
Given that the quantitative data showed that not all students felt positively about the
computer-based assessment, providing the option of taking the test with pen and paper would let
students choose according to their test-taking preferences. This would allow those who wish to
annotate to do so, and give those who have trouble looking at a screen another option. There is
value in employing a variety of assessments in order to accommodate differing learning styles
and preferences; the computer-based assessment will be one of many tools for assessment.
Future
In the coming semester, the researcher plans to address the possibility of cheating on the
computer-based assessment. Additionally, the researcher will add questions to computer-based
assessments and increase their variety, including written prompts. The formatting of the
computer-based assessment will be revisited to reduce the likelihood of poorly placed images
and hard-to-read type. The researcher will also introduce tutorials showing students how to use
the tools at their disposal to adjust the font size to their visual preference. It may be possible for
students to annotate the test in its current format; if so, demonstrating how to do so would be
part of the tutorial. Finally, encouraging students to bring a mouse to use in lieu of the laptop's
trackpad may reduce accidental clicking on future computer-based assessments.

A teacher cannot please all of the students all of the time, but there are certainly ways to
improve and to meet the needs of a differentiated classroom. This research into student
preferences offers both ideas and concrete ways to improve computer-based assessments.

References

Barkley, A. P. (2002). An analysis of online examinations in college courses. Journal of
Agricultural and Applied Economics, 34(3), 445-458.
Bidwell, A. (2013). American students fall in international academic tests, Chinese lead the pack.
U.S. News & World Report. Retrieved November 3, 2014, from
http://www.usnews.com/news/articles/2013/12/03/american-students-fall-in-international-academic-tests-chinese-lead-the-pack
Bocij, P., & Greasley, A. (1999). Can computer-based testing achieve quality and efficiency
in assessment? International Journal of Educational Technology, 1(1).
Brooks, R., Brooks, S., & Goldstein, S. (2012). The power of mindsets: Nurturing engagement,
motivation, and resilience in students. In S. Christenson, A. L. Reschly, & C. Wylie
(Eds.), Handbook of research on student engagement. New York, NY: Springer.
Retrieved November 3, 2014 from
http://www.isacs.org/uploads/file/Annual%20Conference/Annual%202014/Brooks%20Student%20Engagement%20Chapter%20.pdf
Charman, D., & Elmes, A. (1998). A computer-based formative assessment strategy for a basic
statistics module in geography. Journal of Geography in Higher Education, 22(3), 381-385.
Colorado Department of Education. (2014). Fossil Ridge High School enrollment [Data file].
Retrieved November 25, 2014 from
https://edx.cde.state.co.us/SchoolView/DataCenter/reports.jspx?_afrWindowMode=0&_afrLoop=1861269398215013&_adf.ctrl-state=csjcmboo7_4
Cook-Sather, A. (2006). Sound, presence, and power: "Student voice" in educational research
and reform. Curriculum Inquiry, 36(4), 359-390.
Dillon, S. (2010). U.S. asks educators to reinvent student tests, and how they are given. New
York Times, p. A11.

STUDENT PERSPECTIVES ON COMPUTER-BASED TESTING

39

Ertmer, P. A., Ottenbreit-Leftwich, A. T., Sadik, O., Sendurur, E., & Sendurur, P. (2012).
Teacher beliefs and technology integration practices: A critical relationship. Computers
& Education, 59(2), 423-435.
Garland, S. (2014, May 14). How can schools close the technology gap and how much will it
cost? The Hechinger Report. Retrieved November 14, 2014 from
http://hechingerreport.org/content/can-schools-close-technology-gap-much-will-cost_15911/
General Accounting Office. (1995). School facilities: America's schools not designed or
equipped for 21st century (GAO/HEHS-95-95). Washington, DC: U. S. Government
Printing Office. Retrieved November 14, 2014 from
http://www.gao.gov/assets/230/221084.pdf
Gigliotti, R. J., Falk, R. F., Smerglia, V. L., & Neiswander, N. (1994). Computer-based testing in
sociology: A description and evaluation. Teaching Sociology, 32-39.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140,
1-55.
National Center for Education Statistics. (2013). Fossil Ridge High School [Data file]. Retrieved
November 25, 2014 from
http://nces.ed.gov/ccd/schoolsearch/school_detail.asp?Search=1&InstName=fossil+ridge+high+school&State=08&SchoolType=1&SchoolType=2&SchoolType=3&SchoolType=4&SpecificSchlTypes=all&IncGrade=-1&LoGrade=-1&HiGrade=-1&ID=080399001848
Noguera, P. A. (2007). How listening to students can help schools to improve. Theory into
Practice, 46(3), 205-211.
Ogilvie, R. W., Trusk, T. C., & Blue, A. V. (1999). Students' attitudes towards computer testing
in a basic science course. Medical Education, 33(11), 828-831.
Ozden, M. Y., Erturk, I., & Sanli, R. (2004). Students' perceptions of online assessment: A
case study. Journal of Distance Education, 19(2), 77-92.
Poudre School District. (2013). Fossil Ridge High School. Retrieved November 25, 2014 from
https://www.psdschools.org/school/fossil-ridge-high-school
Poudre School District. (2013). PSD demographics. Retrieved November 25, 2014 from
https://www.psdschools.org/about-us/psd-demographics
Sansgiry, S. S., & Bhosle, M. (2004). Students' attitudes toward PowerPoint timed
quizzes. American Journal of Pharmaceutical Education, 68(4), 85.

STUDENT PERSPECTIVES ON COMPUTER-BASED TESTING

40

Sheader, E., Gouldsborough, I., & Grady, R. (2006). Staff and student perceptions of
computer-assisted assessment for physiology practical classes. Advances in Physiology
Education, 30(4), 174-180.
Spires, H. A., Lee, J. K., Turner, K. A., & Johnson, J. (2008). Having our say: Middle grade
student perspectives on school, technologies, and academic engagement. Journal of
Research on Technology in Education, 40(4), 497-515.
Tierney, R. D. & Charland, J. (2007, April). Stocks and prospects: Research on formative
assessment in secondary classrooms. Paper presented at the annual meeting of the
American Educational Research Association, Chicago, IL. Retrieved November 10, 2014
from http://files.eric.ed.gov/fulltext/ED496236.pdf
Trilling, B., & Fadel, C. (2009). 21st century skills: Learning for life in our times. Hoboken, NJ:
John Wiley & Sons.
Williams, B. (2007). Students' perceptions of prehospital web-based examinations. International
Journal of Education and Development using ICT, 3(1).
Wilson, K., Boyd, C., Chen, L., & Jamal, S. (2011). Improving student performance in a
first-year geography course: Examining the importance of computer-assisted formative
assessment. Computers & Education, 57(2), 1493-1500.

Appendix A
Quantitative Instrument
Please circle the most accurate option below each question.
Please thoroughly answer the questions on the back side.
I consider myself computer literate.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

I have taken tests on the computer before.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

I prefer to take tests on a computer, rather than with pen and paper.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

The computer-based exam was a positive experience.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

The computer-based exam was confusing.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

I think the computer-based exam was fair for all students.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

I am less anxious with a computer-based exam.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

I felt that the computer-based exam accurately reflected my knowledge of the content.
Strongly Agree          Agree          Neither          Disagree          Strongly Disagree

Qualitative Instrument
What did you like about the computer-based exam? Why?
What did you dislike about the computer-based exam? Why?
What would you like changed about the computer-based exam? Why?
How could the computer-based test be improved to more closely reflect your learning?
