
A review of rubrics for assessing online discussions

Bobby Elliott
Scottish Qualifications Authority

Abstract
This paper explores current practice in the assessment of online forums. It does this
by reviewing the literature relating to this area of online learning, and extracting the
rubrics contained within that literature to discover best practice as defined by the
leading writers in this field.
Twenty rubrics were analysed to discover their common characteristics. These
rubrics were also reviewed for their quality in terms of validity, reliability and a new
characteristic called “fidelity”.
The results of this study show that there is an inconsistent approach to the
expression and definition of rubrics for the assessment of online discussions; that the
purpose of the rubrics appears to be confused; and that their validity and,
particularly, their fidelity are low.
The paper concludes by recommending that rubrics continue to be used for the
assessment of online discussions but that a more consistent approach is taken to
their construction and definition, and that current practice is changed to improve
validity and fidelity.
The characteristics of a “good” marking scheme are provided to help faculty to
develop rubrics.
Keywords: assessment, learning, discussion, forums, criteria, rubric, marking
scheme, online, writing.


Introduction
This introductory section seeks to provide background information on the key themes
in this paper, which are: online discussion boards, online writing, and assessment.

Educational benefits of discussion boards


The educational benefits of online discussion boards are well known and long
established. Jonassen, Davison, et al (1995) expressed their suitability for learning as
follows: “The dialogue serves as an instrument for articulation because in the process
of explaining, clarifying, elaborating, and defending ideas, cognitive processes
involving integration, elaboration and structurisation took place”.
Newman, Webb and Cochrane (1995) found that students contributed more outside
material and experiences, and integrated ideas better, when using online discussions.
Hoag and Baldwin (2000) report that students learn more in an online collaborative
class than in a face-to-face classroom, and they also acquire greater experience of
team work, communication, time management, and technology.
Some academics have claimed that asynchronous discourse is inherently self-
reflective and therefore more conducive to deep learning than synchronous discourse
(see Riel, 1995, for example). Berliner (1987) reported that the increased “wait time”
(the time available to discuss each topic) increases the opportunities for reflective
learning.
Swan, Shen, & Hiltz (2006) summarise the benefits of online forums:
“Many researchers note that students perceive online discussion as more equitable
and more democratic than traditional classroom discussions because it gives equal
voice to all participants. Online asynchronous discussion also affords participants
the opportunity to reflect on their classmates’ contributions while creating their
own, and to reflect on their own writing before posting it. This creates a certain
mindfulness and reflection among students. In addition, many researchers have
noted the way participants in online discussion perceive the social presence of their
colleagues, creating feelings of community. Such findings have led educators to
conclude that asynchronous online discussion is a particularly rich vehicle for
supporting collaborative learning.”

Some drawbacks
The reduction in constraints on time and place can result in pressures on students
and faculty to read and participate in online forums (see, for example, Gabriel, 2004,
Hiltz and Wellman, 1997, and Wyatt, 2005). Peters and Hewitt (2009) carried out
formal interviews with 57 post-graduate students undertaking an online programme
and discovered that the volume of messages was their greatest frustration.
The lack of visual communication clues, “disembodied learning” as Dreyfus (2002)
put it, can cause conflict and anxiety among learners. Mixed patterns of participation
are common, with some learners dominating discussions and some inhibited from
contributing. The text-only format of traditional discussion boards can also inhibit
interaction and can disadvantage learners with poor writing skills.


Online writing
Online discussion boards are one instance of what has been variously described as
“Web 2.0 writing”, “online writing”, “digital writing” or “digital learning narratives”,
which can take various forms including online forums, blogs, wikis, social networks,
instant message logs, and virtual worlds. Many of the issues raised in this paper are
relevant to all forms of online writing. Most of the afore-mentioned benefits of online
discussion boards are, in fact, benefits of any form of (asynchronous) online writing.
This new medium provides new affordances – new ways of utilising the medium to
communicate and collaborate. For example, online writing makes visible such things
as co-operation, collaboration, and self-reflection; the learner’s thought processes
are also more apparent. The inclusion of multimedia (such as audio and video) is
straightforward. Referencing (hyperlinking) to related resources or information is
simple. The asynchronous nature of many online communications makes the time
and place of contributions more flexible, and provides more “wait time” (see above)
to improve opportunities for reflective writing. The writing may have an audience far
beyond the walls of the university (perhaps a national or global audience).
Authenticity can be improved by tackling real-world issues and seeking feedback
from peers and experts across the world. The potential for producing authentic, co-
constructed, interconnected, continuously improved, media-rich information objects
of national or international interest is unique.
These new affordances have significant implications for assessment. They provide an
opportunity to assess skills that were previously considered difficult, or impossible, to
assess using traditional approaches.

Assessment of online learning


The word “assessment” derives from the Latin assidere, meaning “to sit
beside”. At its most basic level, assessment can be defined as “observing learning”
(Glossary of assessment terms, 2002). For the purposes of this paper, a more
detailed definition will be used:
“Assessment is an on-going process aimed at understanding and improving
student learning. It involves making our expectations explicit and public; setting
appropriate criteria and high standards for learning quality; systematically
gathering, analyzing, and interpreting evidence to determine how well performance
matches those expectations and standards; and using the resulting information to
document, explain, and improve performance.” (Angelo, 1995)

Characteristics of assessment
Assessment is traditionally viewed from two perspectives: validity and reliability.[1]
For the purposes of this paper, a third characteristic will be considered: fidelity.
Sadler (2009) proposed this characteristic of assessment, which is explained below.

[1] “Fairness” (the equality of an assessment) and “practicality” (the feasibility of an
assessment) are sometimes also considered characteristics of assessment.


Fidelity
Sadler (2009) expresses two concerns about contemporary assessment practice. The
first relates to continuous assessment, which he criticises for assessing students’
learning while they are still learning, rather than at the end of the learning period, when
learning may be better internalized. As Sadler puts it: “Assessment in which early
understandings are assessed, recorded and counted, misrepresents the level of
achievement reached at the end of the course.”
His second criticism relates to the match between assessment and learning
objectives. He calls this “fidelity”, which he defines as: “Fidelity is the extent to which
elements that contribute to a course grade are correctly identified as academic
achievement”.
He points out that, in many instances, what is rewarded is not, in fact, real
achievement as defined by the course objectives:
“Many academics cannot help but be impressed by the prodigious time and
persistence that some students apparently invest in producing responses to an
assessment task. However, effort is clearly an input variable and therefore does not
fall within the definition of academic achievement”.

Current practice in the assessment of online learning


There is growing concern about the relevance of current assessment practices to the
present generation of learners. There appears to be a consensus in the academic
community that there is a need to modernise assessment practices to embrace some
of the more contemporary skills, such as collaboration, and take account of new
media. But this, itself, will present challenges:
“Most of its advocates [of writing using Web 2.0 tools] offer no guidance on how to
conduct assessment that comes to grips with its unique features, its difference
from previous forms of student writing, and staff marking or its academic
administration. […] The few extant examples appear to encourage and assess
superficial learning, and to gloss over the assessment opportunities and
implications of Web 2.0’s distinguishing features.” (Gray, Waycott, et al, 2009).
This criticism is linked to the “new affordances” of online writing, previously
described. There is a sense that traditional assessment is being applied to the new
learning environment, neglecting the unique characteristics and opportunities
presented by this environment.

Marking rubrics
The definition of assessment used in this paper includes “making expectations
explicit”, “setting appropriate criteria” and “systematically interpreting evidence”. The
importance of clear and transparent marking criteria is well known (see, for example,
Hounsell et al., 2008).
The use of a marking rubric provides a transparent and objective way of assessing
learning (Pickett and Dodge, 2007). Rubrics typically contain specific performance
criteria, each assigned one or more marks, with an associated narrative to aid
marking.
Hazari (2004) identifies two ways of marking: analytic marking and holistic
marking. Analytic marking involves assigning marks to each criterion; holistic
marking is a more impressionistic approach, assigning marks as a whole, without
scoring individual criteria. Analytical criteria are normally applied to each message
and each message is awarded a small number of points. The sum of these scores is
the learner’s overall mark. Holistic criteria are applied to groups of messages.
A hybrid approach, one which combines analytic and holistic marking, can also be
used. In this scheme, two sets of criteria are produced: one for analytic marking and
one for holistic marking. The analytical criteria score each message; the holistic
criteria score groups of messages, such as a learner’s weekly contributions or their
contributions for an entire topic or, even, their contributions over the whole course.
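
To make the hybrid scheme concrete, the following minimal sketch computes a
learner’s overall discussion mark from per-message analytic scores and a single
holistic score. It is written in Python; the 70/30 weighting, the per-message
maximum and all names are illustrative assumptions, not values drawn from any of
the rubrics reviewed.

from dataclasses import dataclass

# Hypothetical hybrid marking: analytic points per message plus one
# holistic score for the learner's contributions as a whole. The
# weightings and maxima below are illustrative only.

@dataclass
class Message:
    text: str
    analytic_score: int   # e.g. 0-3 points awarded against the criteria

MAX_ANALYTIC_PER_MESSAGE = 3   # assumed per-message maximum
HOLISTIC_MAX = 10              # assumed maximum for the holistic judgement
ANALYTIC_WEIGHT = 0.7          # assumed 70/30 split between the two parts
HOLISTIC_WEIGHT = 0.3

def hybrid_mark(messages: list[Message], holistic_score: int) -> float:
    """Return a percentage mark combining analytic and holistic scoring."""
    if messages:
        earned = sum(m.analytic_score for m in messages)
        analytic_pct = earned / (MAX_ANALYTIC_PER_MESSAGE * len(messages))
    else:
        analytic_pct = 0.0
    holistic_pct = holistic_score / HOLISTIC_MAX
    return 100 * (ANALYTIC_WEIGHT * analytic_pct + HOLISTIC_WEIGHT * holistic_pct)

posts = [Message("Initial response citing two sources", 3),
         Message("Short agreement with a peer", 1)]
print(f"{hybrid_mark(posts, holistic_score=8):.1f}%")   # prints 70.7%

Under these assumptions, the holistic component could equally be applied per week
or per topic, as described above, by calling hybrid_mark once per period and
averaging the results.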

Research question
The research question was:
What is the current practice in the construction of rubrics to assess online
discussions?

Significance of the study


Online writing is increasingly common throughout society but particularly within
education. Online forums have been used within Higher Education for many years
and they continue to be a popular means of supporting learning. But even in this
long-established environment, the assessment of learners’ contributions is not well
researched.
While the specific recommendations in this paper relate to online discussion boards,
many of them could be applied to any form of asynchronous online writing, which is
a medium set to grow significantly in the coming years.
It is hoped that the findings and recommendations will stimulate debate in the field of
assessment of online forums (and, more generally, the assessment of online writing)
and, by so doing, improve professional practice in this increasingly used, and
assessed, learning space.


Research methodology and data collection methods


The research design involved an exploratory approach to the academic literature
relating to the assessment of learner contributions to online forums.
A large number of peer-reviewed, published papers were studied to identify best
practice in the assessment of online discussions and to locate and extract the rubrics
contained within them. In total, 20 marking schemes were identified and analysed to
identify patterns in their contents. The rubrics were analysed from three
perspectives:
1. the type (holistic or analytic)
2. the scoring (grades or marks)
3. the types of criteria (see below).

Classifying the criteria used in rubrics


To aid the analysis of the rubrics, it was necessary to categorise the criteria used
within the rubrics. A three-stage process was used to classify the criteria.

Figure 1: Process used to classify criteria

The first stage was to list each criterion. Within the 20 rubrics extracted from the
literature, 128 criteria were identified. Many of these criteria expressed the same
thing in slightly different ways. The second stage was therefore to group together
criteria that essentially related to the same standard. This
reduced the 128 criteria to 33 unique criteria. For example, every criterion that
related to the appropriateness of the message was added to the “relevancy” group,
however it was expressed. The final stage in the analysis was to take these 33
criteria and categorise them by their type. Ten categories were used to do this.
These categories are listed in Table 1.

Code  Category of criterion
1     Relating to quality of academic discourse
2     Relating to cognition
3     Relating to critical thinking
4     Relating to learning objectives
5     Relating to participation
6     Relating to etiquette
7     Relating to quality of writing
8     Relating to presentation
9     Relating to attitude
10    Relating to digital writing

Table 1: Coding system used to categorise the criteria in rubrics


Each rubric typically incorporated several types of criterion. For example, a single
rubric might include criteria relating to quality of academic discourse (code 1),
criteria relating to student participation (code 5), and criteria relating to the quality of
writing (code 7).
Coding each rubric in this way permitted the creation of a frequency table to
illustrate the most common characteristics of rubrics. The results of this analysis are
provided in the Findings section.
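
The coding process itself can be expressed compactly. The sketch below, in Python,
illustrates stages two and three: raw criteria are first mapped to groups, and the
groups are then mapped to the ten category codes, from which the frequency table
is built. The sample criterion wordings and mappings are hypothetical, although the
category codes follow Table 1.

from collections import Counter

# Stage 2: map each raw criterion, as worded in a rubric, to a group.
# These sample wordings are invented for illustration.
GROUP_OF = {
    "posts are relevant to the topic": "relevancy",
    "contributions relate to course content": "relevancy",
    "posts at least twice per week": "frequency",
    "shows respect for other students": "show respect",
}

# Stage 3: map each group to one of the ten category codes in Table 1.
CATEGORY_OF = {
    "relevancy": 4,      # relating to learning objectives
    "frequency": 5,      # relating to participation
    "show respect": 6,   # relating to etiquette
}

def frequency_table(raw_criteria: list[str]) -> Counter:
    """Count how often each category code occurs across the rubrics."""
    groups = [GROUP_OF[c] for c in raw_criteria]           # stage 2
    return Counter(CATEGORY_OF[g] for g in groups)         # stage 3

sample = ["posts are relevant to the topic",
          "posts at least twice per week",
          "contributions relate to course content"]
print(frequency_table(sample))   # Counter({4: 2, 5: 1})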

Strengths and weaknesses of the research design


The quantitative analysis of the rubrics for assessing online discussions was done by
extracting rubrics from peer-reviewed, published papers. This methodology would be
expected to produce higher quality results than those produced from a random
sample of marking schemes in actual use within educational institutions, which are
not formally peer-reviewed.
This paper’s findings are not generalizable. The sample was small (20) and the
method of sampling was not statistically robust (they were simply the rubrics that
appeared in the literature review). Consequently, no claims are made about the
validity or reliability of the findings. However, a clear pattern emerged from the
rubrics, giving some confidence that a more systematic approach may not have
produced a significantly different outcome. The pattern was established early in the
review and was reinforced as the sample increased.


Findings
During the literature review, 20 rubrics were identified and analysed to identify their
characteristics, particularly their similarities and differences. The first characteristic
examined was the type of marking. Thirteen of the rubrics (65%) proposed holistic
marking and seven (35%) proposed analytical marking.[2] The scoring systems were
also measured: five rubrics (25%) proposed grading student contributions (such as
“A/B/C” or “Average/Above average”) and 15 (75%) proposed a points system.

Type        n      Scoring    n
Holistic   13      Grades     5
Analytic    7      Marks     15

Table 2: Analysis of rubric types and scoring systems

Each rubric was then studied to identify its components (often, but not always,
expressed as criteria). A single rubric would typically contain several criteria, relating
to various aspects of the student’s contribution, such as its originality, relevance and
word length. In total, the 20 rubrics reviewed contained 128 statements, giving a
mean of 6.4 criteria per rubric.
However, their expression varied considerably from rubric to rubric. Some were
expressed as criteria, some were written as statements, and some were simply
examples of contribution at a particular performance point. There was little
consistency in terminology, with each rubric expressing similar performance using
different language. For example, the quality of academic discourse was expressed in
terms of “depth of insight”, “intellectual challenge”, “quality of knowledge”, and
“much thought”. Subjective language was also common; words such as “good”,
“substantive” and “regular” (without further definition) were often used. The
language used varied in formality. While the majority of rubrics used formal English,
some used a conversational style of language including such phrases as “Your
messages didn’t leave me jazzed”.
To overcome the problem of language, criteria relating to the same broad domain
were grouped together, reducing the 128 criteria to 33 unique criteria. For example, every
criterion relating to the appropriateness (to course contents) of the message,
however it was expressed, was included in the “relevancy” group.

Table 3 shows the frequency of each criterion once the original criteria had been
grouped in this way.

[2] In fact, none of the rubrics proposed holistic marking in the strict sense of that term.
The rubrics deemed holistic in the table were those suggesting that marks could be
applied to more than a single message.


No.  Criterion                               Type  Frequency
1    Quantity                                   5      12
2    Relevancy                                  4      10
3    Frequency                                  5       9
4    Referenced to reading or research          1       7
5    Positive attitude                          9       7
6    Quality of writing                         7       7
7    Originality                                1       6
8    Initiate or stimulate discussion           5       6
9    Related to personal experience             1       5
10   Timing                                     5       5
11   Support for ideas is provided              1       5
12   Show respect                               6       5
13   Advance discussion                         1       4
14   Accuracy                                   1       4
15   Questions or challenges positions          3       4
16   Quality of research                        1       3
17   Depth of response                          2       3
18   Demonstrates critical thinking             3       3
19   Co-operation/collaboration                 6       3
20   Demonstrates analysis                      2       2
21   Number of views or log-ins                 5       2
22   Sociability                                6       2
23   Presentation                               8       2
24   Personal opinion without support           1       2
25   Ask questions                              5       2
26   Quality of explanation                     1       1
27   Identify important issues                  1       1
28   Demonstrates creativity                    2       1
29   Proper citation                            7       1
30   Encourage participation                    6       1
31   Receptive to different views               6       1
32   Suggest resources                         10       1
33   Constructive comments                      6       1

Table 3: Frequency of criteria in rubrics

The table illustrates that criteria relating to the quantity of messages posted
appeared 12 times (criterion 1), criteria relating to the relevancy of the message
appeared 10 times (criterion 2), and criteria about “respect” appeared five times
(criterion 12). The top five criteria were:
1. Quantity
2. Relevancy
3. Frequency
4. Referenced to reading or research
5. Attitude


Each of these 33 groupings was then assigned to a category. Ten categories were
created, as shown in Table 4, and their frequencies computed.

Code  Category of criterion                       No. of rubrics  Frequency
1     Relating to quality of academic discourse         16            38
2     Relating to cognition                               9             6
3     Relating to critical thinking                       8             8
4     Relating to learning objectives                     8            10
5     Relating to participation                          18            36
6     Relating to etiquette                               8            12
7     Relating to quality of writing                      7             8
8     Relating to presentation                            2             2
9     Relating to attitude                                7             7
10    Relating to digital writing                         1             1

Table 4: Rubrics by category

This table illustrates that 16 (of the 20) rubrics included at least one criterion relating
to the quality of academic discourse, and that criteria relating to this domain
appeared 38 times (among the 128 criteria in total). It is interesting to note the
almost complete absence of criteria related to “digital writing” (code 10).
From this table, it can be seen that the most commonly occurring criteria related to:[3]
1. participation
2. academic discourse
3. etiquette
4. learning objectives
5. critical thinking.

Fidelity of the rubrics


The criteria were examined to gauge their fidelity. It can be seen from the above list
that three of the top five relate to academic ability; two relate to non-academic
competencies, what Sadler would call “non-achievements” (see the Fidelity section
above). In fact, of
the 10 categories in Table 4, five broadly relate to academic standards and five relate
to such things as the number of messages posted, how often students visited the
forum or the attitude of the student (such as her “being positive”). As Sadler points
out, these are not outcomes, they are inputs, and are contaminants when used for
grading.
Eight rubrics made direct or indirect reference to the associated learning objectives.[4]
However, 12 rubrics made no reference to the learning outcomes whatsoever. In
theory, student contributions could have been about irrelevant subject matter and
still been awarded full marks.

[3] This list was sorted by the number of rubrics that included the type of criterion;
where this was equal, the list was sorted on the frequency of occurrence.
[4] In fact, only one rubric made explicit reference to “learning objectives” (or an
equivalent term). Most referred to “relevancy”, “appropriateness” or similar.


Eighteen rubrics rewarded participation, which was the single most common
category. However, unless the course objectives included outcomes relating to
participation then providing marks for this (which would contribute to the course
grade) would be incorrect. Four rubrics did not include any criteria relating to
academic discourse whatsoever, preferring to focus solely on participation or
etiquette or other non-academic criteria.
It is instructive to take a complete rubric and gauge its fidelity. In Online and
traditional assessments: What is the difference? (2000), Rovai proposes 14 criteria for
the assessment of online discussions, aligned against three grades (A-C). Of these
criteria, 10 relate to the number of messages posted or the style of the message
(“caring”, “compassionate”, “rude”) or other non-achievement variables. Four criteria
relate to academic standards (including one that relates to the standard of spelling
and grammar).
A similar picture emerges from other rubrics. In his Strategy for Assessment of
Online Course Discussions, Hazari (2004) proposes 20 criteria (for awarding points
ranging from 1 to 5). Of these criteria, 10 relate to non-achievements such as
“Demonstrates leadership in discussions” and “Posted regularly throughout the
week”. Some criteria were crude: “Time between posting indicated student had read
and considered substantial number of student postings before responding” is clearly
not a valid way of measuring how much prior reading a student had completed
before replying.


Conclusions & Recommendations


This final section draws conclusions from the findings and makes recommendations
relating to the assessment of learner contributions to online forums.

Conclusions from the study


While there is a consensus that the assessment of learner contributions to online
forums should be done using a rubric, current practice in the construction of rubrics
is varied. An important finding from this research is that the majority of rubrics
for assessing online discussions exhibit low fidelity. There appears to be
confusion about what faculty is seeking to assess. Many of the assessment criteria
relate to knowledge, skills and attitudes that have little, or nothing, to do with
learning objectives. Many rubrics focus on participation rather than achievement, and
inputs (such as effort) rather than outputs (such as understanding). The criteria
often relate to engagement, interaction and collaboration, which, while undoubtedly
linked to learning effectiveness, are proxies for it and are rarely stated as learning
objectives. They exhibit low fidelity, as defined by Sadler. Many do not appear to be
valid assessments of the course objectives.
Sadler’s other criticism, that continuous assessment awards
marks while students are still learning, was also evident. All of the rubrics awarded
marks on an on-going basis; most awarded marks for each message posted by the
student; none took account of the student’s final level of understanding.
While many papers provided advice about what should be included in a “good”
rubric, these characteristics tended to vary from author to author. There was little
consistency in what should be included in a rubric, or in the language used to
express these characteristics.
None of the rubrics took account of the unique “affordances” made
possible by online writing. In fact, several appeared to presume that students
would write a small number of essay-like, set-piece messages rather than engage in
the on-going knowledge-building that online discussions facilitate.
Current rubrics exhibit low fidelity and questionable validity due to the poor match
between the rubrics and learning objectives. Part of the reason for this may, in fact,
be that the learning objectives themselves are wrong. It may be that teachers are
consciously seeking to reward participation and collaboration skills. If this is true then
the learning objectives should be changed to encompass these skills. The present
situation – of rewarding unstated skills in an indirect manner – is not a valid
approach to assessment.

Characteristics of a good rubric


As Rovai wrote: “The general assessment principles do not change in the online
environment.” (Rovai, 2000). From this, and the findings and conclusions of this
research, the following characteristics should be evident in any rubric designed
to assess learner contributions to online discussions.


No.  Characteristic
1    The rubric should be expressed as criteria that exemplify performance or
     cognition across various levels.
2    The rubric should employ holistic and analytical marking; the holistic marks, at
     least in part, should reward the learner’s final level of competency.
3    The criteria should be valid measures of the course objectives.
4    The criteria should exhibit high levels of fidelity and not reward
     non-achievements such as effort or participation.
5    The criteria should be expressed clearly and simply to maximise reliability.
6    The rubric should use terminology consistently.
7    The criteria should be free of bias.
8    The criteria should recognise and reward the unique affordances of online
     writing.

Table 5: Characteristics of a good rubric

Stages in developing a rubric


The stages involved in developing a rubric were outlined by Swan, Shen and Hiltz
(2006). Their suggested approach does not make explicit reference to course
objectives and could produce rubrics with low fidelity.
Their approach has been modified to directly link rubrics to course objectives (see
Figure 3). There is an unambiguous focus on course objectives in this approach, and
the fidelity of a rubric developed using this modified procedure is likely to be higher.

Figure 3: Stages in developing a rubric


Recommendations
These conclusions lead to the following recommendations.
1. Assess learner contributions to online discussions.
2. Use a rubric to carry out the assessment.
3. Design rubrics to have high fidelity.
4. Incorporate the unique affordances of online writing into rubrics.
Each recommendation is now explained.

Recommendation 1: Assess learner contributions to online discussions


The value of assigning marks to online discussions has been established. Research
confirms that assessment increases participation and improves the academic quality
of online discussions. There appears to be conflicting research relating to the
proportion of marks that should be awarded: the weightings discovered during the
review of rubrics varied from 5% to 25% of overall course marks, with 10% being
typical.
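
To show how such a weighting feeds the course grade, the following one-line Python
sketch combines a discussion mark with the rest of the course assessment. The 10%
weighting and the sample marks are illustrative assumptions only.

# A 10% weighting means the discussion mark contributes one tenth of
# the course grade; the remaining 90% comes from other assessment.
DISCUSSION_WEIGHT = 0.10

def course_grade(discussion_pct: float, other_pct: float) -> float:
    """Combine a discussion mark with the rest of the course assessment."""
    return (DISCUSSION_WEIGHT * discussion_pct
            + (1 - DISCUSSION_WEIGHT) * other_pct)

print(course_grade(discussion_pct=70.7, other_pct=62.0))   # prints 62.87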

Recommendation 2: Use a rubric to carry out the assessment


Assessment of online discussions should be carried out using a rubric. The
characteristics of a “good” rubric are described in Table 5. The use of a rubric should
reduce the risk of invalid, unreliable or unfair assessment.
The process to develop a rubric is illustrated in Figure 3 above. It is
recommended that this process be followed since it is likely to produce assessments
with improved validity and fidelity.
The rubric should be supplied to students at the start of the course. Not only does
this improve transparency, but research has shown that providing learners with the
criteria to be used to assess their contributions to discussion boards increases their
participation and fosters deeper learning (Swan, Schenker, Arnold, & Kuo, 2007).

Recommendation 3: Design rubrics to have high fidelity


Rubrics must clearly align with the course objectives. In particular, rubrics should not
reward non-achievements, such as effort or participation.
The characteristics of a good rubric are given in Table 5. Writers should seek to
encompass as many of these characteristics as possible.
Rubrics should be written using the four-stage process illustrated in Figure 3. This
will improve the validity and fidelity of the assessment.

Recommendation 4: Incorporate the unique affordances of online writing into rubrics

Notwithstanding recommendation 3 (that rubrics must match learning objectives),
rubrics should embrace the unique affordances provided by the online environment.
These affordances were described in the Online writing section above. This means
considering the medium as
well as the message. It means requiring students to take advantage of the medium
when articulating the message, and rewarding this through the rubric.
This does not mean that learning objectives should be invented. Nor does it mean
rewarding participation or effort or other non-achievements. It means that learning
objectives should be interpreted in such a way that their assessment embraces the
additional affordances made possible by online writing.

In Transforming Education: Assessing and Teaching 21st Century Skills (Microsoft,
2008), it was stated that there was a “need to develop new models of scoring
students’ processes and strategies during assessments”. This paper provides
evidence for that view, having discovered issues with the current way that rubrics
are constructed and used.


References
Angelo, T. A. (1995). Classroom assessment for critical thinking. Teaching of
Psychology, 22(1), 6-7. doi:10.1207/s15328023top2201_1.
Berliner, D. C. (1987). Simple views of effective teaching and a simple theory of
classroom instruction. In D. C. Berliner & B. Rosenshine (Eds.), Talks to teachers
(pp. 93-110). New York: Random House.
Dreyfus, H. L. (2002). Intelligence without representation. Phenomenology and the
Cognitive Sciences, 1, 367-383.
Gabriel, M. A. (2004). Learning Together: Exploring group interactions online. Journal
of Distance Education, 19(1), 54-72.
Glossary of assessment terms (2002). New Horizons for Learning, Assessment
terminology: A glossary of useful terms. Retrieved September 2009 from
www.newhorizons.org.
Hazari, S. (2004). Strategy for assessment of online course discussions. Journal of
Information Systems Education, 15(4), 349-355.
Hiltz, S. R., & Wellman, B. (1997). Asynchronous learning networks as a virtual
classroom. Communications of the ACM, 40(9), 44-49.
Hoag, A. & Baldwin, T. (2000). Using case method and experts in inter-university
electronic learning teams. Educational Technology & Society, 3 (3), 337-348.
Hounsell, D., McCune, V., Hounsell, J. & Litjens, J. (2008). The quality of guidance
and feedback to students. Higher Education Research & Development, 27(1), 55-
67.
Jonassen, D., Davison, M., Collins, M., et al. (1995). Constructivism and computer-
mediated communication in distance education. The American Journal of Distance
Education, 9(2), 7-26. Available online at
http://www.uni-oldenburg.de/zef/cde/media/readings/jonassen95.pdf.
Microsoft (2008). Transforming Education: Assessing and Teaching 21st Century
Skills. Retrieved 20 March 2009 from
http://download.microsoft.com/download/6/E/9/6E9A7CA7-0DC4-4823-993E-A54D18C19F2E/Transformative%20Assessment.pdf.
Newman, G., Webb, B. & Cochrane, C. (1995). A content analysis method to
measure critical thinking in face-to-face and computer supported group learning.
Interpersonal Computing and Technology, 3(2), 56-77. Available online at
http://www.helsinki.fi/science/optek/1995/n2/newman.txt.
Peters, V. L., & Hewitt, J. (2009). An investigation of student practices in
asynchronous computer conferencing courses. Computers & Education.
doi:10.1016/j.compedu.2009.09.030.
Pickett, N., & Dodge, B. (2007). Rubrics for Web lessons. Available online at
http://webquest.sdsu.edu/rubrics/weblessons.htm.
Riel, M. (1995). Cross classroom collaboration in global learning circles. In: Star, S.
(ed.), The cultures of computing. Oxford, UK: Blackwell, pp. 219–242.


Rovai, A. P. (2000). Online and traditional assessments: What is the difference?
The Internet and Higher Education, 3(3), 141-151.
Sadler, D. (2009). Grade integrity and the representation of academic
achievement. Studies in Higher Education, 34(7), 807-826.
Swan, K., Schenker, J., Arnold, S., & Kuo, C. (2007). Shaping online discussion:
Assessment matters. E-mentor, 1(18). Retrieved March 6, 2008, from
http://ementor.edu.pl/_xml/wydania/18/390.pdf.
Swan, K., Shen, J., & Hiltz, S. R. (2006). Assessment and collaboration in online
learning. Journal of Asynchronous Learning Networks, 10(1), 45-62.
Waycott, J., Bennett, S., Kennedy, G., Dalgarno, B., & Gray, K. (2010). Digital
divides? Student and staff perceptions of information and communication
technologies. Computers & Education, 54(4), 1202-1211.
doi:10.1016/j.compedu.2009.11.006.
Wyatt, G. (2005). Satisfaction, academic rigor and interaction: Perceptions of online
instruction. Education, 125, 460-468.
