
Evaluation of CAL software for higher education: a task for three

experts


by

Jeff James*




Abstract

At present, a great deal of work is taking place in universities to develop
technological teaching aids. One of the most common of these learning tools is the
computer-driven interactive multimedia program. As with any teaching method or
material, evaluation of the product is necessary, especially in the current climate of
quality assurance.

This paper describes in detail three main areas to consider in educational software
evaluation: content, educational outcomes, and operation. It emphasizes that
evaluation within each of these areas is best performed by professionals
with different expertise. A software evaluation instrument is included.
















*Senior Officer, Educational Development Unit, Hong Kong Polytechnic University
etjjames@polyu.edu.hk

INTRODUCTION

At present, a great deal of work is taking place in universities to develop
technological teaching aids. One of the most common of these learning tools is the
computer-driven interactive multimedia program. As with any teaching method or
material, evaluation of the product is necessary, especially in the current climate of
quality assurance.

This paper describes in detail three main areas to consider in educational software
evaluation: content, educational outcomes, and operation. It is written with the
assumption that experts in each of these three areas (subject specialist, education
specialist, and human-factors expert) would jointly form an evaluation team.

Appended to this paper is an instrument which may be useful in a variety of ways.
It could be used as a guide for developers when designing the software
(Mauldin (1996b) considers the formative evaluation of computer-based multimedia
programs). It could also be used by observers evaluating a prototype in, for
example, a works-in-progress forum. However, the main purpose of the instrument
is to serve as a summative tool for evaluating the final product. Furthermore, if the
instrument is used in a combined effort by three evaluators, academic credit can
be given for the development of computer-assisted learning (CAL) tools, with the
evaluators performing a function similar to that of referees assessing an article for
a refereed journal.


EVALUATION TEAM

CAL materials have a purpose far greater than simply providing attractive
multimedia. They are intended to aid learning. A typical development team would
consist of a content expert, an educationalist, a graphic artist, and a programmer.
Few professionals have the experience and talent to perform the development
functions of more than one member of this team. Likewise, a team can join together
to provide a more comprehensive, valid evaluation of interactive multimedia
learning software. A content expert can ensure that the software content is accurate
and up-to-date. An educationalist can inspect a program's educational environment
for maximum learning potential. A third evaluator, knowledgeable in user-interface
design, can assess an interactive learning tool's look and feel with the view that
there should be few design and navigation barriers to learning.

Before running the software, evaluators need to have a basic profile of the intended
users. This description would include the age and experience of the learners.
Consider the following two descriptions of learners:

A: First-year university mathematics students
B: Fourth-grade children studying fractions

Based upon the target group, the content expert could judge the relevance of the
program's mathematical language and the background required of the students. For
example, it may be assumed that group A has a fundamental understanding of
algebra, whereas group B need only have a fundamental understanding of value
and ordering.

The educationalist may have in mind the vast difference in maturity between these
two groups, which affects their ability to take responsibility for their own learning
and their need for guidance. Finally, the evaluator of the software's user-interface
design may consider factors such as the amount of typing required and the
appropriateness of audio content for the two groups.

Following are more detailed descriptions of specific sub-points within each of the
three main evaluation categories: content, educational outcomes, and operation
(Mauldin (1996a) offers a different system of categories). In considering these
topics, the reader can better appreciate the roles performed by the three evaluation
team members.


CONTENT

A content expert needs to make a judgement about the subject material within the
CAL program. Questions which should be considered include "Is the content
appropriate for the target group?" and "Is the material accurate and up-to-date?"
Specific learning objectives should be identifiable, for example: "Following the use
of this software, students should be able to list three causes of volcanic eruption
and explain the chemical reactions which occur with each cause."

The material may be sequenced in several ways. It should be logically ordered so
that pre-requisite information comes first. Content also should be sequenced by
difficulty. For example, simpler concepts may give the learner confidence before
tackling more complex ones. Another sequencing strategy is by time, such as
chronological ordering.

The language used is another very important aspect of the program. Are there
inappropriate spelling conventions (e.g. American vs. British) for the intended
users? Is the level of language matched to the students (e.g. is the vocabulary too
complex for those whose second language is English)? Another important
language issue relates to gender: is the software gender-biased (e.g. does it always
use the masculine form)?

Overall, in order to maintain the software user's interest, the material should also
be varied and challenging. The content expert needs to have an intimate
understanding of both the material and the user in order to make judgements about
this aspect of the content.

Another content consideration is any documentation associated with the package.
The evaluator needs to determine whether the documentation is useful, or even
necessary, to both the user and the teacher. There are different categories of
documentation; these can include a user's manual and a technical guide. There can
be abridged and unabridged versions, and even a quick-reference card and/or a
function-key strip. Often extra support material is included, such as
maps and booklets. The content expert can judge if these materials are appropriate
and useful.


EDUCATIONAL OUTCOMES

The education specialist will attempt to assess the learning environment provided to
the program's user. While typically this is accomplished by running the program,
conclusions will be more valid if observations are made of the user and perhaps
even data are collected (via questionnaires, interviews, etc.). The primary judgement
to be made is "Will students learn from using this software?" To this end, many
points should be considered.

The program should be motivating for the learner. There are various ways this can
be accomplished: the program may be in game format, and it may incorporate
pictures, animations, video clips, and sound. It can be interesting, challenging, and
perhaps humorous.

A major factor to consider is whether the user or the computer is in control while
the program is running. On the one hand, the program may be designed to be little
more than an electronic book. In this case, the user may have only forward and
backward buttons to press to go to the next or previous pages. The user has little
choice and essentially is under the control of the program.

Alternatively, in a World Wide Web browsing environment, for example, the user
has complete control over information access. This may be entirely appropriate for
some learners, but it also requires a more mature, self-disciplined student.

Software which both guides the user and offers alternatives places the learner in a
shared-control environment, which in the best sense is a learning partnership. The
key in this situation is in the dynamics of interaction. In the most basic sense, the
computer presents information, asks the user questions and then makes intelligent
responses.

The evaluator attempts to assess the interaction taking place. For example, is
positive reinforcement friendly and helpful? Software which gives the user a "Well
done!" message for every correct answer lacks imagination and can become boring
compared with a program which responds with "That's right, (a) is correct
because of the chemical bonding which takes place. Well done!"

The potential exists for negative reinforcement to be particularly cruel. When
running a program, the evaluator should enter incorrect answers as well as correct
ones in order to assess the program's negative reinforcement. Software which
responds "Wrong!" to each incorrect answer punishes the user and offers little
encouragement or help. A program which responds with "You have the correct
digits, but you should check your decimal point" recognises that many wrong
answers contain some degree of correctness within them.
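
To make this concrete, the following minimal sketch (in Python, with hypothetical
function names not taken from the paper or any particular package) shows the kind
of feedback logic an evaluator might hope to find behind such messages: a
correct-answer response that names the answer, and a response to a wrong numeric
answer that credits its partially correct part (here, correct digits with a misplaced
decimal point) instead of simply printing "Wrong!".

    def digits(x):
        # Digits of a number with the decimal point and outer zeros removed,
        # so 3.45 and 34.5 both compare as the digit string "345".
        return str(abs(x)).replace(".", "").strip("0")

    def feedback(answer, correct):
        """Return a reinforcement message for a numeric answer."""
        if answer == correct:
            return "That's right, {} is correct. Well done!".format(correct)
        if answer != 0 and digits(answer) == digits(correct):
            # Right digits, misplaced decimal point: acknowledge the partial correctness.
            return "You have the correct digits, but you should check your decimal point."
        return "Not quite right yet. Have another try."

    print(feedback(34.5, 34.5))   # praise
    print(feedback(3.45, 34.5))   # partial credit for the right digits
    print(feedback(12.0, 34.5))   # encouragement rather than punishment

A program built along these lines never merely punishes the user; even its negative
reinforcement points the learner towards the source of the error.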

Programs which involve a wide range of high-level skills and concepts are more
powerful than those which only require low-level skills such as memorisation.
Software which involves the learner in analysis, synthesis, decision-making, and
discovery provides such an environment. For example, a computer program which
incorporates a plant-growth simulation and which encourages the user to
manipulate variables such as amounts of sunlight, water, and chemicals in order to
answer questions regarding resultant growth patterns, gives the learner the
opportunity to discover facts about plant growth.
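
The plant-growth example can be sketched in a few lines. The model below is purely
illustrative (the numbers and function names are invented for this sketch, not taken
from the paper or any real package); its point is that the learner, rather than the
program, chooses the inputs and then interprets the growth figures that come back.

    def weekly_growth_cm(sunlight_hours, water_ml, fertiliser_g):
        """Toy growth model: each input helps up to an ideal amount, then harms."""
        def benefit(amount, ideal):
            # Full benefit at the ideal amount, falling off linearly on either side.
            return max(0.0, 1.0 - abs(amount - ideal) / ideal)
        score = (benefit(sunlight_hours, 8) + benefit(water_ml, 250)
                 + benefit(fertiliser_g, 5)) / 3
        return round(10 * score, 1)   # centimetres of growth per week

    # The learner experiments to answer a question such as
    # "What happens when the plant gets too much water?"
    for water in (100, 250, 500, 1000):
        print(water, "ml of water ->", weekly_growth_cm(8, water, 5), "cm/week")

Discovery happens because the program withholds the rule: the learner must infer
from repeated trials that growth peaks at a moderate watering level and falls away
on either side.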

Reeves (1993) has designed a whole software evaluation instrument around the
degree of high-level activities incorporated in educational software. One example
of a point the evaluator should consider in Reeves' instrument is the amount of
construction undertaken by the learner (as contrasted with instruction delivered by
the computer).

OPERATION

The specialist evaluating the operation of a program needs to consider a great
number of factors which contribute to making the program easy to use. Certainly,
poorly presented and/or difficult-to-navigate software will inhibit the learner from
achieving the learning outcomes desired by the program's designers.

Basically, operation can be classified into two general categories: visual design and
navigation.

Visual Design

Often it is easy to tell if a graphics designer was involved in the basic screen design
of the program. The general format of the screen layout should be logically set out
with, for example, text and graphics windows. An obvious question which the
evaluator would ask is "Is the screen display clean, attractive, and informative?"
Hopefully, there will not be distracting animations ("visual fluff"). World Wide Web
pages often include distractions such as twirling balls which not only have little
purpose, but are a drain on computer resources and hinder response time.

There are simple rules for screen design, some of which the reader may recognise
from basic guidelines for overhead transparency design; these include using colours
sparingly (a maximum of four can help ensure that colours are used effectively).
Colour combinations for foreground and background should be chosen
thoughtfully. Pastel colours can be used for backgrounds which are visually
pleasing and which allow for maximum contrast with dark-coloured text. Examples
of pleasing combinations include black on gray, or dark brown on beige.

Navigation

The main question to be answered in evaluating this component of educational
software is "How easy is it to navigate?" Prompts (directions) should be clear.
Complex software may include on-line help, but well-designed software should not
require help to be given to the user.

Typically, modern software has a point-and-click interface. If any typing is
required, it should be minimal and data validation should be incorporated. Also,
software which is case-insensitive will be easier to run.
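
As an illustration of what minimal typing with data validation can look like, here is
a small sketch (hypothetical prompt and answers, not from the paper). The response
is checked before it is accepted, and the comparison is case-insensitive, so
"OXYGEN", "Oxygen", and "oxygen" are all treated alike.

    def ask(prompt, valid_answers):
        """Keep prompting until the user types one of the valid answers."""
        while True:
            reply = input(prompt).strip().lower()
            if reply in valid_answers:
                return reply
            print("Please answer with one of:", ", ".join(sorted(valid_answers)))

    gas = ask("Which gas do plants release during photosynthesis? ", {"oxygen", "o2"})

Validation of this kind prevents a stray space or an unexpected capital letter from
being marked as a wrong answer, which is exactly the sort of barrier to learning the
operation evaluator is looking for.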

The design of icons can add to the professional appearance of the software. Use
should be made of standard icon designs; for example, arrows or pointing hands
are pseudo-standards for Next/Previous page icons. Including text in addition to
icons ensures that the user knows the exact function of each icon. Furthermore, the
icons should be consistent in size and location.

Software should also be designed so that the user has seamless entry into the
program and also a graceful exit upon completion.


CONCLUSION

Designing and creating high-quality educational computer software is a complex
task which requires a variety of skills and experience. The evaluation of this
software is performed to assess the software's value for a group of intended
learners. It is important to inspect these educational tools to ensure that the content
is up-to-date and valid for the intended learners. Furthermore, the software should
be assessed for its educational value and its ease-of-use.

Appended to this paper is an evaluation instrument which can be used by an
expert team to assess a piece of software's adherence to well-established teaching
and computer program design principles. These principles include suitable,
accurate, and up-to-date content, and an educational environment in which the
student can comfortably participate in learning activities. The main focus of this
instrument is to evaluate the potential of the software to be used as a learning tool.

REFERENCES

Mauldin, M. (1996a), The Development of Computer-Based Multimedia: Is a
Rainforest the Same Place as a Jungle? In TechTrends, v41, n3, pp. 15-18.

Mauldin, M. (1996b), The Formative Evaluation of Computer-Based Multimedia
Programs. In Educational Technology, pp. 36-40, March-April.

Reeves, T. (1993), Evaluating technology-based learning. In Piskurich, G.M. (Ed.),
The ASTD handbook of instructional technology, pp. 15.1-15.32, McGraw-Hill,
New York.

Reeves, T. (1994), Evaluating what really matters in computer-based education. In
Wild, M., and Kirkpatrick, D. (Eds.), Computer education: New perspectives,
MASTEC, Edith Cowan University Press, Perth, Western Australia.

EVALUATION INSTRUMENT

Title of Software:

Intended Users:


Date:


Evaluator's name:                                  Signature:





===========================================================

Directions

Answer each question with Y for yes,
A for amendments needed, or
C for major changes needed.

Please write in the space provided any amendments or changes that are needed to make
the software a more useful learning tool.


CONTENT


Can specific, appropriate objectives be identified?
Y A C






Are the objectives appropriate to the level and nature of study?
Y A C






Is the content suitable for the intended users in terms of subject
material?
Y A C






Is the content suitable for the intended users in terms of degree of
difficulty?
Y A C






Is the material up-to-date?
Y A C






Content (continued)


Is the material accurate?
Y A C





Is the material arranged in a logical structure?
Y A C





Is any documentation/support material which is included
useful, necessary, and well presented?
Y A C





Additional comments















EDUCATIONAL OUTCOMES


Can specific, appropriate objectives be identified?
Y A C






Does it appear that the student will achieve the objectives by using this software?
Y A C







Is the program motivating?
Y A C






Does this software involve a wide range of the student's higher-level skills?
Y A C






Are the student's tasks appropriate?
Y A C





Educational Outcomes (continued)

Is feedback on performance given to the user/tutor?
Y A C






Is there appropriate reinforcement (positive, negative, partial)?
Y A C





Additional comments




















OPERATION


Are directions clear and prompts (and icons) appropriate?
Y A C






Is help available (by request or otherwise)?
Y A C






Is the screen format clear, attractive, and informative?
Y A C






Is the program bug-free (including spelling/grammar)?
Y A C






Is there data validation?
Y A C





Operation (continued)


Is there appropriate use of multimedia?
(Replay of sound/movies? Sound option?)
Y A C






Is aid given to the user for difficult/technical words?
Y A C





Is there any indication that the English language within the program
interferes with the student's learning?
Y A C






Additional comments












