doi:10.14355/jitae.2014.0302.02
Quality Development of Learning Objects: Comparison, Adaptation and Analysis of Learning Object Evaluation Frameworks for Online Courses: E-Learning Center of an Iranian University Case Study
Peyman Akhavan*1, Majid Feyz Arefi2
Department of Management and Soft Technologies, Malek Ashtar University of Technology, Tehran, Iran
*1akhavan@iust.ac.ir; 2majidfeyzarefi@gmail.com
Received 10 July 2013; Revised 23 December 2013; Accepted 4 March 2014; Published 27 May 2014
© 2014 Science and Engineering Publishing Company
Abstract
The purpose of the present study is to investigate and compare evaluation frameworks for learning objects and to derive the most important indices. After doing so, the frameworks are compared, matched and analyzed. Having identified the most important indices, the e-content of two lessons of M.Sc. courses in one of the aforesaid centers has been evaluated. In this research, we follow the analytic approach of a literature review to move from theoretical bases to practical application of the theory; then, by a two-round Delphi technique, important criteria and their significance were identified by experts. The purpose of this study is to show the important scales for the evaluation of learning objects that play the most important role in the e-content of e-lessons, and to use a case study for detecting weak points and improvable aspects. The findings reflect that most of the previous studies have focused on the quality of the objects and on the correspondence of e-learning standards with object formulation, and that there is not sufficient study of the strategic and planning aspects of learning, which have a crucial role in learning quality and in the effectiveness of the e-content of e-lessons. In addition, in these studies the classification of the scales is not based on specified aspects of learning object evaluation. The present study answers the following questions: Which factors affect the design and formulation of the quality of learning objects? Which aspects must one pay attention to for effective design of the e-content of e-lessons? Which scales of learning object evaluation are the most highly referred to in previous studies? Which aspects are improvable and challenging, not having been focused on enough?
Keywords
E-learning; E-content; Learning Object; Learning Object Evaluation; E-learning Standards; Instructional Design
Introduction
E-learning is a very functional term that has been introduced into the education field along with information technology, and e-learning has come to be considered a long-term plan with a huge investment in educational centers, especially in the universities of many countries (Triantafillou, et al. 2002; Kuo, 2012). Many universities and educational institutes around the world have been designed to provide e-learning from the beginning in order to respond to increasing educational demands. Betts reports that in many developed countries growth in applications to e-learning courses is many times higher than total higher education growth (Betts, 2009).
The essential capability of e-learning is more than access to information; its transactional and communicative capabilities are its base. The purpose of qualitative e-learning is to combine variety and solidarity in an active learning environment that is mentally challenging. The level of these transactions goes beyond a one-sided transmission of content and broadens our range of thought with respect to the relations between the participants in the learning process (Garrison and Anderson, 2003; Bonner and Lee, 2012).
E-Learning Elements

The first are the ones capable of being named physically; these exist physically or at least
Journal of Information Technology and Application in Education, Vol. 3, Iss. 2, June 2014 (www.jitae.org)
Technical LO Evaluation
Literature Analysis
To evaluate learning objects, criteria for judging them need to be developed (Kurilovas and Dagiene, 2009).

Vargo et al. (2002) developed the Learning Object Review Instrument (LORI) for evaluating learning objects. LORI version 1.3 uses 10 criteria to evaluate learning objects: 1. Presentation: aesthetics; 2. Presentation: design for learning; 3. Accuracy of content; 4. Support for learning goals; 5. Motivation; 6. Interaction: usability; 7. Interaction: feedback and adaptation; 8. Reusability; 9. Metadata and interoperability compliance; 10. Accessibility (Vargo, et al. 2003). The 10 criteria were derived from a literature review covering instructional design, computer science, multimedia development and educational psychology (Kurilovas and Dagiene, 2009). Each measure was weighted equally and was rated on a four-point scale from weak to moderate to strong to perfect (Vargo, et al. 2003).
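The equal weighting and four-point scale described above can be sketched as follows. The criterion names come from the LORI 1.3 list; the simple averaging function is an illustrative assumption, since LORI itself reports per-criterion ratings rather than prescribing a single aggregate formula.

```python
# Sketch of LORI-1.3-style scoring: 10 equally weighted criteria,
# each rated on a four-point scale (1 = weak ... 4 = perfect).
# The averaging aggregate is an illustrative assumption, not part of LORI.

LORI_13_CRITERIA = [
    "presentation: aesthetics",
    "presentation: design for learning",
    "accuracy of content",
    "support for learning goals",
    "motivation",
    "interaction: usability",
    "interaction: feedback and adaptation",
    "reusability",
    "metadata and interoperability compliance",
    "accessibility",
]

def lori_score(ratings: dict) -> float:
    """Average the per-criterion ratings with equal weights."""
    if set(ratings) != set(LORI_13_CRITERIA):
        raise ValueError("a rating is required for every criterion")
    if not all(1 <= r <= 4 for r in ratings.values()):
        raise ValueError("ratings must lie on the 1..4 scale")
    return sum(ratings.values()) / len(ratings)

# A reviewer who rates every criterion as 'strong' (3):
ratings = {name: 3 for name in LORI_13_CRITERIA}
print(lori_score(ratings))  # 3.0
```

Because every criterion carries the same weight, a single weak rating lowers the aggregate by only one tenth of the shortfall.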
E-learning Standards

Aviation was one of the first industries to adopt computer-based learning on a large scale, in the early 1980s. The AICC was founded in 1988 (Fallon and Brown, 2003). In the same year, the US Department of Defense founded the Advanced Distributed Learning (ADL) Initiative. Its primary mission was to develop and promote learning methods for the US military, but the results have many applications in other public and private sectors. ADL published the first edition of SCORM in 1999 (Fallon and Brown, 2003).

The aforesaid standards offer proposals in both conceptual design and implementation. Some of them, like AICC and IMS, are more focused on determining technical specifications, while others, like SCORM, are reference models for implementation (Triantafillou, et al. 2002). One can refer to Dublin Core and ARIADNE as other standards institutions. The first commenced by offering metadata standards; it represented a model of fifteen elements, supporting online storage and retrieval of public resources. It differed over these issues from
In 2002, Belfer et al. presented version 1.4 of LORI, with 8 criteria: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, reusability, and value of accompanying instructor guide (Belfer, et al. 2002).

Nesbit et al. in 2004 proposed version 1.5 of LORI, which had nine criteria: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, accessibility, reusability, and standards compliance.
TABLE 1. LOs EVALUATION FRAMEWORKS AND E-CONTENT

No.   Year   LOs evaluation frameworks and e-content
F1    2002   Vargo et al. (LORI 1.3)
F2    2002   Belfer et al. (LORI 1.4)
F3    2004   Nesbit et al. (LORI 1.5)
F4    2005   Krauss and Ally
F5    2005   Susan Smith Nash
F6    2006   Paulsson and Naeve
F7    2006   Nicole Buzzetto-More and Kaye Pinhey
F8    2007   MELT
F9    2007   SREB (SREB-SCORE)
F10   2007   Q4R
F11   2008   Kubilinskiene and Kurilovas
F12   2009   Kurilovas and Dagiene
F13   2012   Guenaga et al.
F14   2013   Kurilovas and Zilinskiene
Guenaga et al. in 2012 introduced a tool for evaluating learning objects that is composed of two aspects, technology and pedagogy. In this survey, a questionnaire was designed in which each of these two aspects comprised several criteria (Guenaga, et al. 2012).

Kurilovas and Zilinskiene in 2013 introduced a new AHP method for evaluating the quality of learning scenarios. In this research, the qualitative measures of learning scenarios are divided into three sections: learning objects, learning activities and the learning environment; each of the three sections has several indices in terms of internal quality and quality in use (Kurilovas and Zilinskiene, 2013).

Research Analysis

Analysis and Comparison of LOs Evaluation Frameworks and E-content

In this section, frameworks and models that are technically focused on LOs and e-content are considered (Table 1). In Table 2, we list the qualitative scales of these fourteen frameworks with their respective symbols. At last, in Table 3, the scales are corresponded to each other. In fact, Table 3 shows the frequency of the scales in the models and frameworks, and thus their importance; it also shows the differences between the models and frameworks and the scales existing only in some of them. By such comparison we can recognize the researchers' focus and the improvable challenges and aspects, so that future studies can handle the weak points and strong points.

TABLE 2. CRITERIA OF LOs EVALUATION FRAMEWORKS AND E-CONTENT

Symbol (used in Table 3)   Criteria of LOs evaluation frameworks and e-content
C1    Presentation: aesthetics
C2    Presentation: design for learning
C3    Accuracy of content
C4    Support for learning goals
C5    Motivation
C6    Interaction: usability
C7    Interaction: feedback and adaptation
C8    Reusability
C9    Standards compliance
C10   Accessibility
C11   Learning goal alignment
C12   Student/instructor guides
C13   More extensive standards
C14   Best practice for use of existing standards
C15   Architecture models
C16   The separation of pedagogy from the supporting technology of LOs
C17   Organizational strategies
C18   Life cycle strategies
C19   Intellectual property and copyright
C20   LOs arrangement possibilities
C21   Communication and collaboration possibilities and tools
C22   Technical features
C23   Interoperability
C24   Cultural/learning diversity principles
C25   Technical interoperability
C26   Retrieval quality
C27   Information quality
C28   Relevance
C29   Redundancy of access
C30   Infrastructure support
C31   Size of object
C32   Relation to the infrastructure/delivery
C33   Assessment
C34   Expectations for student discussion/chat participation
C35   Navigation
TABLE 3. COMPARISON OF CRITERIA OF LOs EVALUATION FRAMEWORKS AND E-CONTENT (rows: criteria C1–C35; columns: frameworks F1–F14)
Method of Research

Two-round Delphi Method

We used the Delphi method to determine the validity of the paper's criteria. Delphi was designed as a structured communication technique by RAND in the 1950s to collect data through collective opinion polling (Gallenseon, et al. 2002).

In fact, experts' opinions were used to assess the validity of the paper's criteria. In this two-round Delphi method, the panel of experts included 16 professors and experts in the fields of e-learning, information technology, computer engineering, instructional technology and systems engineering. The questionnaire was constructed on the following Likert scale: 1. strongly unimportant; 2. unimportant; 3. neutral; 4. important; 5. strongly important. The research structure is illustrated in Figure 1.
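The two-round structure above lends itself to a small sketch of how such panel ratings might be aggregated. The 16-member panel and the five Likert anchors come from the text; the indicator names and ratings below are hypothetical, and the retention rule (mean above the neutral midpoint of 3) is an assumption for illustration, not a rule stated in the paper.

```python
from statistics import mean

# Likert anchors from the questionnaire (per the text):
# 1 = strongly unimportant ... 5 = strongly important.

def delphi_round(ratings_by_indicator, threshold=3.0):
    """Keep indicators whose mean panel rating exceeds the threshold.

    ratings_by_indicator maps an indicator name to a list of Likert
    ratings, one per panel member. The threshold (the neutral midpoint)
    is an illustrative assumption, not stated in the paper.
    """
    kept = {}
    for name, ratings in ratings_by_indicator.items():
        m = mean(ratings)
        if m > threshold:
            kept[name] = round(m, 2)
    return kept

# Hypothetical round-1 responses from a 16-member panel:
round1 = {
    "accuracy of content": [5, 4, 5, 4, 5, 5, 4, 4, 5, 4, 5, 5, 4, 4, 5, 5],
    "size of object":      [2, 3, 2, 3, 2, 2, 3, 3, 2, 2, 3, 2, 3, 2, 2, 3],
}
print(delphi_round(round1))  # {'accuracy of content': 4.56}
```

A second round would present the retained indicators back to the panel for re-rating, which is how the d1 to d12 set described below could be stabilized.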
Limits of Qualitative Frameworks of LO Evaluation

Undoubtedly, each of the frameworks, models and e-contents has its limits because of the unique activity of the respective institution and the model, strategy and purpose of the researchers. According to the above table, LORI, Krauss & Ally and SREB share many scales, and their focus is on e-content evaluation and the qualitative promotion of content with respect to some standards components. There is no focus on metadata, object details and the organization's technical features and strategies. But Q4R has 4 main strategies to guarantee the quality of LOs, which Kurilovas & Dagiene used as their base to offer a more complicated model, along with some limits, including: (LORI 1.3), (LORI 1.5), (MELT, 2007), (SREB, 2007), (Guenaga, et al. 2012). They did not investigate the
FIG. 1. RESEARCH STRUCTURE (Delphi Round 1: validation of indicators, adding and subtracting indicators; output: d1, …, d12. Delphi Round 2: review of indicators and determination of their importance; output: Table 2. Learning object evaluation.)
By the two-round Delphi method, important indicators have been identified and their significance was determined by experts. These indicators are: Presentation design for learning, Accuracy of content, Motivation, Interaction usability, Interaction feedback and adaptation, Reusability, Standards compliance, Accessibility, Learning goal alignment, Architecture models, Organizational strategies and Cultural/learning diversity principles, represented in Table 4 as d1, d2, d3, …, d12, respectively.
TABLE 4. PAIRED COMPARISON INDICES

Index   d1     d2     d3     d4     d5     d6     d7     d8     d9     d10    d11
d1
d2      d1:2
d3      d1:1   d3:1
d4      0      d4:2   d4:1
d5      d1:1   d5:1   0      d4:1
d6      0      d6:2   d6:1   0      d6:1
d7      d1:3   d2:1   d3:2   d4:3   d5:2   d6:3
d8      0      d8:2   d8:1   0      d8:1   0      d8:3
d9      d1:1   d9:1   0      d4:1   0      d6:1   d9:2   d8:1
d10     d1:2   0      d3:1   d4:2   d5:1   d6:2   d10:1  d8:2   d9:1
d11     d1:2   0      d3:1   d4:2   d5:1   d6:2   d11:1  d8:2   d9:1   0
d12     d1:1   d12:1  0      d4:1   0      d6:1   d12:2  d8:1   0      d12:1  d12:1

Each cell shows the indicator that won the pairwise comparison and its points; 0 denotes a tie.
TABLE 5. THE FINAL SCORE FOR EACH CRITERION USING PAIRED COMPARISONS WITH THE OTHER INDICATORS

Index   Total score   Rating
d1      13            17.33 %
d2      1             1.33 %
d3      5             6.66 %
d4      13            17.33 %
d5      5             6.66 %
d6      13            17.33 %
d7      0             0 %
d8      13            17.33 %
d9      5             6.66 %
d10     1             1.33 %
d11     1             1.33 %
d12     5             6.66 %
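Tables 4 and 5 together imply a simple counting rule: each lower-triangle cell names the indicator that won the pairwise comparison and the points it earned (0 marks a tie), and an indicator's total score is the sum of its points over all pairs. A sketch that reproduces the Table 5 totals from the Table 4 cells follows; the scoring rule is inferred from the two tables rather than stated explicitly in the text.

```python
from collections import defaultdict

# Lower-triangle cells of Table 4: "d1:2" means d1 won that pairwise
# comparison with 2 points; "0" means a tie (no points to either side).
TABLE4_CELLS = (
    "d1:2 "
    "d1:1 d3:1 "
    "0 d4:2 d4:1 "
    "d1:1 d5:1 0 d4:1 "
    "0 d6:2 d6:1 0 d6:1 "
    "d1:3 d2:1 d3:2 d4:3 d5:2 d6:3 "
    "0 d8:2 d8:1 0 d8:1 0 d8:3 "
    "d1:1 d9:1 0 d4:1 0 d6:1 d9:2 d8:1 "
    "d1:2 0 d3:1 d4:2 d5:1 d6:2 d10:1 d8:2 d9:1 "
    "d1:2 0 d3:1 d4:2 d5:1 d6:2 d11:1 d8:2 d9:1 0 "
    "d1:1 d12:1 0 d4:1 0 d6:1 d12:2 d8:1 0 d12:1 d12:1"
).split()

def totals(cells):
    """Sum the points each indicator won across all pairwise cells."""
    score = defaultdict(int)
    for cell in cells:
        if cell != "0":
            winner, points = cell.split(":")
            score[winner] += int(points)
    return dict(score)

score = totals(TABLE4_CELLS)
grand = sum(score.values())                              # 75
print(score["d1"], round(100 * score["d1"] / grand, 2))  # 13 17.33
```

Running the rule over the full table yields exactly the Table 5 column: d1, d4, d6 and d8 each total 13 points (17.33 % of the 75 points awarded), d3, d5, d9 and d12 total 5, d2, d10 and d11 total 1, and d7 never wins a comparison.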
Case Study

According to the indices identified by the experts, two of the e-lessons offered in one of the e-learning centers of Iran's universities were evaluated.

This university's e-learning center performed some studies in 2003. It then commenced its activity as a pivotal project in order to acquire the knowledge of designing and running e-learning systems and to establish an internal network of the university. The project focused on one lesson in some fields of M.Sc. in 2008-2009. In mid-2010, it successfully took on the Master's courses of those fields. As a result, the e-content of the two lessons was selected for evaluation by the use of a framework including the aforesaid indices.

The first lesson's content is about human resources and is represented as a Word and a PowerPoint file. Eight 3-hour sessions cover the 134-page Word file (font size 11). The content includes tables and figures for better understanding of the lesson, and at the end of each session there are some related questions. The PowerPoint file consists of 206 slides presenting the headlines. In this center, there are some recording rooms where masters can record audio files of the lessons; they can also chat with students and answer their questions.

The results of the e-content evaluation of the first and second courses are shown in Table 6 and Table 7, respectively.
TABLE 6. RESULT OF E-CONTENT EVALUATION OF FIRST COURSE

Professor           Average of all indices
First professor     3.08
Second professor    3.42
Third professor     3.08

Index     d1     d2     d3     d4     d5     d6     d7     d8     d9     d10    d11    d12    All
Average   2.66   4.33   3.33   3.66   3.66   2.66   4.33   4.00   2.33   2.33   2.66   2.33   3.19
TABLE 7. RESULT OF E-CONTENT EVALUATION OF SECOND COURSE

Professor           Average of all indices
First professor     3.00
Second professor    2.75
Third professor     2.83

Index     d1     d2     d3     d4     d5     d6     d7     d8     d9     d10    d11    d12    All
Average   –      3.33   3.33   2.33   3.66   2.66   3.66   4.33   2.33   2.33   2.33   –      2.86
Discussion

As shown in Table 6 and Table 7, the content of the first lesson has a low performance in scales number 1, 6, 9, 10, 11 and 12, which means it was weak at Presentation design for learning, Reusability, Learning goal alignment, Architecture models, Organizational strategies and Cultural/learning diversity principles, because the masters' designs are weak with regard to visual, auditory, image-display and graphical elements relative to the learning purposes. Also, they have an unacceptable level of LO design with respect to reuse and effectiveness, and the respective content does not appropriately meet the purposes. This content has a medial performance regarding scales number 3, 4 and 5, which means that it has a medial level in Motivation, Interaction usability, and Interaction feedback and adaptation. One can say that the use of tables, figures and so on, as well as the coherence of the content, the raising of questions and the challenges posed, are useful, but the lack of visual slides, films and animations makes it medial. The content is also medial regarding the offering of LOs having simple mutual relations and operational liaison with qualitative features, corresponding to students' needs and giving feedback to the masters. At last, it has an acceptable performance regarding scales number 2, 7 and 8, that is, Accuracy of content, Standards compliance and Accessibility, because it has content credit, accuracy, enough details and balanced ideas, complies with the aforesaid standards, and is accessible to everyone.

The content of the second lesson is weak at Presentation design for learning, Reusability, Learning goal alignment, Architecture models, Organizational strategies and Cultural/learning diversity principles; that is, it is weak at designing LOs with the necessary capabilities, and the e-content is designed inappropriately with regard to meeting the purposes. It has a medial level with respect to scales number 2, 3, 5 and 7. At last, it is only acceptable in scale number 8, Accessibility.

Paying attention to the weaknesses and challenges in the center, we should promote the masters' ability and familiarity regarding the qualitative indices of LO evaluation in order to improve the e-contents. Designers should also engage with design models, learning technology and the approaches that meet the learning needs of the content. We can also predict and evaluate the offered contents and give feedback to the masters.
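The weak/medial/acceptable reading above can be reproduced mechanically from the Table 6 averages. The band boundaries used below (below 3 is weak, 3 up to 4 is medial, 4 and above is acceptable) are inferred from how the text classifies the scores, not thresholds the paper states, and the d8 value is backed out from the 3.19 overall average rather than read directly from the table.

```python
# Per-index averages for the first lesson (Table 6, d1..d12).
# d8 is backed out from the 3.19 overall average, not read from the table.
first_lesson = {
    "d1": 2.66, "d2": 4.33, "d3": 3.33, "d4": 3.66, "d5": 3.66, "d6": 2.66,
    "d7": 4.33, "d8": 4.00, "d9": 2.33, "d10": 2.33, "d11": 2.66, "d12": 2.33,
}

def band(avg, weak_below=3.0, acceptable_from=4.0):
    """Classify an average score; the thresholds are inferred assumptions."""
    if avg < weak_below:
        return "weak"
    if avg >= acceptable_from:
        return "acceptable"
    return "medial"

weak = sorted(k for k, v in first_lesson.items() if band(v) == "weak")
print(weak)  # ['d1', 'd10', 'd11', 'd12', 'd6', 'd9']
```

With these thresholds the classification matches the discussion exactly: scales 1, 6, 9, 10, 11 and 12 come out weak, scales 3, 4 and 5 medial, and scales 2, 7 and 8 acceptable.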
Conclusion

As shown in this study, the most important criteria for evaluating learning objects were analyzed by studying
REFERENCES

Belfer, K., et al. Learning Object Review Instrument (LORI), Version 1.4, 2002.

Betts, K. "Online Human Touch (OHT) Training & Support: A Conceptual Framework to Increase Faculty …" Journal of Online Learning and Teaching, Vol. 5 No. 1, 2009.

Buzzetto-More, N. and Pinhey, K. "Guidelines and Standards for the Development of Fully Online Learning Objects." Interdisciplinary Journal of Knowledge and Learning Objects 2 (2006): 95–104.

Fallon, C. and Brown, S. E-Learning Standards: A Guide to Purchasing, Developing, and Deploying Standards-Conformant E-Learning. 2003.

Garrison, D. R. and Anderson, T. E-Learning in the 21st Century: A Framework for Research and Practice. 2003.

Guenaga, M., Mechaca, I., Romero, S. and Eguiluz, A. "A tool to evaluate the level of inclusion of digital learning objects." Procedia Computer Science 14 (2012): 148–154.

Haughey, M. and Muirhead, B. "Evaluating learning objects for schools." 2005. [online] http://www.usq.edu.au/electpub/e-jist/docs/vol8_no1/fullpapers/Haughey_Muirhead.pdf

Kurilovas, E. "Digital Library of Educational Resources and …" Informacijos mokslai (Information Sciences), Vilnius, 42–43 (2007): 69–77.

Kurilovas, E. and Dagiene, V. "Learning Objects and Virtual Learning Environments Technical Evaluation Criteria." 2009.

Kurilovas, E. and Zilinskiene, I. "New MCEQLS AHP Method for Evaluating Quality of Learning Scenarios." Vol. 19 No. 1, 78–92, 2013.

Leacock, T. L. and Nesbit, J. C. "A Framework for Evaluating the Quality of Multimedia Learning Resources." 2007.

MELT. Metadata Ecology for Learning and Teaching, 2007. Project web site. [online] http://meltproject.eun.org.

Nash, S. S. "Learning Objects, Learning Object Repositories, and Learning Theory: Preliminary Best Practices for Online Courses." Interdisciplinary Journal of Knowledge and Learning Objects 1 (2005): 217–228.

Paulsson, F. and Naeve, A. Establishing Technical Quality Criteria for Learning Objects, 2006. [online] http://www.frepa.org/wp/wpcontent/files/paulssonEstablTechQual_finalv1.pdf

Q4R. Quality for Reuse, 2007. Project web site. [online] http://www.q4r.org.

SREB-SCORE. Checklist for Evaluating SREB-SCORE Learning Objects. Educational Technology Cooperative, 2007.

Triantafillou, E., Pomportis, A. and Georgiadou, E. AES-CS:

Wiley, D. A. Learning Objects: Difficulties and … [online] docs/lo_do.pdf

… Journal of Information Technology and Application in Education, Vol. 1 No. 1, 9–18, 2012.

…, 164–172, 2012.