
On the Role of Technical Standards for Learning Technologies
Erik Duval, Member, IEEE Computer Society, and Katrien Verbert
Abstract: This paper presents some of the lessons that we have learned during our involvement with the development of learning technology standards. We believe that the decision to develop a new standard is sometimes taken too quickly and that, when possible, existing generic standards should be profiled for the domain of learning. At this point in time, the development of tools, technologies, and methodologies that fully exploit the affordances of existing standards is more relevant than the development of new standards. Moreover, we believe that the relationship between technical standardization and research is often misunderstood: The main role of standards for research is to enable a large-scale technological infrastructure that promotes innovation through its open nature.

Index Terms: Learning technology standards, learning object metadata, SCORM.

1 INTRODUCTION
Over the last 15 years, numerous learning technology standards and specifications have been developed in areas such as learning object metadata, learning content, and learning object repositories.
- Standards for learning object metadata define how learning objects are described and enable learning object discovery, sharing, reuse, and exchange. Examples are the IEEE Learning Object Metadata standard [1] and the Dublin Core Metadata Element Set [2]; a minimal record sketch follows this list.
- Content standards and specifications define how to package learning content to enable exchange [3], how to sequence content in a uniform way [4], or how to associate runtime behavior with content [5], [6].
- Repository standards and specifications focus mainly on federated search [7] or harvesting [8].
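To make the metadata discussion concrete, the following is a minimal sketch of a LOM-style record built with the Python standard library. The element names (general, title, string, language) follow the LOM "general" category, and the namespace is the one we believe the IEEE XML binding [35] uses; the title text is, of course, illustrative.

```python
# A minimal sketch of a LOM-style metadata record (IEEE 1484.12.1),
# serialized with what we take to be the IEEE 1484.12.3 binding namespace.
# The title value is illustrative.
import xml.etree.ElementTree as ET

LOM_NS = "http://ltsc.ieee.org/xsd/LOM"
ET.register_namespace("", LOM_NS)

def tag(name: str) -> str:
    # Qualify an element name with the LOM namespace.
    return f"{{{LOM_NS}}}{name}"

lom = ET.Element(tag("lom"))
general = ET.SubElement(lom, tag("general"))

# LOM titles are language-tagged strings ("LangString" in the standard).
title = ET.SubElement(general, tag("title"))
string_el = ET.SubElement(title, tag("string"), {"language": "en"})
string_el.text = "On the Role of Technical Standards"

lang_el = ET.SubElement(general, tag("language"))
lang_el.text = "en"

print(ET.tostring(lom, encoding="unicode"))
```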
Other areas in which standardization efforts have been
made are learner profiling [9] and learning design [10].
Major organizations that contribute to the development
of e-learning standards and specifications are [11]:
- accredited standards bodies with representation based on individual experts, like the IEEE Learning Technologies Standards Committee (LTSC) [12] or the CEN/ISSS Workshop on Learning Technologies (WSLT) [13],
- accredited standards bodies with representation by countries, like ISO/IEC JTC1 SC36 [14] or CEN TC353 [15], and
- nonaccredited specification bodies with membership on an individual and/or organizational basis, like the ADL [16] (now LETSI [17]) group, the IMS Global Learning Consortium [18], or the ARIADNE Foundation [19], [20].
These organizations typically play different roles: Accredited standards bodies define actual standards. Standards developed by organizations with representation based on countries typically carry substantial legal weight: Their use may be mandatory in public procurement procedures for goods or services in many countries. Accredited standards developed by the likes of the IEEE LTSC are typically more industry oriented. Specifications from groups like IMS or ARIADNE are often used as the starting point for a more formal standardization process.
In the case of IEEE Learning Object Metadata, for instance, the original specifications were developed by ARIADNE and IMS. Together, these organizations proposed a base document to the IEEE LTSC. The CEN/ISSS WSLT developed several complementary standards once LOM was finalized.
This paper is structured as follows: In the next section, we present some lessons we have learned through our involvement in standards development. The section thereafter discusses the relationship between research and technical standards.
2 ISSUES FOR TECHNICAL STANDARDS IN
LEARNING TECHNOLOGIES
2.1 Introduction
We have actively contributed to a number of standards and
continue to do so. We remain convinced that these are very
worthwhile initiatives. However, we have learned some
lessons from experience and want to share those in this
section, as we are convinced that some of the ongoing
standardization efforts in the domain of technology-
enhanced learning are somewhat misguided.
2.2 Assess Carefully the Need for a New Standard
Our personal experience is that the decision to develop a
new standard is often taken without proper due diligence.
The enthusiasm and will to create something useful is of course positive, but channeling that energy into the development of a new standard is often misdirected. Developing a standard takes a long time, typically three to six years and sometimes many more, and many projects run out of steam before they deliver a standard.
More fundamentally, after a standard is finalized, it typically takes even longer before we develop tools that actually deliver the functionality to end users in a way that is useful and usable. In the beginning, the structure of the standard shines through in the tools, and developers do not focus on the user experience. In the case of Learning Object Metadata, for instance, early tools typically presented an electronic form, like the one in Fig. 1, for querying repositories or inserting metadata into them. This approach suffered from many usability issues, and we launched the slogan "electronic forms must die" [21] to encourage developers to change their perspective on the kind of user experiences that could be supported.
For querying purposes, this has led to the development of alternative user interfaces, like the elastic lists of Fig. 2 [22], recommender systems for learning objects [23], and even tabletops for direct manipulation of information about architecture learning objects [24]. For inserting new objects with associated metadata, combinations of tagging and automated metadata generation are becoming available [21]; a small sketch of that combination follows.
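As an illustration of that direction, here is a hedged sketch of how draft metadata might be generated automatically from an object and then complemented with lightweight user tags. The heuristics are ours and purely illustrative, not taken from any cited tool.

```python
# A hedged sketch of automated metadata generation combined with tagging:
# derive draft values from the object itself and let users correct or
# enrich them, instead of presenting a long empty form. Heuristics are
# illustrative only.
from pathlib import Path

def draft_metadata(path: str, user_tags: list[str]) -> dict:
    p = Path(path)
    return {
        "title": p.stem.replace("_", " ").title(),    # guessed from the file name
        "format": p.suffix.lstrip(".") or "unknown",  # guessed from the extension
        "keyword": user_tags,                         # lightweight user input
    }

print(draft_metadata("intro_to_standards.pdf", ["standards", "e-learning"]))
```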
In our opinion, the most pressing need with respect to learning technology standards nowadays is the development of tools, technologies, and methodologies that exploit the affordances of the standards more fully. For instance, rather than developing an alternative specification like Common Cartridge [6] or SCORM 2.0 [25], the most urgent and relevant work with respect to learning content is, in our experience, the development of more useful and usable authoring tools, delivery platforms, management systems, etc. We have often observed how even quite tech-savvy users are deeply challenged by the many bells and whistles that current SCORM tools typically include, and how they struggle to develop engaging content or to deploy it in their learning environments, not because the functionality that SCORM provides is broken or lacking, but because the tools still very much reflect the structure of SCORM and require a good understanding of how SCORM is implemented [26], rather than focusing on the needs, characteristics, and goals of the end user.
2.3 Avoid "Not Invented Here"
Even if there is a need for a specific technical standard, we should avoid continuing the "not invented here" approach that has led us to develop learning-specific standards when quite appropriate generic standards are already available or under development.
In the early days of learning technologies standardization, this caveat applied less, as there were fewer relevant standards around. For instance, when the IEEE LTSC started the development of the LOM standard around 1997, the Dublin Core Metadata Initiative (DCMI) was starting as well. In retrospect, it would probably have been useful to collaborate more directly from the start, but in the early years, it was quite unclear which initiative would eventually lead to a mature standard and by when. As both DCMI and LOM matured, collaboration intensified [27]. Nowadays, there is quite a close collaboration [28].
Around 2002, when learning object repository services became the focus of specification and standards work, a new standard was developed in the CEN/ISSS Workshop on Learning Technologies: the Simple Query Interface (SQI) [29]. Related work led to the development of the PLQL query language [30]. As SQI was being developed, it was clear that there was substantial overlap with related developments in the digital libraries world, more specifically with Search/Retrieve via URL/Web (SRU/SRW), as well as with the Open Knowledge Initiative (OKI) that addresses learning technologies interoperability [31]. In retrospect, we believe that SQI could probably have built on SRU/SRW more directly.
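For illustration, the following sketch shows the kind of SRU searchRetrieve request that a learning object repository could have exposed instead of a learning-specific protocol. The endpoint URL is hypothetical; the parameter names are, to the best of our knowledge, those defined by SRU 1.1.

```python
# A hedged sketch of an SRU "searchRetrieve" request: a plain HTTP GET
# carrying a CQL query. The endpoint is hypothetical; parameter names
# follow SRU 1.1.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://repository.example.org/sru"  # hypothetical endpoint

params = {
    "operation": "searchRetrieve",
    "version": "1.1",
    "query": 'dc.title = "photosynthesis"',  # CQL query
    "maximumRecords": "10",
}

with urlopen(f"{BASE}?{urlencode(params)}") as response:
    print(response.read()[:500])  # raw XML response, truncated for display
```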
Fig. 1. Electronic forms must die.

Fig. 2. Elastic lists.

There is a growing realization of the need to avoid developing learning-specific standards in isolation: the ongoing development of the Simple Publishing Interface (SPI) [32], for instance, is seriously considering whether existing generic approaches like SWORD, itself an application profile of the Atom Publishing Protocol [33], could fulfill the requirements in the learning domain, or whether they can be extended to do so.
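To illustrate why such reuse is attractive, the sketch below shows an AtomPub/SWORD-style deposit: one HTTP POST of a content package to a collection URI. The URI and file name are hypothetical, and the headers are typical of SWORD-style deposits rather than a normative SPI binding.

```python
# A hedged sketch of an AtomPub/SWORD-style deposit: POST a package to a
# collection URI and receive an Atom entry describing the new resource.
# The endpoint and file name are hypothetical.
from urllib.request import Request, urlopen

COLLECTION = "https://repository.example.org/sword/learning-objects"

with open("learning-object.zip", "rb") as f:
    body = f.read()

request = Request(COLLECTION, data=body, method="POST", headers={
    "Content-Type": "application/zip",
    "Content-Disposition": "attachment; filename=learning-object.zip",
})
with urlopen(request) as response:
    print(response.status, response.read()[:200])
```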
A nice vehicle for building standards on existing ones is
the use of application profiles. Basically, in an application
profile, a standard is constrained to take into account the
specific requirements and constraints of a particular
community. In metadata schemas, for instance, depending
on the base standard, this may involve including particular
data elements and excluding others, adding local data
elements, making data elements mandatory, restricting
their domain to a subset of that of the standard, etc. [34].
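As a concrete illustration, the sketch below checks a flattened metadata record against a small hypothetical application profile that makes two elements mandatory and restricts a vocabulary. Real profiles express such constraints in schema languages, but the logic is the same.

```python
# A minimal sketch of what an application profile adds on top of a base
# standard: extra constraints checked against a record. This profile is
# hypothetical: it makes title and language mandatory and restricts
# language to a community-specific subset.
PROFILE = {
    "mandatory": ["general/title", "general/language"],
    "vocabulary": {"general/language": {"en", "fr", "nl", "de"}},
}

def violations(record: dict) -> list[str]:
    """Return the profile violations for a record (empty means conformant)."""
    errors = []
    for path in PROFILE["mandatory"]:
        if path not in record:
            errors.append(f"missing mandatory element: {path}")
    for path, allowed in PROFILE["vocabulary"].items():
        value = record.get(path)
        if value is not None and value not in allowed:
            errors.append(f"value {value!r} not allowed for {path}")
    return errors

print(violations({"general/title": "Photosynthesis", "general/language": "en"}))  # []
print(violations({"general/language": "xx"}))  # two violations
```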
Application profiles should not be confused with
bindings: Whereas application profiles restrict the set of
conforming entities to a subset of that of the standard,
bindings define how conforming entities can be represented
in a particular technology. For example, there is a standard
for the XML binding of LOM that defines how LOM can be
represented using XML Schema Definition [35].
Still another vehicle for working with existing standards is a reference model that defines how different standards can be used together as a complete solution for a particular need. For learning technologies, for example, SCORM is the most widely deployed reference model: It defines how LOM, content packaging, simple sequencing, and runtime communication can be used together for learning objects that can be deployed in state-of-the-art learning management systems [36].
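The manifest below sketches the glue a reference model like SCORM provides: a content-packaging manifest that embeds a metadata slot and points to launchable resources. Identifiers and file names are illustrative; only the overall structure follows the IMS Content Packaging schema that SCORM reuses.

```python
# A hedged sketch of a minimal SCORM-style imsmanifest.xml, showing how a
# reference model combines packaging, metadata, and content. Identifiers
# and file names are illustrative.
MANIFEST = """\
<manifest identifier="com.example.course" version="1.0"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <metadata>
    <!-- a LOM record describing the package would go here -->
  </metadata>
  <organizations default="org1">
    <organization identifier="org1">
      <item identifier="item1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent" href="lesson1.html">
      <file href="lesson1.html"/>
    </resource>
  </resources>
</manifest>
"""
print(MANIFEST)
```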
2.4 Standards Are Not Research
It is very important to understand the differences between standards and research, as Fig. 3 illustrates. The basic idea is that research can produce (among many other outcomes) specifications that can be implemented in software artefacts. The experience gained during the development and practical use of the software typically leads to newer versions of the specifications. At some point in that feedback cycle, the specification can be used as input to the standardization process.
Once the standardization process is started, the focus shifts to consensus building and deciding on what should be in the scope of the standard and what is best left out. As an example, the LOM standard was based on early specifications from IMS and ARIADNE that had already been implemented in several versions of tools that were being evaluated in practice. During the standardization track, consensus was built around, for instance, whether or not some elements would be made mandatory, and if so, which ones. Eventually, the decision was made not to make any data elements mandatory. Similarly, the clustering of data elements in several categories (technical, educational, etc.) was agreed upon during standardization.
The above is a rather ideal scenario: Often, researchers submit premature specifications. There sometimes seems to be an assumption that, once the outline of a specification has been drafted, it can be finalized during the standardization process. Researchers then expect standardization participants to consolidate, implement, and improve the specification without questioning the basic approach or practical details of the original document. In our experience, this leads to considerable frustration, as the standardization group is disappointed that the specification is not more mature, and the originators of the specification worry that their baby is being mishandled.
Therefore, as a rule of thumb, we would suggest that a specification is only ready for the standardization process once it has been implemented by at least two independent development teams and has been evaluated in at least two independent user studies. (The users in question can be either developers who make use of the specification or end users of tools that implement it.) A better situation is one where several similar specifications exist that attempt to solve the same problem and that overlap significantly in scope and approach. In that case, the standardization process can help to build consensus between the different communities, and rapid development feedback cycles can be created by teams that implement the standard under development on top of existing, closely aligned specifications and tools.
Admittedly, this is a somewhat idealistic situation, but
we believe that it is very important to caution against
entering the standardization process prematurely. The
focus of that process is on consensus building, not on
developing the best solution. If one or more good solutions
are not already available at the start, then the whole process
is jeopardized and may never lead to a relevant result.
3 STANDARDS MATTER FOR RESEARCH
3.1 Introduction
Even though we believe that the standardization process is not the right venue for research activities, we do believe that standards can be very relevant for research. Indeed, as explained below, standards enable large-scale experimentation and research based on nontrivial, nontoy applications. Moreover, standards are essential for an open infrastructure in which innovation and research thrive.
3.2 Large Scale
Standards are "the infrastructures that work behind the scenes" [9]: they enable deployment on a large scale and therefore make it possible to do research on global infrastructures with critical mass. This is, in our opinion, extremely valuable in the context of technology-enhanced learning, where researchers all too often draw rather wide-ranging conclusions from toy application development and minimal (if any) evaluation studies.

Fig. 3. Standards are not research.
Moreover, standards help remedy the balkanization problem that used to plague the field of learning technologies, where every research project had to start from scratch because results from earlier projects neither remained available after those projects ended nor interoperated with the results of other projects.
Within the European Commission research and development funding programs in particular, attention to the appropriate use of existing standards and support for feeding project results into the standardization process have become much more explicit, specifically in order to avoid the continuous redevelopment of identical, or at least very similar, tools and technologies, with the associated rediscovery of the same issues. Indeed, many calls for projects now explicitly mention the adoption of existing standards or the creation of new standards as an evaluation criterion.
Our vision is that technology standards should enable an open global learning infrastructure [37]. The basic building blocks in this infrastructure would enable
- finding relevant learning resources (content, people, physical artifacts, events, etc.),
- deploying them in an appropriate context-dependent way (through a learning management system, personal learning environment, etc., on a desktop, laptop, tabletop, mobile device, etc.), and
- capturing the effects through attention metadata so that feedback loops can be established to improve human learning (a minimal event sketch follows this list).
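The third building block could be as simple as the event record sketched below; the field names are illustrative, loosely inspired by Contextualized Attention Metadata [23] rather than taken from a normative schema.

```python
# A hedged sketch of capturing attention metadata: one usage observation
# about a learning object, serialized as JSON. Field names are
# illustrative, not a normative schema.
import json
from datetime import datetime, timezone

def attention_event(user_id: str, object_id: str, action: str) -> str:
    event = {
        "user": user_id,
        "object": object_id,  # e.g., a repository identifier or URL
        "action": action,     # viewed, downloaded, rated, bookmarked, ...
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event)

print(attention_event("u42", "ariadne:lo/1234", "viewed"))
```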
In the periphery of such an open infrastructure, novel research prototypes could then build on the global infrastructure to create exciting value-adding experiences, much in the same way that mash-up technologies on the Web 2.0 leverage the massive amounts of information and the mainstream functionalities provided by the likes of Google, Amazon, Twitter, etc. We acknowledge that such an application of standards is not without issues [38], but believe that these issues are not that difficult to address if the goals being pursued are clear. Nice examples of this approach to research include our own work on the automated decomposition and recomposition of PowerPoint slides for learning [39], the development of relevance metrics for learning objects [40], the FOLKSEMANTIC project [41], and the Visual Understanding Environment [42] based on the OKI work, as well as the work mentioned above on elastic lists (Fig. 2) [22], recommender systems [23], and tabletops for architecture learning objects [24].
3.3 Openness as a Driver for Innovation
Finally, standards enable openness, and openness enables innovation; that is another way for standards to be relevant to research.
In technology-enhanced learning, open source has so far
probably been at least as important a driver for innovation
as open standards. In fact, it is interesting to note that many
of the early implementations of open standards used in
research have been released under an open source license
(like eXe [43] and RELOAD [44] for SCORM), whereas, on the
the other hand, the main open source learning management
system, Moodle [45], has been relatively slow in supporting
open standards.
An example of how the absence of standards can restrict innovation and research is the situation with search engines, where the vast majority of research papers originate from Google, Yahoo, or Microsoft. Besides the financial leverage that these companies can apply to their research efforts, they are also the only ones who can, in effect, make use of the large data sets and user interactions that they collect.
If the Web crawls that they use were available through standardized interfaces, and if the user interactions were available through an open attention metadata format, then many more, smaller labs could apply a wider variety of research ideas to the same infrastructure. We strongly believe that this would accelerate scientific progress, an opinion shared by the open science movement [46], [47].
In this context, it is important to make sure that standards enable, and do not prevent, pedagogical innovation: Their role is to meet the expectation of students and teachers that the e-learning infrastructure "simply works" [48]. The authors acknowledge that learning technology standards do not guarantee learning: Excellent content can be nonconforming and useless content can be conforming. More importantly, learning is not in the first place about content, reuse, or discoverability; it is just that that part of the problem can be addressed by standards!
4 CONCLUSION
Standardization work is time consuming and, due to its consensus-based approach, can be quite tedious. We are proud of our involvement in this work over more than a decade, and of the valuable results to which it contributed.

However, as we explained in this paper, we caution against the premature development of new standards when generic standards are available that can be profiled for the domain of learning. We believe that the greater need at this moment is to develop more usable and useful tools that hide everything but the benefits.
Technical standards are very relevant for research and development in the domain of technology-enhanced learning, as they enable a global infrastructure for large-scale, realistic experimentation. However, that does not mean that research activities should be confused with the standardization process: Immature research results can derail standardization, and the consensus-based approach of standardization bodies leads to risk-averse, repetitive research. We hope that by spelling out these risks, this paper can help to avoid them, so that research and standards for technology-enhanced learning can proceed in a mutually supportive, healthy relationship.
ACKNOWLEDGMENTS
The authors would like to thank those who commented on a very early draft of the ideas outlined in this paper on the first author's blog, http://erikduval.wordpress.com/2008/11/28/standards-for-technology-enhanced-learning. Special thanks go to Martyn Cooper, Wolfgang Reinhardt, and Brian Kelly. The authors also gratefully acknowledge the support of the European Commission through the ProLearn Network of Excellence and the MELT and MACE eContentPlus projects.
REFERENCES
[1] IEEE 1484.12.1-2000, Standard for Learning Object Metadata, IEEE, 2000.
[2] S. Weibel and T. Koch, The Dublin Core Metadata Initiative: Mission, Current Activities, and Future Directions, D-Lib Magazine, vol. 6, no. 12, http://www.dlib.org/dlib/december00/weibel/12weibel.html, 2000.
[3] IMS Content Packaging Information Model, Version 1.1.4 Final Specification, http://www.imsglobal.org/content/packaging/cpv1p1p4/imscp_infov1p1p4.html, 2004.
[4] IMS Simple Sequencing Information and Behavior Model, Version 1.0 Final Specification, http://www.imsglobal.org/simplesequencing/ssv1p0/imsss_infov1p0.html, 2003.
[5] The SCORM Run-Time Environment, http://xml.coverpages.org/SCORM-12-RunTimeEnv.pdf, 2001.
[6] Common Cartridge Working Group, http://www.imsglobal.org/
commoncartridge.html, 2008.
[7] CEN Workshop Agreement CWA 15454, A Simple Query Interface Specification for Learning Repositories, ftp://ftp.cenorm.be/PUBLIC/CWAs/e-Europe/WS-LT/CWA15454-00-2005-Nov.pdf, 2005.
[8] The Open Archives Initiative Protocol for Metadata Harvesting,
http://www.openarchives.org/OAI/2.0/openarchivesprotocol.
htm, 2002.
[9] R. Robson, Report on Learning Technology Standards, Proc.
World Conf. Educational Multimedia, Hypermedia and Telecomm.
2000, J. Bourdeau and R. Heller, eds., http://go.editlib.org/p/
16192, pp. 971-976, 2000.
[10] IMS Learning Design Specification, http://www.imsglobal.org/
learningdesign, 2003.
[11] E. Duval, Learning Technology Standardization: Making Sense of
it All, Computer Science and Information Systems, vol. 1, no. 1,
pp. 33-43, Feb. 2004.
[12] IEEE Learning Technology Standards Committee, http://ieeeltsc.
org, 2008.
[13] CEN/ISSS Workshop on Learning Technologies, http://www.cen.eu/
isss/workshop/LT, 2008.
[14] SC 36, Information Technology for Learning, Education and
Training, http://isotc.iso.org/livelink/livelink?func=ll&objId=
806742, 2008.
[15] CEN/TC 353, Information and Communication Technologies for Learning, Education and Training, http://www.cen.eu/CENORM/BusinessDomains/sectors/isss/cen+tc+353.asp, 2008.
[16] Advanced Distributed Learning, http://www.adlnet.gov, 2008.
[17] LETSI, Learning-Education-Training Systems Interoperability, http://www.letsi.org, 2008.
[18] IMS Global Learning Consortium, http://www.imsglobal.org,
2008.
[19] ARIADNE Foundation, http://www.ariadne-eu.org, 2008.
[20] E. Duval, E. Forte, K. Cardinaels, B. Verhoeven, R.V. Durm, K. Hendrikx, M. Wentland-Forte, N. Ebel, M. Macowicz, K. Warkentyne, and F. Haenni, The ARIADNE Knowledge Pool System, Comm. ACM, vol. 44, no. 5, pp. 72-78, 2001.
[21] K. Cardinaels, M. Meire, and E. Duval, Automating Metadata
Generation: The Simple Indexing Interface, Proc. 14th Intl Conf.
World Wide Web (WWW 05), pp. 548-556, 2005.
[22] M. Stefaner and B. Muller, Elastic Lists for Facet Browsers, Proc.
18th Intl Conf. Database and Expert Systems Applications (DEXA 07),
pp. 217-221, Sept. 2007.
[23] X. Ochoa and E. Duval, Use of Contextualized Attention
Metadata for Ranking and Recommending Learning Objects,
Proc. First Intl Workshop Contextualized Attention Metadata: Collect-
ing, Managing, and Exploiting of Rich Usage Information (CAMA 06),
pp. 9-16, 2006.
[24] Browsing Architecture: Metadata and Beyond, series EAAE Trans. Architectural Education, M. Zambelli, A.H. Janowiak, and H. Neuckermans, eds., no. 40, 2008.
[25] SCORM 2.0, http://www.letsi.org/display/nextscorm/Home, 2008.
[26] R. Godwin-Jones, Emerging Technologies: Learning Objects: Scorn or SCORM? Language Learning & Technology, vol. 8, no. 2, pp. 7-12, http://llt.msu.edu/vol8num2/pdf/emerging.pdf, May 2004.
[27] E. Duval, W. Hodgins, S. Sutton, and S. Weibel, Metadata
Principles and Practicalities, D-Lib Magazine, vol. 8, no. 4, http://
www.dlib.org/dlib/april02/weibel/04weibel.html, Apr. 2002.
[28] M. Nilsson, P. Johnston, A. Naeve, and A. Powell, The Future of
Learning Object Metadata Interoperability, chapter 9, pp. 255-313.
Informing Science Press, 2007.
[29] S. Ternier and E. Duval, Interoperability of Repositories: The Simple Query Interface in ARIADNE, Intl J. E-Learning, vol. 5, no. 1, pp. 161-166, http://go.editlib.org/p/21734, 2006.
[30] S. Ternier, D. Massart, A. Campi, S. Guinea, S. Ceri, and E. Duval, Interoperability for Searching Learning Object Repositories: The ProLearn Query Language, D-Lib Magazine, vol. 14, nos. 1/2, http://www.dlib.org/dlib/january08/ceri/01ceri.html, 2008.
[31] T.M. Eap, M. Hatala, and D. Gasevic, Technologies for Enabling
the Sharing of Learning Objects, Intl J. Advanced Media and
Comm., vol. 2, no. 1, pp. 1-19, 2008.
[32] S. Ternier, D. Massart, F.V. Assche, N. Smith, B. Simon, and E.
Duval, A Simple Publishing Interface for Learning Object
Repositories, Proc. World Conf. Educational Multimedia, Hyperme-
dia, and Telecomm., pp. 1840-1845, http://go.editlib.org/p/28625,
Jan. 2008.
[33] J. Allinson, S. Francois, and S. Lewis, SWORD: Simple Web-
Service Offering Repository Deposit, ARIADNE, no. 54, http://
www.ariadne.ac.uk/issue54/allinson-et-al, 2008.
[34] E. Duval, N. Smith, and M.V. Coillie, Application Profiles for
Learning, Proc. Sixth IEEE Intl Conf. Advanced Learning Technol-
ogies (ICALT 06), pp. 242-246, 2006.
[35] IEEE 1484.12.3-2005, Extensible Markup Language (XML) Schema Definition Language Binding for Learning Object Metadata, IEEE, 2005.
[36] Sharable Content Object Reference Model (SCORM), Third Edition
Documentation Suite, http://www.adlnet.gov/downloads/
DownloadPage.aspx?ID=237, 2004.
[37] E. Duval and W. Hodgins, A LOM Research Agenda, Proc. 12th Intl Conf. World Wide Web (WWW 03), 2003.
[38] B. Kelly, A. Dunning, S. Rahtz, P. Hollins, and L. Phipps, A
Contextual Framework for Standards, Proc. 15th Intl Conf. World
Wide Web, Special Interest Tracks, Posters, and Workshops (WWW 06),
CD ROM, http://www.ukoln.ac.uk/web-focus/papers/e-gov-
workshop-2006, 2006.
[39] K. Verbert and E. Duval, ALOCOM: A Generic Content Model
for Learning Objects, Intl J. Digital Libraries, vol. 9, no. 1, pp. 41-
63, 2008.
[40] X. Ochoa and E. Duval, Relevance Ranking Metrics for Learning
Objects, IEEE Trans. Learning Technologies, vol. 1, no. 1, pp. 34-48,
Jan.-Mar. 2008.
[41] http://folksemantic.org, 2008.
[42] A. Kumar and R. Saigal, Visual Understanding Environment,
Proc. Fifth ACM/IEEE-CS Joint Conf. Digital Libraries (JCDL 05),
p. 413, 2005.
[43] eXe: The eLearning XHTML Editor, http://exelearning.org, 2008.
[44] Reload SCORM Player, http://www.reload.ac.uk/scormplayer.html, 2008.
[45] Moodle: A Free, Open Source Course Management System for Online Learning, http://moodle.org, 2008.
[46] The Science Commons, http://sciencecommons.org, 2008.
[47] S.A. Batts, N.J. Anthis, and T.C. Smith, Advancing Science through Conversations: Bridging the Gap between Blogs and the Academy, PLoS Biology, vol. 6, no. 9, http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0060240&ct=1, 2008.
[48] S. Marshall, E-Learning Standards: Open Enablers of Learning or Compliance Strait Jackets? Beyond the Comfort Zone: Proc. 21st ASCILITE Conf. (ASCILITE 04), pp. 596-605, http://www.ascilite.org.au/conferences/perth04/procs/marshall.html, 2004.
Erik Duval is a professor of computer science at
the Katholieke Universiteit Leuven, Belgium. His
research focuses on management of and access
to data and content available in a more or less
structured way. More specifically, his interests
include metadata, reusable content, global
infrastructures based on open standards, and
mass personalization (The Snowflake Effect).
This work is applied in the fields of technology-
enhanced learning, music information retrieval,
and technology support for Science 2.0. Dr. Duval is the president of
the ARIADNE Foundation, chair of the IEEE LTSC working group on
Learning Object Metadata, a fellow of the AACE, and a member of the
ACM and the IEEE Computer Society. He cofounded two spin-offs that
apply research results for access to music and scientific output.
Katrien Verbert is a post-doctoral researcher at
the hypermedia and databases research group
of the Katholieke Universiteit Leuven. Her
research interests include learning object con-
tent models, metadata, learning object reuse,
and interoperability. She participates in the
RAMLET IEEE LTSC standardization project
that is developing a reference model for resource aggregation. She participated in the ACKNOWLEDGE project, which provides Flanders with a proof-of-concept demonstrator for an easy and accessible service in the field of learning and content management, and the PUBELO project, which investigated technological and educational standards for publishing in an electronic environment.
