
11

Measurement of Knowledge: Process and Practice in Knowledge Management

Mari Davis and Concepción S. Wilson

Not everything that can be counted counts; and not everything that
counts can be counted.
Albert Einstein

Introduction
Organisations of all kinds are aware of the need to manage their
knowledge resources for company success through enhanced
knowledge use and knowledge sharing, but few so far have gone
that extra distance to measure systematically the results of their
work. Many unresolved dilemmas remain about the theory and
practice of KM, not least of which is how to measure whether
KM initiatives or projects actually deliver results. Measurement
is an important area for the field of Knowledge Management
(KM) because, as Koenig states, the field is noted for typically "anecdotally described practice" (2001, p.511). KM employs a
range of well known and innovative techniques for using, re-
using and generating knowledge that, if managed appropriately,
have the ability to generate new opportunities and to create
wealth. Crucial areas that can be measured include the
management and use of information technologies and services,
such as external and internal business-relevant information
(explicit or codified knowledge). More difficult to codify and
measure is internal knowledge use, embodied in the skills and
expertise (tacit knowledge) of the individuals within an
organisation. Thus, for the field, a next logical step is to develop
a set of techniques by which organisations can measure
knowledge-based assets, and evaluate and compare outcomes or
performance of knowledge-based work programs.
Research on effectiveness measures for KM initiatives is
sparse; it is an area of work that is still evolving. This chapter
discusses how activities associated with KM programs might be
measured, and what might constitute an appropriate set of
methods or techniques for measuring aspects of KM practice
and process. It takes as appropriate starting points a range of
perspectives and methodologies that have the potential
to measure aspects of KM performance. Directions are suggested
for future research on measurement issues for knowledge
management.

The Literature on Measurement of Information: Services and Products
The measurement of information use and delivery of information
services is not a new field. Many of the traditional techniques
can be adapted to serve this new area of management of
knowledge for businesses and other enterprises. For example,
practical measures for studying the value and cost of
information services and libraries within organisations were
developed in the 1980s by King & Griffiths (1988, 1991). In
studies on the value of information, a persistent concern has
been with various dimensions of value, for example, with
value-in-use and value-in-exchange, which Saracevic & Kantor
describe as "value within some context [that] describes a relation
between an object or objects and their worth" (1997a, p.539).
Worth is exemplified by notions of: merit, benefit, impact,
quality, utility/usefulness, desirability, and/or cost. Other
distinctions are also made among intrinsic value, extrinsic or
instrumental value, inherent value and contributory value
(Saracevic & Kantor 1997a, 1997b). These types or
dimensions of value are closely related. The value of being
informed is intrinsic; the value of information is extrinsic or
instrumental; services that provide access to information are
contributory; and the value of informative objects themselves is
inherent. In the KM context, these issues are still pertinent, and
the concern is that only the surface phenomena of information
quantum (the 'how many' questions) or information use (the
'how often' questions) are measured. To broaden the concept of
information utility and value, Saracevic & Kantor (1997a, p.534)
see a clear distinction between use of information and use of
information services. To incorporate these distinctions, they
employ two approaches: the Acquisition-Cognition-Application
(ACA) model on one hand and, on the other, the Reasons-Interaction-Results
(RIR) model. The taxonomy of value they derived from
their examination of library services would serve as a helpful
model for the use aspects of KM; their taxonomy (see Figure
11.1) employs both cognitive and affective states, tangible
and intangible [informative] objects, and interactions between
resources, use and operations (1997b, p.562). Figure 11.1 is a
copy of the tree-like structure of Saracevic & Kantor's (1997b,
p.562) derived taxonomy.

Figure 11.1. Derived Taxonomy from Saracevic & Kantor (1997b, p.562).

A Social Interpretative Approach


KM can be viewed as a set of processes and practices with the
express objective of bringing internal and external knowledge
resources to bear on creating innovation and wealth for the
organisation. Thus, in terms of evaluating and measuring
outcomes in KM, a primary focus would be on an organisation's
capability for providing opportunities and contexts in which
workers can freely negotiate and share information, and on the
results of the organisation's capacity to create meaning and to
generate new knowledge that benefits a range of organisational objectives.

The objectives for KM can be conceived in both quantitative and
qualitative terms. Many of the inputs and outputs of KM initiatives
are quantitative, amenable to empirical data collection of
the 'how much' and 'how many' frequency-and-extent questions,
while much of the outcome involves qualitative intangibles, such
as organisational and individual learning, the nature of
information transformations (innovation), transference, skill
development and experience building.
Given KM's focus on the use, re-use and sharing of both
tacit and explicit knowledge within an organisational context, a
relevant conceptual approach is that posited by Cornelius (1996).
He advocates using an interpretative approach, in which coherent
sense of a particular practice is made possible by pulling together
descriptions and data to develop clearer meanings than are
currently available. For Cornelius, knowledge is conceived as being
developed within social interaction between minds, contexts and
information. In KM, the emphasis on management places a strong
focus on the context and the nature of interactions that help to
shape use, sharing and exchange of information and in which
informative meaning or knowledge is mutually constructed or
generated from informative resources (e.g. objects, contexts or
situations, and individuals). In this sense, meaning is
constructed in relation to information already in the mind and the
external environment (Cornelius 2002). Creating meaning and
significance for organisational purposes (inter-subjective
meanings) is what concerns KM, rather than objective knowledge
'out there'. KM puts in place practices and processes that will foster
the abilities of an organisation to innovate; it places high priority
on using, interpreting and bringing together information in
knowing minds with information embedded in documents and
other media. With this conceptual approach, there is no
knowledge until workers interact and negotiate around a goal-
oriented activity. The management part of the KM endeavour
coordinates social and organisational conditions that best assist
individuals or groups to respond creatively to units of information.
Encountering information through social interaction is what
Cornelius describes as 'water-into-wine' moments of information
ingestion (1996, p.420).

Measurement Issues and Approaches


As noted in the introduction, research on effectiveness measures
for KM initiatives is sparse and still evolving. This chapter
therefore takes as starting points a range of perspectives and
methodologies used in neighbouring information fields that have
potential for measuring aspects of performance in KM settings.
One of the complexities in measuring knowledge and
measuring KM process is that evaluation of knowledge and its
use is different from discrete measurement of knowledge
process. In the latter, the units of analysis may more easily be
quantitatively analysed than the more subjective measures
applicable for evaluating knowledge and its use. However,
the two cannot be treated as separate areas of exploration; both are
necessary for obtaining a full picture. To remain competitive,
organisations require continual evaluation of process and
procedure to achieve organisational goals cost-effectively and
with assessable benefits. Measurement for KM needs to reflect
the operational components of knowledge utilisation and the
strategies adopted to optimise advantage from use of knowledge
resources including artefacts, information technology (IT),
information systems (IS), intellectual and social capital and staff
expertise. Because initiatives in KM are context-specific and are
prioritised to affect certain aspects of an organisation's output
(for example, production, discovery, innovation or service),
measurement and evaluation techniques will most likely require
tailoring for each case. According to Kirk (1999), different views
on information need to be taken into account because
evaluations based solely on tangible information objects, without
also looking at information as a construct, and information
processes, will be misleading.

Different approaches will be needed to measure the effectiveness
of KM across different situations. One approach focuses on the
management processes of knowledge; another on the types of
organisational innovations put in place. In this latter approach,
Kirk (1999) advocates the consideration of the information
capabilities of managers and their co-workers. An exploratory
study by Southon, Todd & Seneque (2002) indicates that
knowledge structures and cultures differ substantially between
organisations and are influenced by the commercial
environment and the governing structures. They found that the
way knowledge is understood, used, and managed is embedded
in the operation of the organisation and the thinking of workers
at operational and strategic levels. The variety of manifestations
within each organisation studied was noted, although their
respondents viewed the concept of knowledge as
'unproblematic'. For some respondents, knowledge was
associated with task knowledge, skill, and ways of developing
process and practice management; for others it was personal
knowledge and skill in interacting with clients, or in knowing
about community interests and concerns. Thus, in approaching
the measurement of KM, each organisational context will require
some adaptation to the methods of capturing data or evidence to
be deployed in the research framework (variables, factors, units
to be examined); for example, type or depth of process,
availability of informative units/objects, or of knowledge
capability (intellectual or social capital and skills available).
Indices, comparisons, and analyses of productivity patterns
should all provide objective sources of information about trends
of use and draw attention to problems (such as those
encountered with specific initiatives or activities). However,
Drucker's (1993) view of knowledge as a utility provides
a way to begin operationalising KM actions and
processes as research variables for measurement. Drucker
(1999) also alludes to problems in measuring the productivity of
knowledge workers where clear units of analysis of many
intangible factors relating to human knowledge acquisition and
use are difficult to define.

Lehr & Rice (2002) suggest a number of theoretical
approaches to measurement of knowledge management,
including organisational learning, Weickian (Weick 1995, 2001)
sense-making, quality management, and critical theory. Other
approaches are mentioned, such as cultural studies,1 systems
theory, and organisational decision-making. The benefit of using
multiple approaches is a richer basis for understanding: more
comprehensive insight and heightened validity of results, brought
by triangulation of methods and overlapping measures. The Lehr & Rice (2002)
emphasis is on communication-based explorations. An equally
important focus would be on use of knowledge objects within the
organisation, for example, the number of generated (created)
items of internal information and extent of their shared use
within an organisation; the number and extent of use of
knowledge objects brought in from the external environment for
enhancing shared knowledge structures within an organisation;
the extent and frequency of use of technologies (both IT and ICT
- information and communication technologies) to store,
manipulate and disseminate information within an organisation.

Defining Knowledge and Knowledge Processes


Knowledge is a difficult concept to define. Knowledge has a
dimensional structure that covers:
Tacit knowledge - its extent or presence cannot be
objectively measured directly, but some subjective
assessment of its presence or utility can be gauged, mostly
through the use of proxies that signal specific aspects of
its extent or presence.

1
For example, Media Richness Theory was used by Guo Zixiu to explore the
effect of culture on the communication behaviour of a Multinational
Corporation with branches in Thailand, Malaysia, Korea and Australia. See:
Guo Zixiu (2003); Guo Zixiu & D'Ambra (2001); Guo Zixiu, D'Ambra, &
Edmundson (2002).

Explicit knowledge - its extent, quality and quantity can
be objectively measured by a range of statistical and other
techniques.
Tacit and explicit knowledge are exchanged between individuals
in a dynamic social context through various processes:
Socialisation enables tacit knowledge to be transferred
from one individual to another. In this socialisation mode,
knowledge transfer is effected through teaching, training,
coaching, guiding, sharing, learning-by-doing, mentoring,
adoption of values and beliefs through sharing.
Combination enables existing explicit knowledge to be
combined into new explicit forms. In this mode, research
processes such as experimentation, hypothesis formation
and testing are written up as results that lead to
interpretations, knowledge claims, and conclusions.
Externalisation converts tacit knowledge into explicit
knowledge in the forms of concepts and models. In this
mode, process and practice are codified (written down) and
recorded as best practice, models, recipes, know-how rules
and regulations, databases, manuals, routinisation, and
person to document actions.
Internalisation enables individuals to absorb explicit
knowledge and to broaden their tacit knowledge so that new
knowledge can be developed. In internalisation, creative
thinking, learning, individual reading and note taking,
pattern recognition among data elements, speculating and
surmising, trialling and testing are prominent.
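Where these processes are to be counted rather than merely described, the four conversion modes above (widely associated with Nonaka & Takeuchi's knowledge-creation model) can serve as categories for tagging observed activities. A minimal sketch, in which the activity-to-mode mapping is an illustrative assumption rather than a prescribed coding scheme:

```python
# The four knowledge-conversion modes, each mapping a source
# knowledge form to a resulting form, as described above.
SECI_MODES = {
    "socialisation":   ("tacit", "tacit"),
    "externalisation": ("tacit", "explicit"),
    "combination":     ("explicit", "explicit"),
    "internalisation": ("explicit", "tacit"),
}

# Hypothetical tagging of observed workplace activities with a mode,
# so that frequencies per mode can later be counted.
ACTIVITY_MODE = {
    "mentoring": "socialisation",
    "writing a best-practice manual": "externalisation",
    "merging two research reports": "combination",
    "reading and note-taking": "internalisation",
}

def conversion(activity):
    """Return the (from_form, to_form) pair for an observed activity."""
    return SECI_MODES[ACTIVITY_MODE[activity]]
```

Tagging activities in this way turns a qualitative taxonomy into countable units: how often each conversion mode occurs in a team's work over a given period.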

Knowledge Enablers
Knowledge processes are enhanced by external enablers, such
as information and communication technologies (ICTs), and are
influenced by organisational factors (social and behavioural).
The whole rests on an assumption that information seeking,
using and learning are part of a holistic effort to make sense of
the world within specific contexts and situations, driven by
factors with social, economic and learning implications, and by
external and internal factors. Critical to the success of KM are
enabling mechanisms within the following:
The organisational environment - including leadership
and culture
The technological infrastructure - technology and system
provision to assist knowledge processes.

Knowledge Capital
To fully realise the KM objective, layers of knowledge and
experience are built up to form the basis of the organisation's
intellectual capital. It has been suggested that benchmarks can
be derived from empirical evidence about the number and health
of communities of practice, the contributions of individuals and
the contributions of the community itself or of specific work
teams within an organisation (Anklam 2002). These layers are
described as:
Accumulated understandings, experience and insight
of individuals that are brought to bear in formulating
ideas, resolving problems, creating new products, patents
or works, or in building new knowledge claims, for
example, knowledge derived from personal experience that
is embedded in people, in human brains. This kind of
knowledge is characterised as a tacit dimension of
knowledge.
The accumulated works of recorded knowledge,
bodies of knowledge contained in traditional printed or
other recorded formats for example, knowledge
embedded in objects, in repositories such as libraries,
archives, museums, and so on. Knowledge of this type is
characterised as being an explicit dimension of codified,
systematic knowledge.
The specialised expertise accumulated by an
organisation through its particular mix of staff and
organisational processes built up over time - exemplified
by the sum of the personal knowledge of organisational
personnel (intangible assets) and knowledge as embedded
in the records of organisations, stored in computer
systems as codified, structured knowledge. Knowledge of
this type is characterised as combining both tacit and
explicit dimensions of knowledge.
Table 11.1 attempts to categorise how elements of knowledge
use within organisations might be examined, and the types of
units that could be measured to provide a systematic analysis of
KM process, practice and resources employed.

Methodologies for Measurement of Knowledge


In terms of how we evaluate business performance, everything is
related to the bottom line. Work to improve knowledge transfer,
increase people's skills and knowledge structures within the
organisation is not taken into account in bottom-line
performance measures. The process of assessing ability and
knowledge processes in the workplace is more complicated.
Perhaps what we need to measure is potential in less tangible
areas like KM and to develop a range of criteria for so doing,
similar to the specific criteria used in measuring business
performance. Organisations that are successful don't just
create, innovate and produce; they execute and complete
projects, they coordinate and manage the efforts of many people
to achieve results. Relying solely on technical know-how is
insufficient without the necessary organisational and knowledge
structures in place to apply that technical knowledge in the
field. Making organisations smart means coordinating
individuals (workers) to be more effective than they previously
were. So there is clearly a need to measure both sides of the
organisational success equation: the hard, identifiable units of
performance and the softer, difficult-to-identify aspects of KM,
organisational processes and structures. Traditional
performance areas of business, for example, those related to
growth, sales and customers, and financial results, are not
appropriate as KM measures. New measures related to the
objectives, goals, and desired outcomes have to be defined.
Research on creating ways to measure new business strategies
using knowledge or intellectual capital as the focus has been
building since the mid-1990s (see Roos & Roos, 1997). There is
a general realisation that the future earning capability of an
organisation relies on the effective deployment of knowledge
assets within the organisation. Such assets, often called
intellectual capital, capture company experience, learning,
rules and routines, and are embodied in the core competencies
of its workers.
To depict the relationships in knowledge work, Moore (1999)
has canvassed the idea of modifying existing models, such as
those used for measuring software projects for forecasting cost
and schedules. He argues that the Putnam Norden Rayleigh
(PNR) equations for productivity, delivery and defect density are
useful starting points for developing further work on metrics for
knowledge work. Although PNR equations were derived for
software development, Moore believes that they contain "a
great deal of invaluable knowledge that can be generalised for all
knowledge work" (Ch.6, p.9). A difficulty, however, is that inputs
and outputs from knowledge work are all intangible, cannot be
directly quantified, and, for such complex phenomena, there are
no standard units of measure. To avoid the dilemma of tangible
measurable units for intangible human factors, Moore (Ch.6,
p.6) focuses on three main attributes: size (e.g. the relationship
between size of a knowledge work project and its cost), effort and
duration (e.g. the relationship between manpower and time
taken to completion) of projects. However, he rather undermines
his belief in the PNR model by also stating that "inaccurate and
unreliable metrics and cost estimation methods are a primary
cause of software project schedule and cost overruns" (Ch.6,
p.10). Despite this, Moore's analysis of the different factors
associated with knowledge work derived from a number of
models2 is very useful because he points to a high degree of

2
PNR-Putnam Norden Rayleigh model, COCOMO (Constructive Cost Model)
and FPA (Function Point Analysis).

agreement, overlap and duplication among them, which serves to
support the utility and findings from the models (Ch.6, pp.13-
15).
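Moore's three attributes can be illustrated with Putnam's software equation, which ties size, effort and duration together. The sketch below is a minimal reading of that equation; the productivity constant and project figures are hypothetical, not values taken from Moore:

```python
def putnam_effort(size, productivity, t_dev):
    """Total effort K (person-years) implied by Putnam's software
    equation: size = productivity * K**(1/3) * t_dev**(4/3),
    where t_dev is development time in years."""
    return (size / (productivity * t_dev ** (4 / 3))) ** 3

# Hypothetical project: 50,000 units of 'size', a productivity
# constant of 10,000, on a two-year schedule.
relaxed = putnam_effort(50_000, 10_000, 2.0)

# Compressing the schedule to 1.5 years sharply raises the implied
# effort, since effort scales with 1 / t_dev**4.
compressed = putnam_effort(50_000, 10_000, 1.5)
assert compressed > relaxed
```

The non-linear trade-off between duration and effort is one reason the PNR equations look like a useful template for knowledge-work metrics, and equally why inaccurate estimates of any single attribute propagate so strongly into schedule and cost.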
Table 11.2 indicates some ways in which examination of
human factors in KM might be explored, including individual
and group dynamics as well as other intangibles relating to the
complexity of information production and use, and the flexibility
required for adequate social process and enhancement of
communities of practice. This table focuses on some
methodological approaches that could be used to investigate and
measure processes, specific communication uses, and attitudes
towards and perceptions of knowledge management initiatives in
organisations. Using a range of techniques, strategies for
measurement at multiple, individual and collective levels could:
Provide platforms for assessment of performance
Better position individuals or organisations for success
Give focus to operationalised goals and strategies
Set standards
Evaluate professional competencies and the efficiency of
intellectual capital
Assess past performance, in order to monitor present
performance
Identify trends in a range of areas, e.g. development,
market share, customer satisfaction and so on.

Levels of Measurement
Measurement of KM practice and process (affects and effects)
can be multi-level or single level. These levels comprise:
Individual level - explorations of effects on individuals
within an organisation structure and knowledge ecology
Group level - examination and interpretations of effects
on team work, departmental or process groupings
Organisational level - analysis of organisational
structure and benefits from KM practices and
comparisons with like organisations (in terms of output,
growth and bottom-line success).
Industry level - analysis of the effects of KM applications
on a whole sector or industry.
Saracevic & Kantor (1997a, 1997b) also approach the
measurement of library information services from a number of
levels: social, institutional/organisational, and individual. The
social level deals with providing value to the community; the
institutional/organisational level refers to the value accruing to
the institution or organisation that funds or houses and
supports the library or service; value at the individual level
relates to the perceptions of users regarding the value of the
services they use or receive.
Given that there are two major categories of methodology,
quantitative and qualitative, the table shows methods for
exploring the relationship between organisational productivity,
use of information technology, and information &
communication technologies (IT & ICT), and between
organisational productivity and Knowledge Management.
As noted by Kaplan & Norton (1996) and Arora (2002), in the
face of few systematic studies that tackle KM effectiveness
questions, the best solution might be to use multiple methods in
combination to indicate the strength and extent of progress,
where change is happening within an organisation and for which
parts of the KM program. There is a range of different
organisational settings in which KM is employed: large, small
or medium-sized enterprises, public service, non-profit or public
for-profit, and private companies. This diversity demands flexible
measurement approaches in order to take account of the
dynamics of change or movement within specific settings. A
repertoire of different techniques and research designs is
required such that appropriate choices are made depending on
organisational context and type of KM initiative, activity or
process that is to be examined and evaluated.

Paradigms, Tools and Initiatives


For some time now, companies have built up experience and
bodies of measurement data on the cumulative effect of their
efforts in knowledge initiatives in different industries and
business sectors. Reports of these efforts give ideas about what
works and what areas are not amenable within current
paradigms and tools of measurement (Housel & Bell 2001).
Although each company is in some way unique, a hope exists
that some generalisable units of analysis with accompanying
standards will emerge, but, as Housel & Bell point out, a high
tolerance for ambiguity is required by researchers, with results
being more often "suggestive rather than definitive" (p.87).
Liebowitz & Wright (1999) propose to explore human capital
value through activity-based valuation. They suggest using
aspects of traditional accounting models in which tangible
assets are valued, such as those typically used in
manufacturing environments, combined with intangible
assets not previously included in balance sheets, such as
human resources (and intellectual capital) or goodwill.
Stewart (1998) defines intellectual capital (IC) as "organised
knowledge that can be used to produce wealth".3 Organised
knowledge includes, inter alia, intellectual material such as
knowledge, intellectual property and experience. In the preface
to the paperback edition, Stewart distinguishes IC from
"intellectual working capital"; that is, the flow or torrent of
data, facts, meter readings, and so forth. A worker might need
precise, up-to-date information at any given moment, but not
necessarily at this moment. What he [the worker] does need, at
every moment, is a way to get the data he might need at any
moment. The pieces of data (or information) forming
intellectual working capital have different aims and different
measurements from IC. Knowledge management (including IC)
and knowledge databases should link people (e.g., people with
specialised expertise to people needing expertise) and they
should be about "connection, not collection".

3
See also "Measuring Intellectual Capital", a section in the general review of
various aspects of IC by Snyder & Pierce (2002, pp.479-490). This review has
over 100 references.

In Stewart's (2001) follow-up book, he stresses the need to
consider intellectual versus "bricks and mortar" or other forms of
capital in measuring the worth of corporations. He contends that
the knowledge-based economy is a new economy with new rules
and new ways of doing business. Companies mastering the
knowledge agenda are those that will succeed in the 21st
century. He states that the knowledge economy stands on three
pillars:
Knowledge has become what we buy, sell and do;
Knowledge assets (i.e. Intellectual Capital) are becoming
more important to companies than financial and physical
assets;
To prosper in the new knowledge-based economy,
businesses need new vocabularies, new management and
measurement techniques, new technologies and new
strategies.

Balanced Score Card for KM: Perspectives and Views


The Balanced Score Card (BSC) approach, a management
framework that links the strategic objectives of an
organisation with concrete measurements, offers a likely
scenario for monitoring KM progress over time. In the BSC
approach, a number of useful parameters are identified as data
capture points on a whole range of actions, processes and
behaviour. For example: the number, extent, size and frequency
of usage of different forms of information; communication
behaviour, such as choice of media, information sharing and
contribution; processes, such as updates to databases, best-practice
files, and inter-, intra- and extranets; and behaviour
related to level of interaction, type of dissemination of
information, discussion and collaboration. Also, in the BSC
approach, not all parameters need to be measured at the one
time; research can be run strategically on a continuous
evaluative basis. Some of the parameters might even be drawn
into developing a multiple index of KM for the organisation.
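The idea of drawing several parameters into a multiple index can be sketched as a weighted aggregate of normalised measures. Everything below (parameter names, normalisations, weights) is a hypothetical illustration, not a prescribed BSC scheme:

```python
def km_index(measures, weights):
    """Weighted composite of KM parameters, each pre-normalised to
    the 0..1 range against a target or baseline; result is 0..1."""
    total_weight = sum(weights.values())
    return sum(weights[name] * measures[name] for name in weights) / total_weight

# One hypothetical quarter's normalised readings.
measures = {
    "database_updates": 0.72,      # updates made / updates targeted
    "intranet_usage": 0.55,        # active users / total staff
    "best_practice_files": 0.40,   # files contributed / target
    "collaboration_events": 0.80,  # events held / events planned
}
weights = {
    "database_updates": 2,
    "intranet_usage": 1,
    "best_practice_files": 1,
    "collaboration_events": 2,
}
score = km_index(measures, weights)  # -> 0.665 for these figures
```

Tracked over successive periods, such an index supports a continuous evaluative basis without requiring every parameter to be measured at the one time.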

BSC in Human Resources


As human resources (HR) become central to the building of
intellectual capital, documentation on HR measurement systems
is appearing. Huselid et al (2001) introduce a seven-step process
for including HR systems within the firm's overall strategy: the
HR Score Card. The authors draw on their ongoing study of
nearly 3000 firms and build on the proven BSC model (Kaplan &
Norton, 1996) to show how to link HRs results to measures
such as profitability and shareholder value. Their approach
explores seven key success factors for implementation, starting
with "leading change" (who is responsible) and ending with
"making it last" (how it will be started and sustained). Huselid
and his co-authors claim to have strengthened Kaplan &
Norton's BSC by addressing the question of how best to
integrate the role of HR into an organisation's measurement of
business performance.

BSC in Electronic Commerce Contexts
Hasan & Tibbits (2000) have modified the BSC approach for
measurements of progress in electronic commerce contexts.
Similar modifications for other fields have been discussed (e.g.,
Martinsons, Davison & Tse, 1999, for Information Systems).
Thus, a modification in which KM strategies are described in
terms of concrete goals and objectives could be expected to be a
useful means to monitor aspects of KM. Indeed, modifications or
enhancements of the Kaplan & Norton (1996) approach to BSC
have been commercialised through a number of vendors, in
particular by the Balanced Scorecard Collaborative, which is
advertised as "a new kind of professional services firm
dedicated to the worldwide awareness, use, enhancement, and
integrity of the Balanced Scorecard as a value-added
management process" (see their Web site at:
http://balancedscorecard.com/).

Some Australian Cases - KM Measurement in Action


De Gooijer (2000) describes an approach for embedding
knowledge management within the overall business performance
management model of public sector organisations, and for
discerning the degree to which people use knowledge
management in their work. De Gooijer, Director of Innovative
Practice in Victoria, developed a Knowledge Management
Performance Scorecard (KMPS)4 for the Department of
Infrastructure in that state, one of the public sector agencies
formed in 1996 from the Departments of Planning, Transport,
and other major projects. The Department of Infrastructure has
about 700 staff engaged in various knowledge-consumption and
knowledge-creating activities, such as: policy development;
strategic planning; management of major projects in land use,
transport and building; research in infrastructure use, and
privatisation of infrastructure. The framework De Gooijer
developed for the agency integrated three basic approaches:
- Russell's (1995) information ecology framework, to provide a
knowledge management map that considers the organisation's
culture, structure and processes;
- Nonaka & Takeuchi's (1995) emphasis on the Japanese
knowledge-creating companies which transform tacit into
explicit knowledge; and
- Weickian sense-making quality management as a key element
in electronic work and computer-mediated communication.
Because De Gooijer (2000, p. 310) distinguishes
organisational performance from individual behaviour, the
framework developed provides for "hard" business measures
to be linked to "fuzzy" social measures in a coherent way.
Martin (2000), in looking at the problems and issues
surrounding knowledge measurement, provides a brief overview
4. The KMPS is an adaptation of the balanced scorecard (BSC)
approach by Kaplan & Norton (1996).
of some of the measurement techniques for intangible assets
and argues for the importance of metrics to the overall
process of knowledge management. Martin comments on
Sveiby's invisible balance sheet5 for the Australian management
consulting and recruitment firm, Morgan & Banks. The
difference between the net book value (A$10 million) and the
1996 market value (A$82 million) of Morgan & Banks was
calculated by Sveiby as some A$72 million in intangible assets
which were not present in the company's balance sheet. These
intangibles were classified under three broad categories:
employee competence (e.g., experience); internal structure (e.g.,
innovations, patents); and external structure (e.g., company
reputation or image).
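The arithmetic behind Sveiby's figure is simply market value minus net book value. A minimal sketch follows; the per-category split is hypothetical, since only the A$72 million total is reported.

```python
# Sveiby's "invisible balance sheet" for Morgan & Banks (1996 figures,
# in A$ millions): intangibles = market value - net book value.
market_value = 82.0
net_book_value = 10.0
intangible_assets = market_value - net_book_value  # A$72 million

# Sveiby classifies intangibles into three broad categories; the
# proportions below are hypothetical, for illustration only.
split = {"employee competence": 0.5,
         "internal structure": 0.2,
         "external structure": 0.3}

by_category = {k: round(intangible_assets * v, 1) for k, v in split.items()}
print(intangible_assets, by_category)
```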
Knowledge-based Economy and Society
The Australian government has been working on developing a
range of indicators for assessing the status of four core
dimensions of a knowledge-based economy: innovation systems,
business environment, ICT infrastructure, and human resource
development. The government's report on Australia as a modern
economy draws on comparative international data from the OECD
and national data from the Australian Bureau of Statistics (ABS)
and from the Department of Industry, Tourism & Resources (ITR,
2002).6 This report suggests strongly that Australia is
"currently among the most modern economies in the world"
(p.xi) but warns that, to maintain this status, there is a need to
continue investment in education and training, skill acquisition
and know-how, innovation, e-commerce, use of internet
technology (in various formats) and other ICTs. These factors
form the foundation of what is termed the new modern economy.
Australia's investment in knowledge assets stands at 7.7% of
GDP, representing 32% of physical investment; compared with
other OECD nations, however, this is lower than the
5. See also Sveiby (1997), Chapter 13, Implementing systems for
measuring intangible assets, p.149.
6. Formerly the Department of Industry Science & Resources (ISR).
OECD average (8.8%) and well below that in Sweden (the highest
at 11.6%), the USA (10.1%), Korea (9.7%) and several other
countries (p.xi). The ITR report credits strong investment in
ICTs as partly responsible for Australia's rising multi-factor
productivity,7 which is represented by "that part of the economy
that cannot be accounted for directly by inputs of physical
capital and labour" (p.xii). Among OECD countries, Australia is
seen as well placed for knowledge diffusion because of its
relatively high-access, low-cost Internet environment (ITR 2002).
Following quickly after the June 2002 ITR report, the ABS
released a discussion paper proposing a framework for
measuring a knowledge-based economy and society (ABS, 2002).
In this paper, the ABS presents definitions of data requirements
for building statistics or indicators of the impact of knowledge
and the use of ICTs. The proposed indicators derive their
theoretical foundation from notions of social capital. The extent
of social capital, together with intellectual, structural and
customer capital, is thought to be one of the key indicators of
Australia's prospects for success. Building community, through
communities of practice and communities of learning, interest
and purpose, is a key component in KM. Like social capital,
knowledge management depends on the stock of relationships
within a company, and on the organisational context, trust and
norms that enable the use, sharing and dissemination of
knowledge.
Conclusion
It is clear that KM is evolving rapidly as one of the management
imperatives for maintaining innovative and competitive strength
in organisations of all kinds. Approaches to measuring KM
necessarily rely on the use of a number of different conceptual
frameworks in which to couch research questions, and a diversity
of methods for data collection and empirical observation.
Adapting to KM practice some of the
7. Third highest in the OECD at 11% of GDP in 2001 (ITR 2002).
performance and productivity measures8 developed in other
areas of business management will be one solution to the KM
measurement issue; another will be to borrow other methods
from related information fields, and in particular from
information science broadly defined9 and specifically from
studies of information seeking and use.10 Australia has made a
start at the national level with the ABS disseminating their
framework for measurement of the knowledge-based economy
and society (see ABS 2002; ITR 2002). Other research in
Australia has begun to implement and modify models and
methodologies (see De Gooijer 2000; Guo 2003; Hasan & Tibbits
2000; Southon, Todd & Seneque 2002). As the field of KM
matures and becomes embedded in business practice, we
should expect to see more activity leading to publications about
successful KM measurement research both internationally and
in Australia. What is clear at this stage is that the tangible
physical aspects of KM work are being measured, such as
number of, and expenditure on, ICTs, information services (e.g.,
consultants, libraries, databases etc.), in-house information
resources, and audits of staff qualifications, skills, and training.
In the case of specific KM projects or initiatives, a useful focus
would be to develop better ways to measure their effect on
business outcomes. But KM comprises both tangible and
intangible knowledge assets and use patterns. The intangibles
underpinning the building of the intellectual and social capital
necessary for consolidating the KM advantage are the more
difficult and less traversed area for research on measurement of
KM's effects both in the short- and the long-term. Since the
future of KM requires effective methods for measuring outcomes
and evaluating processes, these will evolve as the field matures.
8. Such as those reviewed on productivity measurement by Singh,
Motwani & Kumar (2000).
9. See Footnote 5 for references on measurements in information
science.
10. See, for example, Case (2002). The book presents models of
information behaviour and examples of information seeking.
The interpretative approach requires that KM broaden its
understanding of its practice and make sense of that practice
and its achievements. It will take time to develop systematic and
well-defined units of analysis that can be compared across a range
of organisations. Over time, some standards about which units of
analysis are appropriate in which contexts will emerge; validation
of previous trials in identifying the appropriate parameters
will provide needed stability, particularly for
across-institution comparisons. Research designs that provide
robust data about KM process and practice will be identified.
Exchange of information about techniques, approaches and
experimentation with new methods of measuring will bring us
closer to the systematic work necessary for obtaining evidence of
the benefits and effectiveness of knowledge use in this emerging
area of the metrics of Knowledge Management.
References
ABS Australian Bureau of Statistics (2002). Measuring a
knowledge-based economy and society: an Australian framework
2002. Discussion Paper, no. 1375.0. Canberra: ABS. Available
electronically as a pdf file through www.abs.gov.au.
<http://www.abs.gov.au/ausstats/abs@.nsf/0/FE633D1D2B900
671CA256C220025E8A3?Open> (Last accessed May 27, 2003)
Anklam, P. (2002). Knowledge management: The collaboration
thread. Bulletin of the American Society for Information Science and
Technology, 28(6): 8-11.
Arora, R. (2002). Implementing KM: a balanced score card
approach. Journal of Knowledge Management, 6(3): 240-249. (Also
available electronically).
Balanced Scorecard Collaborative Inc. See:
<http://balancedscorecard.com/> Note: A professional services
firm dedicated to awareness, use, enhancement, and integrity of
the Balanced Scorecard as a value-added management process.
(accessed May 27, 2003).
Boyce, B.R., Meadow, C.T., & Kraft, D.H. (1994). Measurement
in Information Science. San Diego, CA: Academic Press.
Case, D.O. (2002). Looking for Information: A Survey of
Research on Information Seeking, Needs, and Behavior. New
York: Academic Press.
Cornelius, I. (1996). Meaning and Method in Information
Studies. Norwood, NJ: Ablex Pub.
Cornelius, I. (2002). Theorising Information for Information
Science. Chapter 9 in Annual Review of Information Science and
Technology, edited by B. Cronin. vol.36: 393-425.
De Gooijer, J. (2000). Designing a knowledge management
performance framework. Journal of Knowledge Management, 4(4):
303-310.
Drucker, P. F. (1993). Post Capitalist Society. New York: Harper
Collins.
Drucker, P. F. (1999). Knowledge-worker productivity: the
biggest challenge. California Management Review, 41(2): 79-94.
Guo, Zixiu (2003). Cultural Influence on Communication Media
Choice Behavior: A Cross-cultural Study within Multinational
Organisational Settings, PhD Dissertation, University of New
South Wales, Sydney.
Guo, Zixiu & D'Ambra, J. (2001). Does Culture Matter? A
Cross-Cultural Comparison of Perceptions on Information
Technology within Multinational Organisation Settings, The 6th
Asia-Pacific Regional Conference of International
Telecommunications Society, Hong Kong.
Guo, Zixiu, D'Ambra, J., & Edmundson, R. (2002).
Understanding Culture Influence on Media Choice Behavior: A
Cross-Cultural Study within Multinational Organisation Settings,
PACIS2001, Korea.
Hasan, H. & Tibbits, H. (2000). Strategic management of
electronic commerce: An adaptation of the Balanced Scorecard.
Internet Research: Electronic Networking Applications and Policy,
10(5): 439-450. Note: Text version also available at:
http://www.uow.edu.au/~hasan/aica/hasan-tibbits.htm
(accessed May 27, 2003).
Housel, T. & Bell, A.H. (2001). Measuring and Managing
Knowledge. Boston: McGraw-Hill Irwin.
Huselid, M.A., Ulrich, D. & Becker, B. (2001). The HR
Scorecard: Linking People, Strategy, and Performance. Boston,
MA: Harvard Business School Press.
Huysman, M. & De Wit, D. (2002). Knowledge Sharing in
Practice. Dordrecht, Netherlands: Kluwer Academic.
ITR Department of Industry Tourism and Resources (2002).
Australia as a Modern Economy: Some Statistical Indicators 2002.
Canberra: ITR. (Available electronically at www.itr.gov.au).
Kaplan, R.S. & Norton, D.P. (1996). Using the balanced
scorecard as a strategic management system. Harvard Business
Review, 74(1): 75-85. (Available via the web).
King, D.W. & Griffiths, J.-M. (1991). A Manual on the
Evaluation of Information Centers and Services. Prepared for
NATO/AGARD.
King, D.W. & Griffiths, J.-M. (1998). Evaluating the
effectiveness of information use, in AGARD Lecture Series No.160,
Lecture Paper no.1, Evaluating the Effectiveness of Information
Centres and Services.
Kirk, J. (1999). Information in organizations: directions for
information management. Information Research, 4(3). at
<http://informationr.net/ir/4-3/paper57.html >(accessed 19
August 2002; again 6 June 2003).
Koenig, M.E.D. (2001). Lessons from the study of scholarly
communication for the new information era. Scientometrics 51(3):
511-523.
Lehr, J.K. & Rice, R.E. (2002). Organizational measures as a
form of knowledge management: a multitheoretic,
communication-based exploration. Journal of the American Society
for Information Science and Technology, 53(12): 1060-1073.
Liebowitz, J. & Wright, K. (1999). A look toward valuating
human capital. Chapter 5 in Knowledge Management Handbook
edited by J. Liebowitz. Boca Raton, FL: CRC Press.
Martin, W.J. (2000). Approaches to the measurement of the
impact of knowledge management programmes. Journal of
Information Science, 26(1): 21-27.
Martinsons, M., Davison, R. & Tse, D. (1999). The balanced
scorecard: A foundation for the strategic management of
information systems, Decision Support Systems, 25(1): 71-88.
(Also available electronically).
Moore, C. R. (1999). Performance measures for knowledge
management. Chapter 6 in Knowledge Management Handbook
edited by J. Liebowitz. Boca Raton, FL: CRC Press.
Nonaka, I. & Takeuchi, H. (1995). The knowledge-creating
company: How Japanese companies create the dynamics of
innovation. New York: Oxford University Press.
Roos, G. & Roos, J. (1997). Measuring your company's
intellectual capital, Long Range Planning, 30(3): 413-426.
Russell, R.H. (1995). Implementing the information ecology
framework, Ernst and Young, Boston, MA (Center for Business
Innovation Working Paper). Also available at:
<http://www.cbi.cgey.com/pub/docs/Information_Ecology_Fram
ework.doc> (accessed August 29, 2002; again May 27, 2003).
Saracevic, T. & Kantor, P. (1997a). Studying the value of
library and information services. I. Establishing a theoretical
framework. Journal of the American Society for Information
Science, 48(6): 527-542.
Saracevic, T. & Kantor, P. (1997b). Studying the value of
library and information services. II. Methodology and Taxonomy.
Journal of the American Society for Information Science, 48 (6):
543-563.
Singh, H., Motwani, J. & Kumar, A. (2000). A review and
analysis of the state-of-the-art research on productivity
measurement. Industrial Management & Data Systems, 100(5):
234-241.
Southon, F.C.G., Todd, R.J., & Seneque, M. (2002). Knowledge
management in three organizations: an exploratory study, Journal
of the American Society for Information Science and Technology,
53(12): 1047-1059.
Snyder, H.W. & Pierce, J.B. (2002). Intellectual Capital.
Chapter 11 in Annual Review of Information Science and
Technology, edited by B. Cronin. vol.36: 467-500.
Stewart, T.A. (1998). Intellectual Capital: The New Wealth of
Organizations. New York: Doubleday. [paperback edition;
hardcover, 1997]
Stewart, T.A. (2001). The Wealth of Knowledge: Intellectual
Capital and the Twenty-first Century Organization. New York:
Doubleday.
Sveiby, K.E. (1997). The New Organisational Wealth: Managing
and Measuring Knowledge-based Assets. San Francisco: Berrett-
Koehler.
Tague-Sutcliffe, J. (1995). Measuring Information: An
Information Services Perspective. San Diego, CA: Academic Press.
Weick, K.E. (1995). Sensemaking in Organizations. Thousand
Oaks, CA: Sage.
Weick, K. (2001). Making Sense of the Organization. Malden,
MA, Oxford, UK: Blackwell Business.
Wilson, C.S. (1999). Informetrics. Chapter 3 in Annual Review
of Information Science and Technology, edited by M. Williams.
vol.34: 107-247.
Table 11.1. KM Process, Practice and Resources Use: Internal
vs External Layers of Measurement.

Internal Sources of Knowledge

Process (accumulated understandings etc.)
- learning (in-house training units/hours)
- routines/rules (number/currency)
- contexts (units/teams etc.)
- power relationships (internal)

Process (accumulated practice etc.)
- sharing (units)
- making-sense (proxy variables)
- meetings of internal practice communities (number and
duration, decisions)
- communication: forms of media for communication internally,
e.g. email, phone, SMS etc. (extent estimates)
- mediation (via analysis & re-formatting of K.)

Resources Use
- feedback mechanisms (number/type)
- language & stories (content analysis)
- classification systems for retrieval of internal records/
documents (retrievability indicator & re-use of K. indicator)
- internal library or archive (size of collection, acquisition
rate, circulation, expenditure etc.)

Sources (bodies of knowledge)
- documentation (in-house units)
- intra-net use (extent web-indicators)

Specialised expertise, skill etc.
- staff, capabilities & skills (type/extent/currency)
- intellectual capital (indicator of the skill and staff
available as proportion of workforce)

ICTs
- software (number/type)
- hardware (number/type)
- network (number/type)
- Internet, Intranet or Extranet capacity

External Resources

Process (explicit mechanisms)
- reporting for external consumption (units)
- success instances/awards (number/type)
- power relations with external organisations, e.g. competitors,
collaborators etc. (number/type)
- availability of ICT for capture, storage and other purposes
(extent, depth, type; cost as proportion of other knowledge
investments)

Process (tacit mechanisms)
- equivocality resolution
- change mechanisms
- notions of organisation (vis-à-vis other organisations)
- conflict, problem resolution etc.
- external relationships (extent of collaborations formed,
presentations to external audiences, products exhibited, etc.)

Resources Use
- public or external document use (number/units/currency)
- use of consultants (external) and other information-rich
services, e.g. databases (number/type/frequency of use)
- IT & ICT spending & use patterns (type/expenditure)
- use of external web pages (frequency of access/type of web
resource)
- types of external library use, e.g. inter-library loans,
clipping services, alert services, etc. (frequency/expenditure)
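To illustrate how one of the tangible indicators above, the re-use of K. indicator, might be computed in practice, the following sketch works from a hypothetical retrieval log; all document IDs and counts are invented.

```python
# A sketch of the "re-use of K." indicator from Table 11.1:
# average retrievals per internal document, plus a simple
# frequency-of-use ranking. All data below are hypothetical.
retrieval_log = ["doc-17", "doc-03", "doc-17", "doc-42", "doc-17", "doc-03"]
collection_size = 50  # total internal documents held

def reuse_indicator(log, n_docs):
    """Mean retrievals per document across the whole collection."""
    return len(log) / n_docs

def most_reused(log):
    """Rank documents by retrieval frequency (frequency-of-use counting)."""
    counts = {}
    for doc in log:
        counts[doc] = counts.get(doc, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(reuse_indicator(retrieval_log, collection_size))  # 0.12
print(most_reused(retrieval_log)[0])                    # ('doc-17', 3)
```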
Table 11.2. Methodologies for Measurement of Knowledge

QUANTITATIVE approaches

TACIT aspects: Communication & Behaviour
- Tracking Problem Trials;
- Content Analysis of Information Units;
- Collaborative Network (Work Team) Analysis.

TACIT aspects: Information Search/Retrieval
- Access and Browsing Process;
- Search Tactics: Seeking Behaviour;
- Company and Individual Learning.

EXPLICIT aspects:11 Codified & Recorded Artefacts/Items
- Informetric Methods, e.g. Documentation or Publication
productivity (both public and internal); Presentations at
Meetings, Conferences or other semi-private Forums; Re-use of
information by frequency-of-use counting; Individual/Team
productivity; Type of Document or Source Usage.
- Informetric-type Methods for Measuring Knowledge Impact and
Reach, e.g. multi-dimensional mapping of knowledge domains,
databases or the output of communities of practice; Data Mining
Techniques; Web visibility & presence; Economic indicators of
business usage of ICT.
- Productivity Indicators: comparisons of productivity;
measures of growth and quality; input-output measures, e.g.
Leontief matrices.

QUALITATIVE approaches

TACIT aspects: Social Order and Process
- Case Studies;
- Participant Observation Studies;
- Information Seeking;
- Information Use, e.g. Electronic versus Traditional Print.

TACIT aspects: Communication & Behaviour
- Use of Information and ICTs;
- Motivations for sharing, using and contributing;
- Communication Media Choice;
- Usability studies, e.g. IT or IS;
- Communities of Practice: boundary descriptions, field or
specialist territories;
- Disciplinarity: Inter-, Multi-, Pluri-;
- Network Analysis;
- Career development and skill enhancement strategies;
- Sense-making activities within specific contexts;
- Social Network Analysis: diagnosing patterns of interaction
in organisations.

EXPLICIT aspects: Usage of ICT & other Technologies
- Survey Method;
- Ethnographic Methods;
- Evaluation Methodologies, e.g. Action Research.

EXPLICIT aspects: Management
- Economic Indicators;
- Economic Implications of K Use;
- Knowledge Management policies and strategy implementation;
- General Business Policy Reform Strategies.

11. Documentation on and examples of quantitative measurements
for investigating explicit knowledge are abundant in the
discipline of Information Science. See, for example: Boyce et
al. (1994), Tague-Sutcliffe (1995), Wilson (1999). Wilson's
comprehensive review of the subdiscipline of informetrics has
over 300 references.
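Some of the qualitative techniques in Table 11.2 also admit simple quantitative treatment. As a minimal sketch of social network analysis for diagnosing patterns of interaction, the following computes degree centrality from a message log; the staff names and message pairs are hypothetical.

```python
# A minimal sketch of social network analysis (cf. Table 11.2):
# degree centrality, i.e. the number of distinct communication
# partners per person, from a hypothetical internal message log.
messages = [("ana", "ben"), ("ana", "cho"), ("ben", "cho"),
            ("ana", "ben"), ("dee", "ana")]

def degree_centrality(edges):
    """Count distinct communication partners for each person."""
    partners = {}
    for a, b in edges:
        partners.setdefault(a, set()).add(b)
        partners.setdefault(b, set()).add(a)
    return {person: len(p) for person, p in partners.items()}

print(degree_centrality(messages))
```

In a KM setting, people with unusually high centrality are candidate knowledge brokers, while isolated staff may signal weak knowledge-sharing links.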