Management Information System
IILM
Submitted To :
Miss Roma Chouhan
Submitted By :
Aditya Pradhan
Amol Srivastava
Arunangshu Pal
Apurva Agarwal
Acknowledgement
Table of Contents
1. Introduction
2. Fundamentals
2.1.1. Knowledge
3.6. Summary
6. Conclusions
7. References
1. Introduction
People have always played a major role in managing events such as emergencies, the environment, or the organisation of public or private activities. If organisations are involved in such events, the energy of the actors has to be channelled and focused. These organisations can be professional ones, such as the police, fire departments, or an emergency planning team in an emergency incident. However, they can also be non-professional ones, such as neighbourhood communities organising a street party or a group of friends going on vacation together. Since these types of organisations collaborate, they share information and knowledge.
The members of these organisations share information and knowledge among themselves by working and collaborating across different layers of intelligence, namely Personal Intelligence, Media Intelligence, Mass Intelligence, Social Intelligence and Organisational Intelligence. These layers of intelligence can be leveraged to achieve the organisations' goals, which underlines in particular the necessity of Organisational Intelligence. Organisational Intelligence is the ability of an organisation to understand and to leverage knowledge that is relevant to its goals and purpose. Consequently, the goal of Organisational Intelligence is to bring the right piece of knowledge, at the right time, to the right person, in order to support decision making and so best accomplish the organisation's purpose. This raises the question of how knowledge is managed and leveraged in organisations.
2. Fundamentals
This section is divided into two parts. The first part introduces and discusses the basic concepts of knowledge and knowledge management. Knowledge is the foundation of Organisational Intelligence, as it is crucially needed by an organisation's knowledge management to assess situations and to make decisions that accomplish the organisation's purpose.
Knowledge
To be able to manage "knowledge", an understanding of the basics of this very generic concept is needed. There have been many investigations into the nature of knowledge, but one of the most frequently used explanations in the computer science literature is the following:
Data are raw signals or facts like digits, characters, or media streams, e.g., a video from a CCTV camera with a view of a street. They can be handled without consideration of their meaning. Information, in contrast, is data which has a meaning. For example, the video might contain the information that at 2 pm a particular street is flooded. Finally, knowledge has, besides its meaning, also a purpose within the organisation and can be used to achieve specific goals. For example, the purpose of the CCTV video could be to recognize that the traffic has to be rerouted because of a flooded street. Since our focus is on supporting organisations, consideration is given only to data or information which can have a benefit for organisational processes.
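The data/information/knowledge distinction above can be sketched as a small, purely illustrative Python model; the class and field names are our own invention, not taken from the literature, and each layer simply wraps the previous one while adding meaning and then purpose:

```python
# Illustrative sketch of the data -> information -> knowledge hierarchy.
# All names here are assumptions made for this example.

from dataclasses import dataclass

@dataclass
class Data:
    """Raw signals or facts, handled without regard to meaning."""
    raw: str

@dataclass
class Information:
    """Data plus an interpreted meaning."""
    data: Data
    meaning: str

@dataclass
class Knowledge:
    """Information plus a purpose within the organisation."""
    information: Information
    purpose: str

# The CCTV example from the text:
frame = Data(raw="cctv_stream_frame_1400")
flooded = Information(data=frame, meaning="street flooded at 2 pm")
reroute = Knowledge(information=flooded, purpose="reroute traffic")
```

The point of the sketch is only that knowledge cannot exist without the underlying information, and information cannot exist without the underlying data.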
Another perspective on knowledge, besides the investigation of its nature and its relation to data and information, is to find out where the organisational knowledge is situated. A common systematisation is to distinguish between tacit and explicit knowledge. The tacit knowledge of an organisation is situated in its members and consists of cognitive and technical elements. While the cognitive elements are the mental models of an individual (e.g., a personal rating of a specific hotel), the technical elements comprise know-how and context-dependent skills (e.g., bargaining for the room rate). Explicit knowledge, on the other hand, is always codified or communicated in symbolic form or some language.
6|Page
An information system will never be able to deal with tacit knowledge directly. At the very moment it is entered into the system, tacit knowledge becomes explicit knowledge. Furthermore, it is not reasonable to make all knowledge explicit, since this can be a labour-intensive task. On the other hand, tacit knowledge can be managed with the help of information systems by "managing" the persons who possess it, i.e. by managing knowledge about the tacit knowledge available (e.g., with "yellow pages", or by rearranging teams). This knowledge about knowledge is called meta-knowledge. It can be treated (e.g., made explicit) like other kinds of knowledge, but it retains a connection to the knowledge it is about.
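The "yellow pages" idea of meta-knowledge can be sketched as follows; the people, skills and function name are invented for illustration. The system stores only who possesses which tacit knowledge, not the knowledge itself:

```python
# Illustrative "yellow pages": meta-knowledge mapping persons to the
# skills (tacit knowledge) they possess. All data here is invented.

yellow_pages = {
    "alice": {"flood response", "evacuation planning"},
    "bob": {"hotel bargaining", "trip planning"},
}

def find_experts(skill, directory):
    """Return the people whose tacit knowledge covers the given skill."""
    return sorted(name for name, skills in directory.items()
                  if skill in skills)

# Locate the right person instead of codifying their know-how:
experts = find_experts("evacuation planning", yellow_pages)
```

Such a directory lets the organisation route a problem to a knowledgeable person, which is often cheaper than making that person's know-how explicit.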
Both types, explicit as well as tacit knowledge, cannot be managed independently of each other. For example, tacit knowledge is needed in order to apply explicit knowledge. Furthermore, there is evidence that combining support for both types of knowledge brings the best outcome.
After giving this short overview on knowledge, the following paragraph describes and defines the
concept of knowledge management.
Knowledge Management
The definition of knowledge management has evolved over time, and different definitions emphasize different activities of knowledge management.
The organisation plays a central role in knowledge management, since it is the instance which defines the goals and the processes that the actors it comprises have to fulfil. In contrast to organisational knowledge management, such a shared purpose does not exist for personal knowledge management.
Organisations can be divided into professional and non-professional ones. Professional organisations
have a set of strong and often legally enforceable rules and boundaries (e.g., enterprises or
governmental agencies).
In the consumer social group use case, travel operators or airlines can be considered professional organisations. In contrast, a non-professional organisation is usually a loosely coupled, non-permanent association of persons. Examples of non-professional organisations in the emergency response use case are neighbourhood communities or families; in the consumer social group use case, an example is a clique of friends going on a trip together. Agents (i.e. persons) can participate in both of these kinds of organisations as members, but they can also have contact with an organisation as individuals.
The authors ground their framework in the experience they gained while consulting enterprises on knowledge management. The core knowledge management processes include:
• Sharing and distributing knowledge to ensure that knowledge is available where it is needed;
• Using knowledge, i.e. to ensure knowledge application for the benefit of the organisation;
• Preserving knowledge to guarantee that important knowledge remains available for a desired time period.
These core processes are complemented by the following knowledge management processes in
order to implement knowledge management in an organisation:
• Defining knowledge goals, to emphasise the important knowledge management processes for
each scenario;
• Measuring knowledge, to verify the success of the undertaken activities for improving knowledge
management.
• Exploring knowledge and its adequacy, which comprises the survey and description of knowledge and related activities as well as the elicitation, codification and organisation of knowledge;
• Managing knowledge actively through the synthesis of knowledge-related activities; through the handling, usage and control of knowledge; and through leveraging, distributing and automating knowledge.
Schreiber et al. suggest in their book a specific methodology for the development of information systems supporting knowledge management. They propose a framework of knowledge management related activities similar to the ones presented above:
• Distribute the available knowledge;
• Foster the use and application of knowledge in the core processes of the organisation;
When comparing the frameworks of knowledge management related activities shown above with other existing approaches, one realises that there is a large overlap in the proposed processes. For example, the process of knowledge creation is part of most knowledge management frameworks, while knowledge maintenance is rarely found as a distinct process. Therefore, we decided to use for our further work the following generic activities, which most authors have included in their frameworks.
Knowledge Creation
The process of knowledge creation brings new knowledge into existence, useful for solving problems or making decisions that were not possible before. These creation and recreation processes are initiated by organisational processes and needs. By considering the organisation's environment as a source of knowledge, one might also subsume knowledge acquisition under knowledge creation. Nevertheless, the focus here lies on internal knowledge creation. Nonaka distinguishes between four types of knowledge creation, based on the assumption that knowledge is created by transformation between explicit and tacit knowledge.
• Externalization means the codification of internal tacit knowledge to an external medium, e.g., writing down best practices.
• Internalization is done by learning from explicit knowledge, e.g., reading text documents to gain insights into similar problem areas.
• Combination is the (re)orchestration of existing explicit knowledge into new explicit knowledge, e.g., into a survey report.
• Socialisation is the creation of new tacit knowledge through social interaction (e.g., apprenticeship).
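Nonaka's four modes form a 2x2 matrix over the source and target form of the knowledge. The following small table is a direct restatement of the bullets above (the function name is ours):

```python
# Nonaka's four modes of knowledge creation as a mapping from
# (source form, target form) to the conversion mode.

SECI = {
    ("tacit", "explicit"): "externalization",   # e.g., writing down best practices
    ("explicit", "tacit"): "internalization",   # e.g., learning from documents
    ("explicit", "explicit"): "combination",    # e.g., compiling a survey report
    ("tacit", "tacit"): "socialisation",        # e.g., apprenticeship
}

def conversion_mode(source, target):
    """Name the knowledge-creation mode for a given conversion."""
    return SECI[(source, target)]
```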
Knowledge Storage and Retrieval
Knowledge retrieval is used to support efficient access to the stored knowledge. Intuitive query languages, for example, can improve the retrieval of explicit knowledge stored in an information system.
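The retrieval idea can be sketched with a minimal inverted index over stored documents; a real system would add ranking, stemming and a richer query language, and the documents here are invented examples:

```python
# Minimal sketch of keyword retrieval over explicit knowledge:
# an inverted index maps each term to the documents containing it.

from collections import defaultdict

documents = {
    1: "evacuation plan for flooded streets",
    2: "hotel booking best practices",
    3: "street party organisation checklist",
}

# Build the inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def retrieve(query):
    """Return ids of documents containing every query term."""
    result = None
    for term in query.split():
        hits = index.get(term, set())
        result = hits if result is None else result & hits
    return sorted(result or [])
```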
Knowledge Transfer
The process of knowledge transfer is needed to provide parts of the organisation with knowledge which was previously available only to other parts. Participants in organisational knowledge transfer can be individuals and/or sub-organisations. Knowledge transfer can be facilitated and supported by information systems such as corporate directories, discussion databases, organisational knowledge maps or video technologies.
Knowledge Application
The purpose of knowledge does not lie in itself. In order to gain benefit from knowledge by solving problems with its help, it has to be applied and used. One method to support knowledge application is the creation of self-contained task teams, which comprise a selection of members with useful knowledge about a task, even if the process of how to solve the problem is not clear. Another important method to facilitate the application of knowledge is to create directives or organisational routines. These can be formally captured in an information system in order to allow the enactment of workflows that accomplish organisational tasks.
Summary
This section introduced the concept of knowledge and presented the fundamentals of knowledge management. To this end, the four central processes of knowledge management, namely knowledge creation, knowledge storage and retrieval, knowledge transfer, and knowledge application, were identified and described. As technologies and applications develop rapidly, support for knowledge management also changes quickly.
1. Knowledge Syndication
In the process of knowledge syndication, individual users have the ability to publish their opinions, experience, and knowledge to a broad community of recipients. The recipients can randomly access the published information or subscribe to it (receiving notifications when new information is published). In knowledge syndication, the producer of the knowledge is typically known to the recipients.
Such systems allow their users to carry out syndication of knowledge very easily, thus enabling lay-users to become publishers (see the inversion of the consumer-producer role above). Whereas blogs allow readers to provide feedback to the publisher in the form of comments and discussions, podcasts and newsfeeds typically do not. For example, podcasts and newsfeeds could be used to regularly inform and update citizens with the latest official information about an emergency situation. The information broadcast through podcasts and newsfeeds could include evacuation plans and directions, allowing citizens to assess the situation and decide what to do. This might be further personalised with respect to, e.g., specific neighbourhoods.
possible, thus all feedback, hints, answers, and solutions provided are visible to all users of the
community.
Knowledge sharing typically comes in combination with the possibility of creating meta-knowledge
and sharing it. Such meta-knowledge comprises descriptions of the pieces of knowledge provided by
the users, typically through the use of tags. Tags can be added by the authors themselves; however,
they can typically also be added by other users who have access to that knowledge. The activity of
tagging results in the so-called folksonomies and personomies.
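The tagging notions just introduced can be sketched in a few lines; the tag events below are invented, and we model each tagging act as a (user, resource, tag) triple. Aggregating over all users yields the folksonomy, while restricting to one user yields that user's personomy:

```python
# Sketch of folksonomy vs. personomy from (user, resource, tag) triples.
# All data here is illustrative.

from collections import Counter

tag_events = [
    ("alice", "hotel_rating_page", "travel"),
    ("bob", "hotel_rating_page", "travel"),
    ("bob", "hotel_rating_page", "budget"),
    ("alice", "evacuation_map", "emergency"),
]

def folksonomy(events):
    """Tag frequencies over the whole mass of users."""
    return Counter(tag for _, _, tag in events)

def personomy(events, user):
    """The collection of tags created by a single user."""
    return Counter(tag for u, _, tag in events if u == user)
```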
Folksonomies are collections of tags created by a mass of users. The collection of tags created by an individual is termed a personomy. Knowledge sharing can also be incorporated into domain-specific portals with a focus on providing services. Successful examples of such systems are recommender portals like Tripadvisor and HRS (Hotel Reservation Service). Among other services (e.g., booking hotels or flights), they provide users with the functionality to rate the quality of services they have experienced. This knowledge is then summarized and provided in the form of recommendations to other users interested in, e.g., booking a room in a specific hotel. In that respect, the investigation of knowledge flows within such systems may be used to identify influential users and improve recommendations.
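How such a portal might summarize individual ratings into recommendations can be sketched as follows; the hotels, scores and threshold are invented for illustration:

```python
# Sketch of a recommender summarizing user ratings, as described above.

ratings = {
    "Hotel Alpha": [5, 4, 5],
    "Hotel Beta": [2, 3, 2],
}

def recommend(ratings, threshold=3.5):
    """Return hotels whose average rating clears the threshold,
    best first."""
    averages = {hotel: sum(rs) / len(rs) for hotel, rs in ratings.items()}
    return sorted((h for h, avg in averages.items() if avg >= threshold),
                  key=lambda h: -averages[h])
```

Real portals of course weight ratings by recency, reviewer reputation and other signals, but the core step of aggregating shared experience into a recommendation is the same.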
Although knowledge is also shared through systems such as wikis in the collaborative knowledge
creation process and the Q&A systems in the collaborative knowledge exchange process, the main
difference to knowledge sharing is that here the users still possess their own contributed knowledge.
The identity of the users may be known in the knowledge creation and exchange processes and may
be associated with portions of the collective knowledge created, e.g., specific paragraphs in Q&A
systems.
However, aspects such as the maturation of a wiki page or the direction a discussion takes in a Q&A system are not under the direct control of the users. The identity of the users with respect to the created knowledge may even vanish completely, as happens in the case of wiki sites.
5. Social Networking
In social networking we consider the process of users getting together in virtual communities. Users
typically provide some personal information such as interests and affiliation(s) and share it with the
community. In addition, the users can explicitly state that there is a connection between themselves
and other users (contacts). The users can be connected by links of different kinds, e.g., they can be
friends, collaborators, or university mates. The amount of information granted to the community as
well as to the contacts can typically be individually controlled. The contacts of a user can also be
traversed transitively, i.e., one can explore the contacts of a contact. Having meta-knowledge about the users in a social networking application, such as skills and previous employers, enables answering queries such as "I need to hire an expert on service-oriented architectures and this person should know someone I know." Thus, social networking allows the creation and expression of knowledge as well as the search for it.
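The "expert who knows someone I know" query above amounts to a transitive traversal of the contact graph filtered by skills meta-knowledge. A minimal sketch, with all data invented:

```python
# Sketch of a friends-of-friends expert search over a contact graph.
# The contacts, people and skills are illustrative.

contacts = {
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": {"dave"},
}
skills = {
    "carol": {"service-oriented architectures"},
    "dave": {"project management"},
}

def reachable_experts(start, skill, contacts, skills):
    """People with the given skill among the contacts of my contacts."""
    second_degree = set()
    for friend in contacts.get(start, ()):
        second_degree |= contacts.get(friend, set())
    return sorted(p for p in second_degree
                  if skill in skills.get(p, set()))
```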
Today's applications for social networking, such as Facebook and MySpace, typically focus on end users. However, they can also be targeted at professionals, either in their role as individuals, as in the case of LinkedIn or Xing, or as members of specific organisations, such as emergency response personnel with a Sahana-based portal.
Considering social networking with respect to the traditional processes of knowledge management, the main relation can be seen with knowledge creation (the social network itself) and knowledge storage and retrieval (finding persons one is interested in). A secondary purpose of social networking is the transfer of knowledge.
6. Knowledge Orchestration
Finally, the process of knowledge orchestration involves combining different open infrastructures and merging different resources of knowledge to create something new and to provide new or better insights into the knowledge. The combination of knowledge can be pre-defined or pre-implemented by the system provider, or it can be conducted by the users themselves. A frequently used application of knowledge orchestration is providing the possibility to better explore the knowledge and its combinations. This can be achieved by, e.g., better visualization of the knowledge through maps, timelines, diagrams, and others (a very interesting and recent example is Freebase Parallax, an add-on to Freebase) and by allowing faceted, blended browsing and querying of the knowledge. Orchestrated knowledge that uses and combines different sources can itself be used again as a knowledge source.
The process of knowledge orchestration allows for knowledge creation through the combination of existing resources. The goal of this combination is knowledge transfer and knowledge application in the sense defined above. Transfer of knowledge means that by accumulating the knowledge and presenting it through different visualizations to others, it can be perceived and acquired. The transfer of this knowledge is driven by a purpose. This purpose is application dependent and can be, e.g., in an emergency response situation, to decide from flooded or blocked streets visualized on a map what the best evacuation route for citizens will be, or how emergency entities can best approach the incident. Thus, a purpose is applied to the transferred knowledge.
3. Established Knowledge Management Systems and Applications
The purpose of this section is to present existing solutions used for knowledge management. The
following systems and applications are described:
3.1. Expert Systems
The beginnings of Expert Systems can be traced back to the mid-1960s, when scientists were trying to answer the question of whether a computer can use rules to extract answers from an information base and explain how it did this. Edward A. Feigenbaum was one of the first scientists who tried to answer this question by constructing an artificial expert. In 1965, Bruce Buchanan, Lederberg and Feigenbaum began working on Dendral, the first expert system. In the mid-1970s at Stanford University, MYCIN was developed by Edward H. Shortliffe. This program diagnosed a certain class of bacterial infections.
"An expert system is a computer program intended to embody the knowledge and ability of an expert in a certain domain." Such programs contain a set of rules for analyzing information about a specific class of problems in order to recommend one or more courses of action to the user.
Expert systems emerged and are used for several reasons, mainly related to the imperfections of human experts, more specifically:
• proneness to stress
• high costs
Expert systems can be trusted only to some extent; therefore, they are often used in parallel with human experts. Their main limitations are:
The basic components of an expert system are a knowledge base (KB), an inference engine and a user interface. The information to be stored in the KB is obtained by interviewing people who are experts in the area in question. The interviewer, or knowledge engineer, organizes the information elicited from the experts into a collection of rules, typically of an 'if-then' structure. Rules of this type are called production rules. The inference engine enables the expert system to draw deductions from the rules in the KB. The user interface requests information from the user and outputs intermediate and final results. In some expert systems, input is acquired from additional sources such as databases and sensors.
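The interplay of production rules and the inference engine can be sketched as a tiny forward-chaining loop: rules whose 'if' conditions hold fire and add their conclusion to the known facts, until nothing new can be derived. The rules below are invented examples, not taken from MYCIN or Dendral:

```python
# Minimal sketch of a forward-chaining inference engine over
# 'if-then' production rules. Rules and facts are illustrative.

rules = [
    ({"street flooded"}, "traffic must be rerouted"),
    ({"traffic must be rerouted", "rush hour"}, "alert radio stations"),
]

def infer(facts, rules):
    """Fire every rule whose conditions are satisfied, repeating
    until the set of known facts stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"street flooded", "rush hour"}, rules)
```

Note how the second rule only fires after the first one has added its conclusion, which is exactly the chained reasoning an inference engine performs over the KB.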
The following general points about expert systems and their architecture can be made.
1. The sequence of steps taken to reach a conclusion is dynamically synthesized with each new case.
It is not explicitly programmed when the system is built.
2. Expert systems can process multiple values for any problem parameter. This permits more than
one line of reasoning to be pursued and the results of incomplete (not fully determined) reasoning
to be presented.
3. Problem solving is accomplished by applying specific knowledge rather than specific technique.
This is a key idea in expert systems technology. It reflects the belief that human experts do not
process their knowledge differently from others, but they do possess different knowledge. With this
philosophy, when one finds that their expert system does not produce the desired results, work
begins to expand the knowledge base, not to re-program the procedures. In the expert system
approach, all problem solving related expertise is encoded in data structures, i.e. not within the
programs.
This organisation has several benefits pertaining to the programs of the expert system:
• They serve to process the data structures without regard to the nature of the problem area they
describe.
Examples of Expert Systems are Dendral and MYCIN, which were mentioned earlier. Dendral was responsible for establishing the molecular structure of unknown organic chemical compounds. It was developed using Interlisp (a programming environment built around the Lisp programming language). MYCIN, developed like Dendral at Stanford University, had the task of diagnosing bacterial diseases of the blood and proposing appropriate treatment.
Another expert system worth mentioning is the STD Wizard. This is a publicly available expert system which gives recommendations related to sexually transmitted diseases. Based on answers to questions about demographics, behaviours and symptoms, it recommends screening tests (such as an HIV screening test), vaccinations or evaluations. The STD Wizard was developed by the Medical Institute for Sexual Health and Expert Health Data Programming, Inc., with funding from the Centers for Disease Control and Prevention and the Association for Prevention Teaching and Research.
3.2. Knowledge Directories
Knowledge directories are structures in which pieces of knowledge are placed in a proper folder or in categories of a hierarchical tree. The idea originates directly from the directory structures of operating systems. Such directories are typically shared among multiple users.
The term “knowledge directory” is often considered as equivalent to Web Directory. Web directories
are categorised directories available on the Internet or in local networks. They may either contain
real documents or (more commonly) external links.
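A knowledge directory's hierarchical category tree can be sketched as nested dictionaries whose leaves hold documents or external links; the categories and entries below are invented for illustration:

```python
# Sketch of a knowledge directory: a category tree with documents
# or external links at the leaves. All entries are illustrative.

directory = {
    "Travel": {
        "Hotels": ["http://example.org/hotel-guide"],
        "Flights": ["http://example.org/flight-tips"],
    },
    "Emergency": {
        "Flood response": ["evacuation_plan.pdf"],
    },
}

def lookup(path, tree):
    """Walk a category path like ['Travel', 'Hotels'] to its entries."""
    node = tree
    for category in path:
        node = node[category]
    return node
```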
Recently, intensive growth has been taking place in the fields of semantic technologies and ontology representation and processing. These technologies may also be useful for knowledge directories. There are attempts to semantically integrate knowledge pieces inside directories and also to add automated elements to the web directory building process.
The purpose of using knowledge directories is to persist, share and categorise knowledge. Internal knowledge directories, which often hold actual documents (not only references), can be treated as a form of organisational knowledge repository. Compared to Web Search Engines, Web Directories typically have a smaller web reference base, but they often allow finding useful information in a more convenient way, because they:
• Offer content that is filtered and often ranked.
Present-day knowledge directories can be categorised based on several aspects:
• Place of content:
Indexes: the directory server contains only references to the actual content; typically this is a structured index of web pages.
• Access:
• Subject:
Free: Placing a reference to a web page is free of charge.
Semi-Commercial: References are included free of charge; special emphasis on a reference (for example, placing it higher in a result list) is charged for.
• Web Search Engines: Competition or an alternative to Web Directories. They are based on machine algorithms, which give them a huge advantage in terms of speed (of indexing existing resources) and a huge disadvantage in terms of understanding the content and properly classifying it.
• Open Directory Project
- the biggest multi-purpose open web directory project. It is an open catalogue of web pages, kept on servers of the Time Warner Corporation, in which the category tree and all entries are edited by a community of volunteers. The idea of the project was to create a spacious, free-of-charge and constantly developing catalogue of WWW sites in which every entry is checked and described by a human.
• Google Directory
- a popular web directory, based on the Open Directory Project and enhanced with proprietary Google solutions.
• Yahoo Directory
- one of the oldest, still-developing web directories. It rivals the Open Directory Project in size. Yahoo uses the Yahoo Directory in order to add relevancy to its search; for the same reason, Google uses the Google Directory.
- an example of a software solution for building one's own knowledge directory. Browsing and editing can be performed by means of a web user interface or a Java API.
3.3. Data Warehouses
A data warehouse is a kind of database which maintains a copy of the transaction data. In contrast to the original database, this data is "specifically structured for query and analysis". Typical reasons for using a data warehouse are:
• To perform server/disk bound tasks associated with querying and reporting on servers/disks not
used by transaction processing systems.
• To use data models and/or server technologies that speed up querying and reporting and that are
not appropriate for transaction processing.
• To provide a repository of "cleaned up" transaction processing systems data that can be reported
against and that does not necessarily require fixing the transaction processing systems.
• To make it easier, on a regular basis, to query and report data from multiple transaction processing
systems and/or from external data sources and/or from data that must be stored for query/report
purposes only.
• To provide a repository of transaction processing system data that contains data from a longer
span of time than can efficiently be held in a transaction processing system and/or to be able to
generate reports "as was" as of a previous point in time.
• To prevent persons who only need to query and report transaction processing system data from having any access whatsoever to transaction processing system databases and the logic used to maintain those databases.
Despite the many reasons to use a data warehouse, it is worth remembering its main disadvantages:
• Over the life of a data warehouse, its usage costs may rise. Maintenance costs are very high
because of the amount of data gathered in the warehouse.
In the architecture of a data warehouse, we can point out the following interconnected layers:
• Operational database layer – "the source data for the data warehouse";
• Informational access layer – the data for reporting and analysis, along with the tools that help to analyse it and make reports. Business intelligence tools are part of this layer.
• Data access layer – this is the interface between operational and informational access layer. Tools
used for extracting, transforming and loading data into the warehouse fall into this layer.
• Metadata layer – this is a data directory.
In the context of storing data in a data warehouse, two leading approaches are worth mentioning:
• Dimensional approach – transaction data are partitioned into "facts" (numeric transaction data) and "dimensions" (reference information giving context to the facts). The advantage of this approach is that the data warehouse becomes easier for the user to understand and to use. It also gives quicker retrieval of data from the data warehouse.
• Normalized approach – "the data in the data warehouse are stored following, to an extent, Codd's normalization rule (see [Dat08]). Tables are grouped together by subject areas that reflect general data categories (e.g., data on customers, products, finance, etc.)". The main advantage of this approach is that it is straightforward to add information to the database.
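The dimensional approach can be sketched in a few lines: a fact table of numeric transactions references a dimension table that supplies context, and typical queries aggregate the facts grouped by a dimension attribute. The customers, amounts and function name are invented for illustration:

```python
# Sketch of a star-schema style query: aggregate numeric "facts"
# along an attribute of the "customer" dimension. Data is illustrative.

customers = {  # dimension table: id -> context attributes
    1: {"name": "ACME", "region": "north"},
    2: {"name": "Globex", "region": "south"},
}

sales_facts = [  # fact table rows: (customer_id, amount)
    (1, 100.0),
    (1, 50.0),
    (2, 75.0),
]

def sales_by_region(facts, customer_dim):
    """Total the fact amounts grouped by the region dimension attribute."""
    totals = {}
    for customer_id, amount in facts:
        region = customer_dim[customer_id]["region"]
        totals[region] = totals.get(region, 0.0) + amount
    return totals
```

This separation is what makes dimensional warehouses easy to query: the facts stay lean and numeric, while all descriptive context lives in the dimension tables.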
• DWQ (Data Warehouse Quality) – it is a European project focusing on the quality of data
warehousing. Authors of this project perceived a data warehouse as a buffer. The DWQ project
developed techniques and tools to support the rigorous design and operation of data warehouses .
• DAWN (The Data Warehouse of Newsgroups) – in this project authors model each Newsgroup as a
view over the list of all articles.
• DWPP (Data Warehouse Population Platform) – a project carried out in 2003 at Telecom Italia. It is a set of modules whose aim is to resolve typical issues arising during the transformation and loading of vast amounts of data into a data warehouse.
3.4. Workflow Systems
The notion of workflow itself is not related to any particular domain, but since the last decade of the 20th century we have been observing intensive growth of so-called workflow systems. Workflows are oriented more towards data processing than storage, but recently there has been a trend to create solutions integrating data storage and processing in a workflow mode. For example, Oracle has added "job queues" functionality to its database, allowing formal definition and organisation of complex data processing.
including recovery and reporting. The workflow system notion is connected to the concepts of BPM (Business Process Modelling) and SOA (Service-Oriented Architecture). Workflow systems can be used to support efficient cooperation between the persons, software applications and machines taking part in processes.
• Integration of multiple working elements in one flow. Passing control information and data
between those elements.
From the user or administrator perspective, a workflow system usually consists of:
• Modelling language: A formal notation used to describe and store process data and model.
• Execution framework: An engine that can read, execute and control the execution of process descriptions encoded using the modelling language.
• Visualisation: Though a modelling language and an execution framework could by themselves be enough to build a complete workflow system, visualisation – including visual flow editing, visual flow control, and output data visualisation – is the actual power of modern workflow systems.
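The first two components above, a process model and an engine that executes it, can be sketched minimally as follows; the step names and data are invented, and the "modelling language" here is simply an ordered list of steps:

```python
# Minimal sketch of a workflow engine: the process model lists named
# steps, and the engine runs them in order, passing data along.
# All step names and data are illustrative.

def extract(data):
    data["records"] = ["r1", "r2"]          # pull source records
    return data

def transform(data):
    data["records"] = [r.upper() for r in data["records"]]  # process them
    return data

def load(data):
    data["loaded"] = True                   # deliver the result
    return data

process_model = [extract, transform, load]  # the "process description"

def run_workflow(model, data=None):
    """Execute each step of the process model in sequence."""
    data = data if data is not None else {}
    for step in model:
        data = step(data)
    return data

result = run_workflow(process_model)
```

Real engines add what this sketch omits: persistence, branching and joining of flows, error recovery, and the visualisation layer described above.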
The implementation of the workflow elements described here varies among available solutions. Actual workflow system structures depend mostly on the workflow's purpose, and though workflows can potentially be used for multiple purposes, they are currently most popular in [Hol95], [Wik08r]:
• science
• business/management
• software engineering
Modern sciences, especially disciplines like physics, biology, chemistry and meteorology, often require the processing of large amounts of data for simulation purposes or to process experiment results. It is common for such processing to include the cooperation of multiple systems – often with completely different interfaces and access modes – and it is also common for such experiments to be repeated multiple times. Scientific workflow systems, such as Kepler, are mostly used for connecting, controlling and passing large amounts of data between separate systems and applications. One recent trend is to use grid technology for scientific workflows; see for example GriPhyN or the GridDB Project.
In business applications, workflows are mostly required for defining, controlling and automating repeatable tasks and processes. Modern business support applications include their own workflow engines that allow easy adjustment of the application to client requirements, e.g., Software Mind CRM.
Workflows are currently popular in software engineering, mainly for three reasons:
• They allow defining, viewing and modifying the application's business logic at a higher abstraction level, independently of implementation details.
• They allow easier and more elegant integration of applications that consist of multiple components or even heterogeneous systems.
• Most of them contain integrated components for common tasks; for example, they often allow calling a remote web service without writing any code.
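The first reason above, keeping business logic at a higher abstraction level than the implementation, can be sketched as follows. The task types, flow format and engine are our own illustrative assumptions, not any real workflow product's API: the point is that the flow itself is pure configuration that can be viewed and modified without touching the task implementations.

```python
# Built-in task types provided by the (imagined) engine; implementation
# details live here and only here.
TASK_TYPES = {
    "prefix":    lambda cfg, value: cfg["text"] + value,
    "uppercase": lambda cfg, value: value.upper(),
}

# The business logic is pure configuration: an ordered list of task
# descriptions, with no implementation code at all.
FLOW = [
    {"type": "prefix", "text": "order-"},
    {"type": "uppercase"},
]

def execute(flow, value):
    """Run each configured task in order, threading the value through."""
    for task in flow:
        value = TASK_TYPES[task["type"]](task, value)
    return value

print(execute(FLOW, "42"))   # ORDER-42
```

Reordering, adding or removing steps means editing `FLOW`, not code, which is what "modifying business logic independently from implementation details" amounts to in practice.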
Workflow systems for software engineering are already available on the market. Among the most popular are:
• Windows Workflow Foundation: A powerful, popular workflow system for the .NET platform.
• Enhydra Shark: A popular open-source Java workflow engine based on the XPDL language.
• BEA WebLogic Integration: A commercial solution for J2EE application integration, using J2EE standards.
Currently, one of the biggest problems of workflow systems is the lack of widely accepted standards; most workflow systems are still autonomous applications. The most important standardisation organisation for workflow systems is the Workflow Management Coalition (WfMC), supported by many companies from the business software sector. One WfMC standard, XPDL, is currently the most popular and will probably come to be considered the standard workflow modelling language.
“Groupware (also referred to as collaborative software or workgroup support system) is a
technology designed to facilitate the work of groups. This technology may be used to communicate,
cooperate, coordinate, solve problems, compete, or negotiate.”
Groupware systems allow users to overcome common problems which emerge during teamwork. They are especially well suited for non-collocated teams. Groupware services include the sharing of calendars, collective writing, email handling, shared database access, electronic meetings in which each person can see and display information to the others, and other activities.
Two mechanisms are fundamental when many users work on shared resources:
• Synchronisation of resources – it is crucial that all collaborating users see the same version of a resource.
• Locking of resources – to prevent concurrent writes, a resource must be locked while one user has write access to it (an alternative is a merging mechanism, which allows users to combine changes made at the same time).
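The locking mechanism described above can be sketched in a few lines. The class and method names are our own illustrative assumptions, not any groupware product's API: a second user asking for write access while the resource is locked is simply refused until the first user releases it.

```python
import threading

class SharedDocument:
    """A shared resource that grants write access to one user at a time."""

    def __init__(self, text=""):
        self.text = text
        self._lock = threading.Lock()
        self._owner = None

    def acquire(self, user):
        """Give `user` exclusive write access, or fail if already locked."""
        if self._lock.acquire(blocking=False):
            self._owner = user
            return True
        return False

    def write(self, user, text):
        if self._owner != user:
            raise PermissionError(f"{user} does not hold the lock")
        self.text = text

    def release(self, user):
        if self._owner == user:
            self._owner = None
            self._lock.release()

doc = SharedDocument()
assert doc.acquire("alice")
assert not doc.acquire("bob")     # bob is refused: the resource is locked
doc.write("alice", "draft v1")
doc.release("alice")
assert doc.acquire("bob")         # now bob may edit
```

A merging mechanism would instead accept both edits and reconcile them afterwards, which is the approach taken by collaborative editors where pessimistic locking would be too disruptive.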
The latest groupware applications facilitate teamwork even if users use different applications – for example, a groupware application can synchronise calendars maintained by Microsoft Outlook (Windows) and Kontact (Linux/KDE).
Groupware can be classified along a temporal dimension, i.e., whether its users work together at the same time ("real-time" or "synchronous" groupware, e.g., shared whiteboards, video communications, chat systems, decision support systems, multi-player games) or at different times ("asynchronous" groupware, e.g., email, newsgroups and mailing lists, workflow systems, hypertext, group calendars, collaborative writing systems).
Groupware can also be divided into three categories by main purpose:
• Communication tools: using these tools, people can send and receive messages, files or other data.
• Conferencing tools: using these tools, people can share information and data and interact.
• Collaborative management tools: these tools help to manage group activities.
Recently, new web tools supporting collaborative work have emerged, e.g., Google Documents or Google Calendar. New instant messaging applications are becoming popular, and more and more people are using them. Examples of groupware implementations are:
• Instant Messaging applications (like Skype, Jabber, etc.)
• Access Grid
• eGroupWare
• Group-Office
• Kolab
• Zimbra
Many popular desktop applications also incorporate groupware concepts (e.g., the synchronisation of calendars in PIM applications). In the context of groupware, document and content management systems are also worth mentioning.
The beginnings of this concept reach back to the 1980s, when a number of vendors began developing systems to manage paper-based documents. Later, systems were developed to manage electronic documents, i.e., all those documents, or files, created on computers and often stored on local user file systems. An example of such a system is Statistica, which supports the management of electronic documents.
A content management system (CMS) is an Internet application, or a set of them, that eases the creation, updating, management and development of WWW services by non-technical personnel. CMSs are mainly used for controlling, storing, versioning and publishing documentation such as news articles, marketing brochures, etc. Content managed by a CMS may include audio files, video files, images, electronic documents and Web content. CMSs are suitable for a variety of web site models, such as news publications, customer support interfaces, Web portals, communities, project management sites, intranets and extranets.
The key goal of a CMS is to increase the integration and automation of the processes that support efficient and effective delivery of Web-based resources, helping institutions maintain their Web sites. A typical CMS supports the following processes:
• Authoring – the process by which many users can create Web content within a managed and authorised environment.
• Workflow – the management of the steps content passes through between authoring and publishing.
• Storage – the placement of authored content into a repository. This can also include versioning of the content to prevent access conflicts between multiple authors.
A CMS manages the path from authoring through to publishing using a workflow scheme and by providing a system for content storage and integration. Examples of CMS implementations are:
• Alfresco – an open-source alternative for enterprise content management, providing document management, collaboration and records management.
• Plone
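The storage and versioning idea described above can be sketched as a tiny versioned repository in Python. The class and method names are our own illustrative assumptions, not the API of Alfresco, Plone or any other CMS: every store appends a new version, so concurrent authors never silently overwrite each other's work.

```python
class ContentRepository:
    """Toy content store that keeps every version of each item."""

    def __init__(self):
        self._versions = {}   # path -> list of content versions

    def store(self, path, content):
        """Append a new version and return its 1-based version number."""
        history = self._versions.setdefault(path, [])
        history.append(content)
        return len(history)

    def fetch(self, path, version=None):
        """Return the latest version, or a specific older one."""
        history = self._versions[path]
        return history[-1] if version is None else history[version - 1]

repo = ContentRepository()
repo.store("news/article.html", "<p>Draft</p>")
v = repo.store("news/article.html", "<p>Published</p>")
print(v)                                     # 2
print(repo.fetch("news/article.html"))       # <p>Published</p>
print(repo.fetch("news/article.html", 1))    # <p>Draft</p>
```

A real CMS adds authorisation and a workflow state (draft, reviewed, published) on top of such a store, but the versioned repository is the core that makes the authoring–workflow–storage pipeline safe for multiple authors.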
3.6. Summary:
In this section, knowledge management systems and applications have been described and analysed. They serve different purposes and help to achieve different goals:
• cooperation of many individuals working on the same data (e.g., collaborative software)
The common element among all these systems is their attempt to address the problem of dealing with huge amounts of knowledge and information.
6. Conclusions:
In this report, we introduced, discussed and analysed foundational concepts of knowledge management and the Web 2.0. We then provided a definition of knowledge and argued that this clarification is essential for understanding the topic of knowledge management. Since traditional knowledge management has a long history in both science and practice, different models have emerged, emphasising different aspects of knowledge management. However, the models agree on four common knowledge management processes: knowledge creation, knowledge storage and retrieval, knowledge transfer and knowledge application. In contrast to knowledge management, the concept of Web 2.0 is very young, and it has gained much attention in recent years. We introduced the concept of Web 2.0 in Section 2.2 and described its essential aspects, such as user-contributed content, collaborative annotation, sharing, openness and mashups. In particular, the openness and active involvement of users – allowing them to be not only consumers but also producers of content – brings a new and very interesting quality to Internet-based applications for knowledge management.
Although the concept of Web 2.0 was not initially motivated by traditional knowledge management, many of today's Web 2.0 applications and platforms support their users in providing and sharing knowledge via the system and with other users on the Internet. Thus, Web 2.0 applications generally provide support for knowledge management on the Web. Consequently, we analysed Web 2.0 applications in detail from a knowledge management perspective. Based on this analysis, we identified six processes underlying Web 2.0 applications: knowledge syndication, collaborative knowledge creation, collaborative knowledge exchange, knowledge and meta-knowledge sharing, social networking and knowledge orchestration. These six Web 2.0 processes correlate with the four core processes of traditional knowledge management. The matrix depicted in Table 1 shows this correlation: the x-axis shows the four traditional knowledge management (KM) processes, whereas the y-axis distinguishes the six Web 2.0 processes.
Given this matrix, we can see that the majority of Web 2.0 support for traditional knowledge management lies in knowledge transfer; here we find all of the introduced Web 2.0 applications. The processes of knowledge creation and knowledge storage and retrieval are supported by fewer Web 2.0 methods; both can be facilitated by wikis and social networking applications. Knowledge creation is additionally supported by knowledge orchestration, while knowledge storage and retrieval can be improved by knowledge and meta-knowledge sharing.