
Network of Excellence - Contract no.: IST-508 011 (www.interop-noe.org)

Deliverable D9.1

State-of-the-art for Interoperability architecture approaches


Model driven and dynamic, federated enterprise interoperability architectures and interoperability for non-functional aspects

Classification: Public
Project Responsible: SINTEF
Authors: Arne-Jørgen Berre, Axel Hahn, David Akehurst, Jean Bézivin, Aphrodite Tsalgatidou, François Vermaut, Lea Kutvonen, Peter F. Linington
Contributors: See list below
Task: 9.1
Status: Version 1.0
Date: November 19, 2004

History and editors of the various parts (for contributors see below)

Version | Description | Editor | From-to | Comments
0.1-0.2 | Initial Draft part I: Interoperability Architectures | Axel Hahn | 1/7-1/9-2004 | Incremental versions
0.1-0.3 | Initial Draft part II: Model Driven Development | David Akehurst | 1/6-17/11-2004 | Incremental versions
0.1-0.5 | Initial Draft part III: Service Oriented Computing | Aphrodite Tsalgatidou | 1/6-12/11-2004 | Incremental versions
0.1-0.2 | Initial Draft part IV: Component- and Message-Oriented Computing | A.J. Berre | 1/6-15/11-2004 | Incremental versions
0.1-0.3 | Initial Draft part V: Agent-Oriented Computing | François Vermaut | 1/6-17/11-2004 | Incremental versions
0.1-0.4 | Initial Draft part VI: Business Process Management and Workflows | Lea Kutvonen | 1/6-15/11-2004 | Incremental versions
0.1-0.4 | Initial Draft part VII: Non-Functional Aspects | Peter F. Linington | 1/6-17/11-2004 | Incremental versions
0.1-1.0 | Integrated D9.1 report | Arne J. Berre | 1/9-19/11-2004 | Incremental versions into version 1.0

CONTRIBUTORS to Part I: Interoperability Architectures

PARTNER NAME | CONTRIBUTORS
University of Oldenburg | Axel Hahn (editor)
University of Nantes | Jean Bézivin
SINTEF | Arne J. Berre, Brian Elvesæter

CONTRIBUTORS to Part II: Model Driven Development

PARTNER NAME | CONTRIBUTORS
University of Kent | D.H. Akehurst (editor)
University of Nantes | Jean Bézivin (co-editor)
SINTEF | Arne J. Berre
Telematica Instituut | Marc Lankhorst
TU-Berlin | Torben Weis, Andreas Ulbrich
HS | Benkt Wangler

CONTRIBUTORS to Part III: Service Oriented Computing - Web services, P2P and Grid

PARTNER NAME | CONTRIBUTORS
NKUA | Aphrodite Tsalgatidou (editor), E. Koutrouli, G. Athanasopoulos, V. Floros, A. Sotiropoulou, D. Theotokis
LORIA | O. Perrin, S. Bhiri, F. Charoy
SINTEF | A.J. Berre, T. Neple, B. Elvesæter, R. Grønmo
UNINOVA | J. Sarraipa, H. Vieira
Univ. Oldenburg | A. Hahn
NKUA | A. Tsalgatidou, E. Koutrouli, G. Athanasopoulos, V. Floros, A. Sotiropoulou, D. Theotokis
Singular | E. Vlahopoulou, S. Pantelopoulos
Telematica Instituut | M. Steen, H. ter Doest
IDI-NTNU | Y. Lin, S. Hakkarainen, D. Strasunskas, J. Sampson
KTH | M. Henkel
HS | E. Söderström, B. Wangler

CONTRIBUTORS to Part IV: Component- and Message-Oriented Computing

PARTNER NAME | CONTRIBUTORS
SINTEF | A.J. Berre (editor)
Univ. Oldenburg | A. Hahn
UNIGE | Michel Pawlak
AIDIMA | Miguel Angel Abian

CONTRIBUTORS to Part V: Agent-Oriented Computing

PARTNER NAME | CONTRIBUTORS
University of Namur | François Vermaut (editor)
Computas | Sobah Abbas Petersen
UoTilburg | Hans Weigand
NTNU | Jennifer Sampson
UNIGE | Michel Pawlak

CONTRIBUTORS to Part VI: Business Process Management and Workflows

PARTNER NAME | CONTRIBUTORS
University of Helsinki | Lea Kutvonen (editor)
Telematica Instituut | Henk Jonkers, Maria-Eugenia Iacob, Marc Lankhorst
Loria-Eccoo | Olivier Perrin
Tudor | Laurent Gautheron
Computas | Helge Solheim
University Klagenfurt | Johann Eder, Marek Lehmann, Horst Pichler
KTH | Jaana Wayrynen
Högskolan Skövde | Per Backlund
NKUA | Christina Tsagkani
TU Berlin | Kurt Geihs, Michael Jaeger
BOC | Harald Kühn

CONTRIBUTORS to Part VII: Non-Functional Aspects

PARTNER NAME | CONTRIBUTORS
University of Kent | Peter F. Linington (editor), David Akehurst
University of Geneva | Jean-Henry Morin
SINTEF | Jan Øyvind Aagedal
Telematica Instituut | Maarten W.A. Steen, Henk Jonkers, Maria Eugenia Iacob
HS - Skövde | Per Backlund, Benkt Wangler
KTH Stockholm | Gustav Boström
University of Helsinki | Lea Kutvonen

Table of contents

Deliverable D9.1
State-of-the-art for Interoperability architecture approaches
History and editors of the various parts (for contributors see below)
Executive Summary
I Interoperability Framework
  I.1 Introduction
    I.1.1 Scope of this document
    I.1.2 Structure
  I.2 Interoperability Framework Principles
    I.2.1 Overview
    I.2.2 Enterprise View
    I.2.3 Architecture & Platform View
    I.2.4 Evolution from the IDEAS Interoperability Architecture
  I.3 Background
  I.4 Requirements on a Framework Architecture
  I.5 Structure of the Architecture Framework
    I.5.1 Reference Model for Conceptual Integration
    I.5.2 Reference Model for Technical Integration
    I.5.3 Methodology for Applicative Integration
  I.6 Framework Architecture
    I.6.1 Relationships between the INTEROP Work Packages
    I.6.2 WP9 Structure
II Model Driven Development
  II.1 Introduction
  II.2 MDD Standards
    II.2.1 OMG's Model Driven Architecture (MDA)
    II.2.2 Microsoft's Domain Specific Modelling (DSM)
    II.2.3 Model Integrated Computing (MIC)
  II.3 Principles of Modelling
    II.3.1 Modelling Frameworks, Enterprise, and Architecture
  II.4 Classification of Models
  II.5 Operations on Models
    II.5.1 Model Transformation (MT)
    II.5.2 Model Transformation Languages
    II.5.3 QVT and Model Transformation Tools
    II.5.4 The Eclipse Modeling Framework (EMF)
    II.5.5 UML Model Transformation (UMT)
    II.5.6 ATL (ATLAS Transformation Language)
    II.5.7 ArcStyler
    II.5.8 Rhapsody
    II.5.9 OptimalJ
  II.6 Research Issues
    II.6.1 Applying the unification principle
    II.6.2 Some open research issues in model engineering
    II.6.3 Applications, consequences and perspectives
III Service Oriented Computing
  III.1 Introduction
    III.1.1 What is Service Oriented Architecture (SOA)
    III.1.2 Web Services
    III.1.3 P2P Services
    III.1.4 Grid Services
    III.1.5 Relation between P2P, Grid and Web Services
  III.2 Applications / Case Studies, Example Scenarios Addressed by SOA
    III.2.1 Web Services example applications
    III.2.2 P2P example applications
    III.2.3 Grid example applications
  III.3 Technical aspects of eServices
    III.3.1 Web Services
    III.3.2 P2P Services
    III.3.3 Grid Services
  III.4 State of the Art in Research projects
  III.5 Tools/Platforms for service development
    III.5.1 Tools and Platforms for Web Services Development
    III.5.2 Tools and Platforms for P2P Services Development
    III.5.3 Tools and Platforms for Grid Services Development
  III.6 Interoperability Issues and Challenges
    III.6.1 Web Services Interoperability
    III.6.2 Interoperability in P2P Systems
    III.6.3 Interoperability in Grid Systems
    III.6.4 Convergence of Web Services with P2P
    III.6.5 Towards a Synergy between P2P and Grids
    III.6.6 Integration of Web Services, P2P and Grid Computing
  III.7 Conclusions
IV Component and Message Oriented Computing
  IV.1 Introduction
  IV.2 What is COC - Component Oriented Computing
    IV.2.1 The OMG solution for interoperability and for components: CORBA
    IV.2.2 Microsoft DNA / COM+ platform
    IV.2.3 J2EE Framework
    IV.2.4 .Net Framework
  IV.3 What is MOC - Message Oriented Computing
    IV.3.1 Microsoft BizTalk Framework
    IV.3.2 ebXML Technical Architecture - BCM and BCF
  IV.4 Interoperability of Component-oriented and Message-oriented systems
    IV.4.1 Communication Model/Services - Description / Publication
  IV.5 Methodologies for Components
    IV.5.1 Catalysis
    IV.5.2 CADA
    IV.5.3 Select Perspective for CBSE
    IV.5.4 Rational Unified Process
  IV.6 State of the Art - Research projects CBSE
    COMPETE
    COSMOS
  IV.7 Conclusions
V Agent-Oriented Computing
  V.1 Introduction
    V.1.1 Historical context
    V.1.2 What is an agent?
    V.1.3 Types of agents
    V.1.4 Agents vs Objects
    V.1.5 What is a multi-agent system?
  V.2 Agent Design - Considering agents as a new design metaphor
    V.2.1 Agent-oriented design languages and methodologies
    V.2.2 Mobile Agents
    V.2.3 Agent Middlewares
  V.3 Multi Agent Systems
    V.3.1 Agent Societies
    V.3.2 Coordination in MAS
    V.3.3 Negotiation
    V.3.4 Communication
  V.4 Multi-agent Architectures
    V.4.1 Market Architectures
    V.4.2 Broker Architectures
    V.4.3 Information Agent Architectures
  V.5 Agent theories - Semantics of agent systems
  V.6 The role of ontologies in agent-based systems
VI Management of business processes and workflows
  VI.1 Introduction
    VI.1.1 BPMS Paradigm
    VI.1.2 Purpose of Enterprise interoperability architectures
    VI.1.3 Interoperability architecture styles
    VI.1.4 Phases of Business Process management
    VI.1.5 Interoperability issues
    VI.1.6 Part structure
  VI.2 Relevant scenarios
    VI.2.1 Health-care processes
    VI.2.2 Insurance Partner Platform
  VI.3 Business Process Modelling Methodologies, Tools, and Languages
    VI.3.1 Methodologies and resulting application architectures
    VI.3.2 Process models, languages, and representations
    VI.3.3 Verification and validation of business process models
  VI.4 Open Source Workflow Management Systems
  VI.5 Workflow-based Business Monitoring
    VI.5.1 Key Performance Indicators
    VI.5.2 Workflow Technology for Measuring KPIs
    VI.5.3 Business Monitoring Framework
    VI.5.4 Animation
  VI.6 Research, technologies and markets, standards
    VI.6.1 Projects
    VI.6.2 Technologies and market
    VI.6.3 Standards
  VI.7 Issues, gaps, priorities, conclusions
VII Enterprise Interoperability for non-functional aspects
  VII.1 Introduction to Non-Functional Aspects
    VII.1.1 What are Non-Functional Aspects?
    VII.1.2 Modelling Non-Functional Aspects
    VII.1.3 Work on Interoperability applied to NFA
    VII.1.4 Aspect-oriented Software Development for dealing with NFA
  VII.2 Technical Review of Specific Aspects
    VII.2.1 Quality of Service
    VII.2.2 Security
    VII.2.3 Trust
    VII.2.4 Enterprise Digital Rights and Policy Management
    VII.2.5 Performance
    VII.2.6 Reliability and Availability
    VII.2.7 Business Value
  VII.3 Current state of activities
    VII.3.1 Projects
    VII.3.2 Standards
  VII.4 Issues
    VII.4.1 Gap analysis
    VII.4.2 Priorities and Conclusions
VIII Resources
  VIII.1 Conferences, workshops and events
    Conferences/Workshops - MDD and CBSE
    VIII.1.1 Conferences - SOA
    VIII.1.2 Conferences, Workshops and Journals - Agents
    VIII.1.3 Courses - SOA
    VIII.1.4 Related Events - SOA
  VIII.2 Journals, Books, Reports, Links
    Journals - MDD and Components
    VIII.2.1 Journals - SOA
    VIII.2.2 Books - SOA
    VIII.2.3 State-of-the-art reports - Agents
    VIII.2.4 Useful links - Agents
  VIII.3 Interoperability Research Challenges
    VIII.3.1 Challenges for standardization
  VIII.4 Standardization organisations and activities
    VIII.4.1 OMG - Object Management Group
    VIII.4.2 W3C - The World Wide Web Consortium
    VIII.4.3 Global Grid Forum (GGF)
    VIII.4.4 Peer-to-Peer Working Group (P2Pwg)
    VIII.4.5 OASIS
    VIII.4.6 ebXML
    VIII.4.7 UN/CEFACT
    VIII.4.8 BPMI
    VIII.4.9 RosettaNet
IX Bibliography - References
  IX.1 Bibliography - Interoperability Framework
  IX.2 Bibliography - Model Driven Development
  IX.3 Bibliography - Service Oriented Computing
  IX.4 Bibliography - Component- and Message-oriented Computing
  IX.5 Bibliography - Agent-oriented Computing
  IX.6 Bibliography - Business Process Management and workflow
  IX.7 Bibliography - Non functional aspects

List of figures

Figure 1: Compatibility levels (adapted from IEC TC65/290/DC)
Figure 2: ICT view on the IDEAS Reference Model
Figure 3: Conceptual, applicative and technical view of an enterprise
Figure 4: Reference model for conceptual integration
Figure 5: Integration dimensions with respect to system aspects
Figure 6: Reference model for technical integration
Figure 7: 4-tier reference architecture for software systems
Figure 8: Framework Architecture
Figure 9: Framework Architecture areas addressed by the different INTEROP WPs
Figure 10: Areas addressed by the WP9 SOA report
Figure 11: MultiGraph Architecture
Figure 12: Example of a relational DBMS metamodel
Figure 13: Relations between static and dynamic models and systems
Figure 14: Transformation with specifications
Figure 15: The Sowa meaning triangle
Figure 16: Service model
Figure 17: Extended Service Oriented Architecture
Figure 18: Overview of the Integration Hub Architecture
Figure 19: Overview of the solution
Figure 20: Grid Services approach with factories
Figure 21: GRID demonstrator
Figure 22: Grid Specification
Figure 23: Overview of the Web services stack
Figure 24: Semantic languages stack
Figure 25: Graphical and XML-based representation of an RDF statement
Figure 26: OWL-S model of service
Figure 27: Web Services Architecture Stack
Figure 28: Interoperability support in P2P systems
Figure 29: The building blocks of the converged middleware
Figure 30: IDC Functional Stack
Figure 31: Some areas of divergence
Figure 32: Foundation for a truly interoperable distributed computing
Figure 33: The SODIUM platform components and their interaction
Figure 34: General structure of a CORBA component
Figure 35: An example of an IDL interface
Figure 36: Common Object Request Broker Architecture
Figure 37: The Windows DNA object model
Figure 38: The J2EE Framework
Figure 39: BizTalk Layer Architecture
Figure 40: High-level overview of ebXML interaction between two companies
Figure 41: ebXML Business Interactions and use of Repository
Figure 42: Agent UML Protocol diagram
Figure 43: The Business Process Management Systems Paradigm
Figure 44: Levels of cooperation between enterprises
Figure 45: The Integrated, Unified and Federated approach
Figure 46: Business Model - Insurance Partner Platform
Figure 47: General Software Architecture of Insurance Partner Platform
Figure 48: Business Graph, Workflow Graph, and Execution Graph
Figure 49: Notation for Methodologies
Figure 50: Top-Down Methodology
Figure 51: Bottom-Up Methodology
Figure 52: Prototyping Methodology
Figure 53: Typical Workflow Application Architectures
Figure 54: Embedded vs. Autonomous Workflow Management Systems
Figure 55: Process definition exchange
Figure 56: Process meta model
Figure 57: Package model
Figure 58: Hierarchy of PIF core components
Figure 59: IDEF0 representation
Figure 60: Example model in BPMN
Figure 61: Example of a business process model in Testbed
Figure 62: The ARIS house
Figure 63: Events, functions and control flows in ARIS
Figure 64: MEMO lifecycle model
Figure 65: ArchiMate framework
Figure 66: Example of a business layer model
Figure 67: Symbols used in YAWL (Source: [van der Aalst et al., 2004])
Figure 68: YAWL architecture (Source: [van der Aalst et al., 2004])
Figure 69: Business Monitoring Objects
Figure 70: Metamodel of Business Monitoring Framework
Figure 71: Levels of Business Monitoring Framework
Figure 72: Enterprise business process integration architecture [Raut and Basavaraja, 2003]
Figure 73: Reference model for workflow management [84]
Figure 74: ebXML Architecture
Figure 75: Classification of workflow management standards [Hollingsworth, 2004]
Figure 76: The Common Criteria Conceptual Model for Security
Figure 77: Different views of performance
Figure 78: An e3-value model
Figure 79: Part of the ArchiMate metamodel
Figure 80: A layered performance model in ArchiMate
Figure 81: Initial Web alongside the Web of tomorrow [http://www.w3.org/Consortium/]

Executive Summary
This document presents the state of the art for interoperability architecture approaches, with a focus on model-driven and dynamic, federated enterprise interoperability architectures and interoperability for non-functional aspects, as in the title of INTEROP WP9/JR4. The aim is to provide a foundation for further analysis and work in the context of defining solution approaches and research issues related to the roadmap for interoperability in the area of Architecture & Platforms.

This document can be viewed as an evolution and extension of results from the IDEAS Roadmap project, in particular the deliverable D1.1 Part C, Architectures & Platforms State of the Art (June 2003), available at http://www.ideas-roadmap.net.

The current document evolves the IDEAS interoperability framework, as described in particular in Part I, Interoperability Architectures. The overall framework shows the relationships of the domains enterprise modelling, ontologies and Architecture & Platforms in the context of interoperability, as well as the relationships between the areas that are further described in the parts of this document. These areas are presented with a focus on aspects relevant to interoperability.

Part II focuses on Model Driven Development, as a bridge to the areas of enterprise modelling and ontologies, but also as a foundation for explicit system models and for model-driven architectures as an approach to achieving interoperability. Part III extends the area of Web services to service-oriented computing in general, and also includes and compares the current evolution in P2P and Grid technologies. Part IV covers the established area of component-oriented and message-oriented computing, as an implementation foundation for areas such as service-oriented computing. Part V details the area of agent-oriented computing, including multi-agent systems and principles for agent design.
Part VI focuses on the area of business process management and workflow from a system perspective, but also relates this to various business process modelling languages. Part VII addresses non-functional aspects of systems with respect to interoperability, and covers areas such as security, trust, quality of service, performance, reliability and DRM. Part VIII contains pointers to resources such as conferences, journals and useful links for Parts II to VII. Part IX contains the bibliographies for the areas covered in Parts II to VII.

Each of Parts I to VII has been the responsibility of a different editor, with material and comments provided by different partners, as listed in the tables at the front of this document. The various parts were developed as separate documents and then integrated into the structure of this document.

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1


Interoperability Framework

I.1 Introduction

There exist various definitions of interoperability. According to the Oxford Dictionary, interoperable means "able to operate in conjunction". The word interoperate also implies that one system performs an operation on behalf of another system. From a software engineering point of view, interoperability means that two co-operating software systems can easily work together without a particular interfacing effort. It also means establishing communication and sharing information and services between software applications regardless of hardware platform(s). Interoperability is considered achieved if the interaction can take place at least at the three levels of data, application and business process, with the semantics defined in a business context. IEC TC65/290/DC defines interoperability as the ability to integrate data, functionality and processes with respect to their semantics.
                                            Compatibility level
System feature               Incompatible  Coexistent  Inter-       Inter-    Inter-    Inter-
                                                       connectable  workable  operable  changeable
Application part:
  Dynamic Behaviour               -            -            -           -         -         x
  Application Functionality       -            -            -           -         x         x
  Parameter Semantics             -            -            -           -         x         x
Communication part:
  Data Types                      -            -            -           x         x         x
  Data Access                     -            -            x           x         x         x
  Communication Interface         -            -            x           x         x         x
  Communication Protocol          -            -            x           x         x         x
Figure 1: Compatibility levels (adapted from IEC TC65/290/DC)

Figure 1 illustrates the different depths of integration from the software engineering perspective. The standard addresses communication and application issues and their fulfilment by different integration types. Interoperability is distinguished from merely interconnected and interworkable systems by supporting the integration of functionality and by addressing parameter semantics.

I.1.1 Scope of this document

This document illuminates interoperability issues from the system architecture and platform point of view. Of course there are strong relationships to the state-of-the-art documents of the other INTEROP work packages, and overlaps are unavoidable (where not addressed by direct references) to ensure as complete as possible an analysis of the most important interoperability issues in the architecture and platform domain. This report mainly addresses the architecture aspects from the software engineering perspective. It analyses the most recent software architectural concepts, identifies the interoperability challenges, and examines how the different concepts address them. The document also broadens its view to process management and non-functional aspects (NFA), because interoperability is influenced by these aspects as well.
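The cumulative reading of the compatibility levels in Figure 1 can be sketched in code. The following Python fragment is an illustration only (not part of the standard); the feature names paraphrase the rows of the figure, and each level is assumed to presume all features of the weaker levels:

```python
# Illustrative sketch: classify the compatibility level of a system pair
# from the interface features they share, following the cumulative
# feature matrix of IEC TC65/290/DC (Figure 1). Feature names are
# invented paraphrases of the figure's rows.

# Features added at each level, from weakest to strongest; each level
# presumes all features of the levels below it.
LEVELS = [
    ("interconnectable", {"communication_protocol", "communication_interface", "data_access"}),
    ("interworkable",    {"data_types"}),
    ("interoperable",    {"parameter_semantics", "application_functionality"}),
    ("interchangeable",  {"dynamic_behaviour"}),
]

def compatibility_level(shared_features):
    """Return the highest level whose cumulative feature set is covered."""
    level = "coexistent"   # systems that share nothing beyond coexistence
    required = set()
    for name, features in LEVELS:
        required |= features
        if not required <= set(shared_features):
            break
        level = name
    return level

print(compatibility_level({"communication_protocol", "communication_interface",
                           "data_access", "data_types"}))   # -> interworkable
```

A pair of systems is thus classified by the strongest column of the matrix whose cumulative feature set they jointly satisfy.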


I.1.2 Structure

Including this introduction, this document is structured into several main parts. These parts address modelling and architecture (Parts II and III) and contextual aspects such as business processes and NFA.

Part I: Interoperability Framework. This part introduces the overall document and describes a reference framework. The reference framework covers a simplified architecture model and its relationship to system and enterprise models, integrated in the system and enterprise context.

Part II: MDD and Model Management. Modelling is the most important tool for software engineering and system design, and for Model Driven Development modelling is of course essential. Therefore modelling and model management are a main focus for interoperability.

Part III: Service Oriented Computing (incl. SOA/P2P/Grid)

Part IV: Component and Message Oriented Computing

Part V: Agent Oriented Computing

Part VI: Management of business processes and workflows. Describes the technologies for business process and workflow management regarding both technical issues (communication protocols etc.) and enterprise issues (value-adding business processes).

Part VII: Non-functional Aspects. This part describes the non-functional requirements on software systems and how they affect interoperability, and vice versa.

Part VIII: Resources

Part IX: Bibliography

Each of these parts follows a generic structure as applicable, introducing the main concepts, addressing the interoperability aspects and referencing relevant projects, activities and standards. Part IX contains the references for each part individually.

I.2 Interoperability Framework Principles

For the analysis and description of the interoperability aspects for architectures and platforms, an interoperability framework is used. The design of the framework builds on the results of the IDEAS roadmapping project.
The reference model of IDEAS is depicted in the next subsection, and the transition to the INTEROP WP9 model is motivated.

I.2.1 Overview

Why is interoperability such a big issue in software architectures and frameworks? From a system-oriented perspective, a software system is a conglomerate of subsystems communicating internally and with its environment via well-defined interfaces. Additionally, software systems are designed to provide specific business functionality. The interfaces, in terms of functionality, protocols and signatures, are described by models, languages and additional text documents. The challenge of interoperability is induced by the mismatch of these interfaces in multiple senses. A computer can match these interfaces and ensure consistency formally, but fails when it comes to mapping the interface semantics. Addressing the semantics of system specifications in the context of their business application is the challenge of interoperability. To structure the interoperability challenge and to introduce the terminology used in this document, this report uses the IDEAS Interoperability Framework as background.
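The point that formally matching interfaces can still disagree semantically can be made concrete with a toy example (all names are invented for illustration): two services expose an identical `get_price(item)` signature, yet one quotes euros and the other dollars. A syntactic check passes; only an explicit semantic mapping makes them interoperable:

```python
# Toy illustration (invented names): two services with identical
# signatures that a type checker would accept as compatible, but whose
# semantics differ -- one quotes prices in EUR, the other in USD.

class EuroCatalogue:
    def get_price(self, item: str) -> float:   # price in EUR
        return {"bolt": 1.0}[item]

class DollarCatalogue:
    def get_price(self, item: str) -> float:   # price in USD
        return {"bolt": 1.2}[item]

class DollarToEuroAdapter:
    """Semantic adapter: makes DollarCatalogue usable where EUR is assumed."""
    def __init__(self, wrapped, usd_per_eur: float):
        self.wrapped = wrapped
        self.usd_per_eur = usd_per_eur

    def get_price(self, item: str) -> float:   # price in EUR
        return self.wrapped.get_price(item) / self.usd_per_eur

adapted = DollarToEuroAdapter(DollarCatalogue(), usd_per_eur=1.2)
print(round(adapted.get_price("bolt"), 2))   # -> 1.0, now comparable to EuroCatalogue
```

The adapter encodes knowledge (the currency of each interface) that is nowhere visible in the signatures themselves; capturing such knowledge explicitly is exactly what the semantic level of the framework is about.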


Background in the IDEAS Interoperability Framework

The starting point for the INTEROP Interoperability Framework was developed in the previous IDEAS project, and is presented as a matrix of interoperability elements related to enterprise modelling (business models and knowledge models) and architectures, associated with ontologies/semantics and non-functional quality aspects, as seen in Table 1.
Framework 1st level / 2nd level, with the associated ontology and quality attribute columns:

ENTERPRISE MODEL (ontology: semantics; quality attributes: security, scalability, evolution)
  - Business: Decisional Model, Business Model, Business Processes
  - Knowledge: Organisation Roles, Skills Competencies, Knowledge Assets

ARCHITECTURE & PLATFORM (ontology: semantics; quality attributes: performance, availability, portability)
  - Application: Solution Management, Workplace Interaction, Application Logic, Process Logic
  - Data: Product Data, Process Data, Knowledge Data, Commerce Data
  - Communication
Table 1: IDEAS Interoperability Framework

IDEAS analysed interoperability aspects from an enterprise view (i.e. between two or more enterprises), an architecture & platform view (i.e. between two or more applications/systems), and an ontological view (i.e. the semantics in interoperability). The enterprise view is separated into business issues and knowledge issues.


I.2.2 Enterprise View

The aim of this view is to represent the way in which the different models adopted in the inter-operating enterprises could affect our reference framework. Interoperability aspects at the enterprise level are separated into business issues and knowledge issues.

I.2.2.1 Business Issues

Decisional Model: The decisional model of an enterprise defines how decisions are taken and the degree of responsibility of each operating unit, role and position.

Business Model: The business model is the description of the commercial relationship between a business enterprise and the products and/or services it provides in the market.

Business Process: Business processes are the set of activities that deliver value to customers. They represent the end-to-end flow of materials, information and business commitments which implement an enterprise business model.

I.2.2.2 Knowledge Issues

Organisation-People: Organisation-people can be characterised in the form of an organisational model, with respect to the interoperability business model: internal (inside the same organisation), chain (stable, precise hierarchy, suppliers, customers, extended enterprise), network (less stable, opportunity based, scarce reciprocal knowledge, smart organisations), constellation (unstable, individual companies, virtual organisations).

Skills-Competency: The skills-competency model defines the capability of an organisation and of its employees to perform a certain job under certain working conditions.

Knowledge Assets: Enterprise knowledge assets are the capital of an organisation formalised in terms of procedures, norms, rules, and references.

I.2.2.3 Semantic View

The aim of this view is not to allow interoperability of ontology languages or software environments, but to represent the way in which the different concepts and relations adopted in the inter-operating enterprises could affect our reference framework. The semantic level is orthogonal to the enterprise and architecture & platform views.
Business Ontology: refers to a shared understanding of concepts and inter-relationships concerned with the enterprise business issues (decisional model, business model, business process).

Knowledge Ontology: refers to a shared understanding of concepts and inter-relationships concerned with the enterprise knowledge issues (organisation, skills, knowledge assets).

Application Ontology: refers to a shared understanding of concepts and inter-relationships concerned with the application issues (functional and non-functional issues).

Data Ontology: refers to a shared understanding of concepts and inter-relationships concerned with the data issues (product, process, commercial, knowledge).
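In practice, a shared data ontology often takes the form of explicit alignments between local vocabularies. The following Python fragment is a deliberately minimal sketch of this idea (all field names and ontology concepts are invented): each enterprise maps its local terms onto shared concepts, and records are translated via the shared layer without either enterprise knowing the other's vocabulary:

```python
# Minimal sketch (invented vocabulary): aligning two enterprises' local
# data vocabularies through a shared reference ontology, so that records
# can be translated without the enterprises knowing each other's terms.

# Each enterprise maps its local field names onto shared ontology concepts.
A_TO_ONTOLOGY = {"cust_no": "Customer.id", "amt": "Order.totalPrice"}
B_TO_ONTOLOGY = {"clientId": "Customer.id", "orderValue": "Order.totalPrice"}

def translate(record: dict, source_map: dict, target_map: dict) -> dict:
    """Translate a record from one local vocabulary to another via the ontology."""
    onto_to_target = {concept: field for field, concept in target_map.items()}
    return {onto_to_target[source_map[field]]: value
            for field, value in record.items()
            if source_map.get(field) in onto_to_target}

print(translate({"cust_no": 42, "amt": 99.5}, A_TO_ONTOLOGY, B_TO_ONTOLOGY))
# -> {'clientId': 42, 'orderValue': 99.5}
```

The same pair of maps works in both directions, which is one practical argument for mapping to a shared ontology rather than maintaining pairwise translations.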


I.2.3 Architecture & Platform View

The aim of this view is to represent the way in which the different architectures and data adopted in the inter-operating enterprises could affect our reference framework. Interoperability aspects for the architecture & platform are separated into application, data, and communication issues.

I.2.3.1 Application Issues

Solution Management: describes the tools and procedures required to administer an enterprise system. This includes role and policy management, monitoring and simulation tools. System management requires compatible data formats and processes.

Workplace Interaction: refers to the interaction of the human user with the system, which can be described through input, output, and navigation. Input describes how the user enters data into the system. Output refers to how the user receives information from the system. Navigation describes how the user navigates between interaction elements. Interaction is also related to usability. Input and output can be through text, graphics, voice, etc.; navigation can use stylus, mouse, voice, keyboard, etc.

Application Logic: describes the computation that is carried out by an enterprise system to achieve a business result. The provision of application logic is one of the core goals of an enterprise system.

Process Logic: the order (i.e. step-by-step) in which application logic (or subsets thereof) is carried out.

I.2.3.2 Data Issues

This aspect describes which data is required and produced by an enterprise system during its lifecycle. Data is often kept persistent, which is also a core goal of an enterprise system. This includes repository services and content management. The data can be categorised as follows:

Product Data: the data associated with the design and production of a product.

Process Data: the data associated with a process, also called workflow-relevant data.
Knowledge Data: the data associated with the interpretation and relation of other data.

Commercial Data: the data associated with commercial actions, such as contractual data.

Other Data: any data that is not covered by the above data types.

I.2.3.3 Communication Issues

Communication, coordination and collaboration services refer to the capabilities of an enterprise system to communicate with another enterprise system. This is not limited to a particular layer in the ISO/OSI reference model; in terms of communication we can think about lower-level network protocols, but also about higher-level interaction protocols, such as EDI or ebXML. Note that the higher the protocol, the more likely there will be an overlap with process logic. It is important to point out that these aspects do not necessarily form a layer, but are considered as services that characterise an interoperability solution.


I.2.3.4 Quality Attributes

We distinguish the following quality attributes:

Security: the ability of a solution to protect enterprise resources and control the access to them. This includes authentication, authorisation, and data encryption.

Scalability: the ability of a solution to adjust to an increased number of business tasks.

Evolution: the ability of the system to react to changing requirements. For example, existing software often needs to be upgraded as a whole when new functionality is required; alternatively, only those components affected by the required changes could be exchanged. This requires a solid architecture of the system.

Performance: the ability of a solution to quickly execute a business task and to retrieve and return information in a timely fashion.

Availability: the ability of a solution to be accessible, e.g. 5x8 or 7x24.

Portability: the ability of a solution to be used on different hardware platforms, operating systems, and runtime environments with few changes to the solution.

I.2.4 Evolution from the IDEAS Interoperability Architecture

The presentation document of the IDEAS interoperability architectures (IDEAS D34+D35+D36), page 33, states the following: "We would like to note that the framework has undergone an evolution, and that the current version should be further improved to accommodate more views." In the context of INTEROP we have discussed the further evolution of this framework and the need to incorporate more views. We have found the need for the following:

- Relate the framework to technical architectures
- Support abstraction views
- Support composition views
- Support the time/evolution dimension
- Support the genericity view

Therefore we defined a specific view on the IDEAS Reference Model from the ICT and modelling perspective, which is depicted in Figure 2. From the ICT point of view, the knowledge and enterprise views are covered by the enterprise model. This model describes the operation of the enterprise and its support by ICT. The ICT view is split up into the system model and the architecture view. The system model specifies the system and is defined at design time; at runtime, the system model is used for maintenance and documentation. Ontologies are the main method to handle and express the semantics behind the enterprise and system models used. Based on this view we defined an interoperability framework architecture, further elaborated in chapter I.5. The reference model was adapted by:


- Combining knowledge and enterprise aspects
- Expressing semantics by ontologies
- Subdividing ICT into system model and architecture

This more detailed view on the ICT aspects leads to the transition of the reference framework to an ICT-oriented view of the reference model.
Figure 2: ICT view on the IDEAS Reference Model

I.3 Background

For complex systems, it is important to consider the integration of different autonomous parts, to support integration and interoperability between individually developed parts. Previous work on integrated environments, like the Toaster reference model from ECMA/NIST (European Computer Manufacturers Association / National Institute of Standards and Technology), provides a useful starting point for identifying logical integration aspects of a holistic architecture [EC93]. Even if the Toaster Model was conceived for distributed software systems, it can be generalised to enterprises, as discussed below. The Toaster Model, depicted in the figure below, separates integration into four different categories: data, control, process and presentation integration. The name of the reference model is based on the way the tools fit in between the process integration layer and the data integration layer, like slices of bread in a toaster. The four service areas are defined as follows:

Data Integration: the degree to which tools are able to share common data and information. For enterprises, sharing information is equally important as for software systems. Indeed, software systems may facilitate this by defining a common vocabulary to be used in information exchange and by providing the means for efficient information sharing.

Control Integration: the degree to which tools are able to interact directly with each other, by requesting and providing services. In enterprises, not only computerised services may be requested; on the contrary, any kind of element of an enterprise may request or provide services.

Process Integration: the degree to which the user's working process and use of tools can be guided by a model of the work process and the methodology to be followed, possibly in cooperation with other users. In an enterprise, this means that the actors may share a common process model.
Presentation Integration: the degree to which a user-interface program might provide access to the functionality needed by the user through a common look and feel. In enterprises, this refers to the image and profile of the enterprise, related to how each actor represents the enterprise.
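Control integration, in the ECMA/NIST sense, can be sketched with a small example (the broker API below is invented for illustration): tools interact by requesting and providing services through a shared mechanism rather than by exchanging files:

```python
# Illustrative sketch (invented API): control integration in the
# ECMA/NIST sense -- tools interact by requesting and providing
# services through a shared broker, rather than by sharing files.

class ServiceBroker:
    def __init__(self):
        self._services = {}

    def provide(self, name, handler):
        self._services[name] = handler          # a tool offers a service

    def request(self, name, *args):
        return self._services[name](*args)      # another tool invokes it

broker = ServiceBroker()
# A "compiler" tool offers a service; an "editor" tool later requests it.
broker.provide("compile", lambda source: f"object-code({source})")
print(broker.request("compile", "main.c"))   # -> object-code(main.c)
```

The degree of control integration is then a matter of how many of the tools' capabilities are exposed and consumed through such service requests, as opposed to being reachable only through their user interfaces.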



The ECMA model still represents the main interoperability aspects in terms of the system itself, but these aspects have to be put in the context of the application in the enterprise, which is not covered by ECMA.

The E-Commerce Integration Meta-Framework (ECIMF) defines a recommended interoperability methodology, together with the technical specification (described in the ECIMF-TS document) and the base tools needed to prepare specific comparisons of concrete frameworks. The results of following the ECIMF methodology should be clear implementation guidelines for system integrators and software vendors on how to ensure interoperability and semantic alignment between incompatible e-commerce systems [16]. The proposed ECIMF methodology for the analysis and modelling of the transformations between e-commerce frameworks follows a layered approach: in order to analyse the problem domain, one splits it into layers of abstraction, applying a top-down technique to classify the entities and their mutual relationships.

The X.900 series of standards defines the Reference Model of Open Distributed Processing (RM-ODP) [17-20]. RM-ODP is a standardised model for the design of object-based distributed systems. It has great flexibility, and it has the advantage that it allows systems to be defined in terms of reusable conceptual components, which in practice can become reusable physical components. Systems designed according to the reference model are distributed object systems that enable organisations to share data and processing services. RM-ODP defines five viewpoints to describe open distributed systems:

- Enterprise viewpoint, addressing purpose, scope and policies.
- Information viewpoint, addressing information content.
- Computational viewpoint, addressing functionality.
- Engineering viewpoint, addressing the infrastructure for distribution.
- Technology viewpoint, addressing choices of technology.

I.4 Requirements on a Framework Architecture

In the development of the framework architecture, the following requirements have been addressed:

- Business context: an ICT system is an application in its business context. The enterprise initiates the interoperability request by defining the system environment and integration tasks.
- Modelling: in a model-driven approach, models have to be addressed both from the enterprise perspective, to specify the structure and value chains of the enterprise, and from the software perspective, to specify the software-oriented aspects of the system.
- Design cycle: creation and analysis define the life cycle of the entities of the software system. While design-oriented issues have been in focus since the beginning of software engineering, nowadays re-engineering is addressed as well.
- Architectural pattern: the reference model should include a basic architectural pattern that can be applied to the most recent architectural concepts with respect to interoperability issues.
- Interoperability: the reference model has to integrate the mentioned requirements in an open model, in combination with semantic specifications, non-functional aspects and business process management.

By taking these requirements into account, the Work Package 9 framework architecture was designed.

I.5 Structure of the Architecture Framework

The architecture framework is structured according to the three areas of conceptual, technical and applicative integration defined in the ATHENA Interoperability Framework [8]. Based on these three areas we have developed a conceptual, a technical and an applicative view of an enterprise that are used to provide reference models for integration.

The conceptual view of an enterprise has been developed from an MDD point of view, focusing on the enterprise application software system. A computation independent model (CIM) corresponds


to the view defined by the context viewpoint. It captures the business context of the software system. A platform independent model (PIM) and a platform specific model (PSM) are both computationally dependent with respect to the software system, the difference being that the PIM is independent of an execution platform while the PSM is not. This introduces two new viewpoints that are orthogonal to the system and realisation viewpoints described above. Thus specification and implementation models can be regarded as either PIM or PSM, dependent on the target execution platform. The models at the various levels may be semantically annotated using ontologies to achieve mutual understanding on all levels. The use of ontologies will also help us in doing model transformations and mappings between models.

The technical view of an enterprise focuses on the deployment and execution of the software system and how it supports the businesses and users of the enterprise. The software system is coupled to a software bus that provides the necessary communication infrastructure required to deploy a distributed system. The architecture of the enterprise application software can be described according to a 4-tier reference architecture where each tier provides different software services required by the enterprise. Amongst other things, the software system needs to support the business transactions, business processes and business collaboration taking place within an enterprise. In addition it needs to support the users in performing their business tasks.

The applicative view of an enterprise corresponds to the enterprise and software models prescribed by enterprise and software methodologies. These models can be related in a holistic view, regardless of modelling language formalisms, by the use of meta-models. Our focus is on MDD methodology for software system development.
These methodologies prescribe a set of models that can either be linked to the three basic viewpoints defined, or be accommodated by introducing new viewpoints that correspond to the new models expressed. A model may be split into smaller submodels, of which some may be visual and others may be textual (e.g. specifications of constraints or program code).
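The CIM/PIM/PSM pipeline rests on model transformations. The following Python fragment is a deliberately simplified sketch (the metamodel, type map and target platform are invented for illustration) of a PIM-to-PSM transformation that maps platform-independent attribute types onto the types of a concrete persistence platform:

```python
# Deliberately simplified sketch (invented metamodel): a PIM-to-PSM
# model transformation in the MDD sense, mapping platform-independent
# attribute types onto the types of a concrete persistence platform.

# Transformation rule set: platform-independent type -> SQL column type.
PIM_TO_SQL = {"String": "VARCHAR(255)", "Integer": "INT", "Boolean": "BOOLEAN"}

def pim_class_to_psm_table(pim_class: dict) -> str:
    """Transform one PIM class description into a platform-specific DDL statement."""
    columns = ", ".join(f"{name} {PIM_TO_SQL[typ]}"
                        for name, typ in pim_class["attributes"].items())
    return f"CREATE TABLE {pim_class['name']} ({columns});"

pim = {"name": "Customer", "attributes": {"name": "String", "age": "Integer"}}
print(pim_class_to_psm_table(pim))
# -> CREATE TABLE Customer (name VARCHAR(255), age INT);
```

A real MDD tool chain would express such rules in a dedicated transformation language against explicit metamodels, but the principle is the same: the PIM carries no platform commitment, and the transformation injects it.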
Figure 3 illustrates these three different views of an enterprise.

Figure 3: Conceptual, applicative and technical views of an enterprise
Enterprise and software models can be built to understand and analyse the physical world of an enterprise. Software models describe how ICT systems are used to support the businesses of an enterprise. These software models can be classified as CIM, PIM or PSM models according to an MDD abstraction. The model world corresponds to the set of models prescribed by an MDD methodology. The architecture framework provides reference models for interoperability corresponding to the three defined views of an enterprise. The reference models will address ICT interoperability elements such as application, data, communication, ontology and quality aspects identified in the IDEAS framework.

I.5.1 Reference Model for Conceptual Integration

The conceptual integration focuses on concepts, meta-models, languages and model relations. It provides us with a foundation for systemising various aspects of ICT model interoperability.
[Figure 4: Reference model for conceptual integration. Two enterprise systems, A and B, each an MDD abstraction with CIM, PIM and PSM levels connected by model transformations (MT) and semantically annotated using ontologies, are related through a reference ontology, interoperability patterns and model interoperability (MI). The integration covers service, information, process and non-functional aspects, across both horizontal and vertical integration, with each system deployed on its own execution platform.]

Interoperability issues occur both within the company (vertical integration) and between companies (horizontal integration). Our primary focus will be on the horizontal and vertical integration issues that take place between enterprises (interactions and collaborations). The interoperability patterns applied between companies (inter-) can be recursively applied to solve interoperability issues between business units within a company (intra-). Model mappings can be defined using the meta-models and ontologies. The focus of the reference model will be on horizontal integration; vertical integration is addressed by MDD and ADM. Our emphasis is on model mapping, synthesis and development with respect to model integration. The use of a reference ontology for semantic annotation of models will help us achieve this integration. Furthermore, generic or domain-specific interoperability patterns can also be applied in this respect.
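As a simple illustration of how reference-ontology annotations can support model mapping, consider the following sketch. All model-element and ontology-term names here are hypothetical; a real setting would use a shared, formally defined reference ontology rather than plain strings.

```python
# Each enterprise annotates its own model elements with terms drawn from a
# shared reference ontology; a simple matcher then uses the annotations to
# propose element correspondences between the two models.
annotations_a = {"Order": "PurchaseOrder", "Bill": "Invoice"}        # Enterprise A
annotations_b = {"Bestellung": "PurchaseOrder", "Rechnung": "Invoice"}  # Enterprise B

def match_elements(ann_a, ann_b):
    """Propose correspondences between elements that share an ontology term."""
    by_term = {}
    for element, term in ann_b.items():
        by_term.setdefault(term, []).append(element)
    return {elem_a: by_term.get(term, []) for elem_a, term in ann_a.items()}

print(match_elements(annotations_a, annotations_b))
# {'Order': ['Bestellung'], 'Bill': ['Rechnung']}
```

The point of the sketch is that neither enterprise needs to know the other's naming conventions: the correspondence is established entirely through the shared ontology terms.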


Models are used to describe different concerns of a software system. We have identified four system areas or aspects where specific concerns can be addressed by conceptual integration:

1. Service aspects: Services define contracts that specify their usage and implementation.
2. Information aspects: Information is provided by services and used by processes.
3. Process aspects: Processes describe the sequencing of work.
4. Non-functional aspects: Extra-functional qualities that can be applied to services, information and/or processes.

These four aspects can be addressed in models at all three levels (CIM, PIM and PSM), and specific concerns regarding them can be made explicitly visible through models defined by viewpoints (corresponding to an architectural framework or MDD methodology). There may exist other aspects, but we feel that the four identified provide a good baseline for discussing conceptual integration. In the literature, different dimensions of system design are identified. These dimensions can be used to analyse software systems, help to structure the system modelling process and catalyse design decisions. Figure 5 graphically organises these integration dimensions around the four system aspects defined in the architecture framework.
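To make the four aspects concrete, the following sketch represents them as minimal, related model elements. The class and attribute names are invented for illustration and are not drawn from any standard metamodel.

```python
# Illustrative mini-metamodel for the four system aspects: a Service provides
# Information, a Process sequences Services, and NonFunctional qualities can
# be attached to a Service. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NonFunctional:            # non-functional aspects: extra-functional qualities
    qualities: Dict[str, str] = field(default_factory=dict)

@dataclass
class Information:              # information aspects: data provided by services
    name: str

@dataclass
class Service:                  # service aspects: a contract with provided information
    name: str
    provides: List[Information] = field(default_factory=list)
    nfa: NonFunctional = field(default_factory=NonFunctional)

@dataclass
class Process:                  # process aspects: the sequencing of work
    name: str
    steps: List[Service] = field(default_factory=list)

quote = Service("QuoteService", [Information("Quote")],
                NonFunctional({"availability": "99.9%"}))
ordering = Process("Ordering", steps=[quote])
```

A viewpoint, in this reading, is simply a projection over such a model that exposes one aspect (e.g. only the process steps) while hiding the others.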
[Figure 5: Integration dimensions with respect to system aspects. The dimensions system abstraction (CIM, PIM, PSM, code, with enterprise, business, system and realisation viewpoints), genericity (system/product vs. product-line, framework, pattern), viewpoints, composition (object, component, service, (virtual) enterprise), time (state evolution) and model abstraction (M0-M3) are organised around the service, information, process and non-functional aspects.]

Each of these dimensions may support interoperability achievements or could represent an interoperability challenge.

System abstraction: This dimension of system design reflects abstraction in terms of implementation independency and is addressed by MDD.

Genericity: An important design rationale, with impact on the adaptability and reusability of the system components.

Viewpoint: System models represent a complex and strongly interrelated network of model entities. To address different issues and to reduce complexity, different viewpoints on the model are used. These viewpoints may also be considered for interoperability.


Composition: Systems are iteratively composed in a hierarchy, from individual objects up to the system in its enterprise context. On each of these aggregation layers the entities have to be interoperable.

Time: The system itself is modified in status, configuration and design over time.

Model abstraction: Meta-models help to describe and analyse the models used.

These dimensions of system design are applicable to the main aspects of the system model: service, information, process and non-functional aspects.

I.5.2 Reference Model for Technical Integration

Technical integration focuses on the development and execution environment. It provides us with tools and solutions to develop and execute software models.
[Figure 6: Reference model for technical integration. Enterprise A and Enterprise B (physical world: business and users with business transactions, collaboration, processes and tasks, vertically integrated) are connected through their software systems: an intranet software bus on each side, linked by an Internet software bus offering infrastructure services, registry/repository services and management services (model, service, task execution and data management). Visual and textual models describe each system.]

We will use the software bus as an architectural pattern for handling technical integration of software systems. Figure 6 shows how a software bus comes into play when integrating two (or more) enterprises. The software bus will make use of infrastructure and registry/repository services. A software system can be structured according to a tiered architecture. We have identified four main tiers that should be seen as logical separations of a software system and not as a 4-layered architecture.

1. The user interface tier provides presentation and user dialog logic. Sometimes it is useful to make the separation between presentation and user dialog explicit, in particular to support reuse of user dialogs on multiple platforms with different graphical capabilities, e.g. Web, PDA and mobile phones.

2. The user service tier provides the user's model, which may include user session logic and user-side representations of processes and information. It is an abstraction for a set of business services, making the business service provision (and the communication mechanisms) transparent to the user interface tier.

3. The business service tier provides components that represent business functionality and pervasive functionality (vertical vs. horizontal services). This tier provides enterprise-level services, and is

responsible for protecting the integrity of enterprise resources at the business logic level. Components in this tier can be process-oriented, entity-oriented or workflow-oriented. For performance reasons, entity-oriented components are typically not exposed outside of this tier.

4. The resource services tier provides global persistence services, typically in the form of databases. Resource adapters (e.g. JDBC or ODBC drivers) provide access, search and update services to databases and their data, stored in a database management system (DBMS) such as Oracle or Sybase.

In addition to these four tiers we need a software communication bus so that services deployed at the various tiers can interoperate both within a tier and across tiers.
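The resource-adapter role that JDBC or ODBC drivers play in this tier can be sketched as follows. This is an illustrative stand-in, not an actual JDBC driver: Python's built-in sqlite3 module takes the place of the DBMS, and the class, table and method names are invented for the example.

```python
# Sketch of the resource services tier: a resource adapter wrapping a DBMS
# behind access (insert), search and update operations, analogous to the role
# of a JDBC/ODBC driver. sqlite3 stands in for the DBMS purely for illustration.
import sqlite3

class ResourceAdapter:
    def __init__(self, dsn=":memory:"):
        self.conn = sqlite3.connect(dsn)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, status TEXT)")

    def insert(self, status):
        cur = self.conn.execute("INSERT INTO orders (status) VALUES (?)", (status,))
        self.conn.commit()
        return cur.lastrowid

    def search(self, status):
        rows = self.conn.execute(
            "SELECT id FROM orders WHERE status = ?", (status,)).fetchall()
        return [r[0] for r in rows]

    def update(self, order_id, status):
        self.conn.execute(
            "UPDATE orders SET status = ? WHERE id = ?", (status, order_id))
        self.conn.commit()

adapter = ResourceAdapter()
oid = adapter.insert("open")
adapter.update(oid, "shipped")
print(adapter.search("shipped"))
```

Entity-oriented business components would call such an adapter rather than the DBMS directly, which is what keeps the resource tier replaceable.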
[Figure 7: 4-tier reference architecture for software systems. The user interface tier, user service tier, business service tier and resource service tier communicate over a software bus (middleware services); the user service domain and business service domain use local adapters (LA) and local storage (LS), while resource adapters (RA) connect to databases, with inter-service communication between the tiers.]

I.5.3 Methodology for Applicative Integration

Applicative integration focuses on methodologies, standards and domain models. It provides us with guidelines and patterns that can be used to solve real-world ICT interoperability issues. A methodology for applicative integration should be adapted to the business situation at hand. It should allow for both an MDD and an ADM approach, depending on whether new solutions are to be developed or existing solutions are to be integrated. In some circumstances there will be a need for both, e.g. developing new services using service composition to integrate existing services. The methodology focuses on the models to develop and how to develop them. The models in question will depend on technical issues, business domain issues etc. In addition the methodology should make use of available interoperability patterns.

I.6 Framework Architecture

The structure introduced in the last subchapter can be condensed into a single architecture which can be used to structure and localize the topics addressed in this SOA report. The framework architecture is depicted in Figure 8. The figure shows the architecture in the context of an individual enterprise.


[Figure 8: Framework Architecture. Enterprise A and Enterprise B each comprise an enterprise model, ontologies, and a system with its system model and an architecture structured into interaction/presentation, user services, shared business services and data/legacy services, together with business process management, non-functional aspects and interfaces. Design and reengineering relate models and systems, and interoperability between the two enterprises is established at the business and interface level.]

The framework architecture consists of the following elements:

Enterprise: The enterprise defines the context of the system to be integrated. The enterprise could also represent another business context (department, ). Interoperability is requested by initiating a business relationship to other enterprises. The enterprise concepts, interfaces etc. have to be aligned to support the business interactions.

Enterprise Model: The enterprise model comprehends all models from the business point of view:
- Business processes
- Resource models
- Organisation structure models
- Business models
- Knowledge- and skills-related models
The models include the specification of the relevant ICT support.

Ontologies: Represent the conceptualisation of the domain, exceeding the capabilities of the models, and provide semantic information on the models.

System: The system defines the context of the ICT considerations.

System Model: The system model covers the models used during the conception and design of the ICT systems.

Architecture: Following multi-tier and service/component-oriented architectures as well as a functional structure of the system, the main areas are:
- Interaction / Presentation
- User Services
- Business Services
- Data / Legacy Services
These four elements reflect the aspects of presentation, data and control integration, where control integration is subdivided into business and user services.


Non-Functional Aspects: The non-functional aspects are applicable to all of the above-mentioned areas. See Part VII for more information.

Business Process Management: Controls the application and user processes. For this, BP management can rely on the system and enterprise models. This reflects the process integration aspects of the ECMA model, also with respect to control integration.

I.6.1 Relationships between the INTEROP Work Packages

The framework architecture can be used to describe the relationship of the A&P work package to the different INTEROP NoE work packages. As sketched in Figure 9, the relationships can be addressed as follows:

WP5 CEMF: This work package addresses the enterprise models. It analyses interoperability issues related to the methods used to describe and specify how the enterprise is organised and how value is created.

WP6 Design Principles: The design principles are the methodological background for reaching interoperability and cover the architecture framework holistically.

WP7 Methods: Methods also have an impact on the system design. They support the transition from the enterprise models to the system models and to the concrete system.

WP8 Ontologies: Provides technologies to express, maintain and manage semantic information about, and in extension to, the models used.

WP9 Architectures & Platforms: This work package focuses on the relationship between system models and the system itself, and covers system design and reengineering with respect to business process management and non-functional aspects. In extension to this focus, the work package has to regard the link to enterprise models and semantic specifications (provided by ontologies).
Figure 9: Framework Architecture areas addressed by the different INTEROP WP

I.6.2 WP9 Structure

The framework architecture also helps to describe the relationships and the localisation of the different parts of this SOA report. The localisation is shown in Figure 10.



Figure 10: Areas addressed by the WP9 SOA report

Part I: Interoperability Architecture: The present part, describing the overall relationships between the areas of enterprise modelling, ontologies and architectures in the context of interoperability.

Part II: MDD and Model Management: Focuses on a model-based approach to system description, with the corresponding support for transformations and mappings between models, including a potential link to enterprise modelling and ontologies.

Parts III-V are software architecture related:

Part III: Service-Oriented Computing, including Web Services, P2P and Grid: Focuses on service-oriented architectures as a principle, with realisations in Web services, and inclusion of P2P and Grid technologies.

Part IV: Component-Oriented and Message-Oriented Computing: Covers the system architectures and the aligned models.

Part V: Agent-Oriented Computing: Describes agent technologies and models.

Part VI: Management of Business Processes and Workflows: Focuses in particular on business process management and workflow, based on mappings of business process models to architectural execution support and enactment.

Part VII: Non-Functional Aspects: These are immanent in the system itself but are also reflected in the models.


II Model Driven Development

II.1 Introduction

The object technology revolution has allowed the replacement of the more than twenty-year-old step-wise procedural refinement paradigm by the more fashionable object composition paradigm. Surprisingly, this evolution seems itself today to be triggering another, even more radical change towards model-based technology. As a concrete trace of this, the Object Management Group (OMG) is rapidly moving from its previous Object Management Architecture vision (OMA) to the newer Model-Driven Architecture (MDA) [OMG MDA], and even Microsoft is investing in its own variant of modelling around the notion of Domain Specific Modelling [Cook jan04]. MDA is the OMG's instantiation of an approach to software development coming to be known as Model Driven Engineering (MDE) or Model Driven Development (MDD). MDD focuses on models as the primary artefacts in the development process, with transformations as the primary operation on models, used to map information from one model to another. There is presently an important paradigm shift in the field of software engineering that may have important consequences on the way information systems are built and maintained. Presenting their software factory approach, J. Greenfield and K.
Short write in [Greenfield, Short]: "The software industry remains reliant on the craftsmanship of skilled individuals engaged in labor intensive manual tasks. However, growing pressure to reduce cost and time to market and to improve software quality may catalyze a transition to more automated methods. We look at how the software industry may be industrialized, and we describe technologies that might be used to support this vision. We suggest that the current software development paradigm, based on object orientation, may have reached the point of exhaustion, and we propose a model for its successor."

The central idea of object composition is progressively being replaced by the notion of model transformation. One can view these in continuity or in rupture. The idea of software systems being composed of interconnected objects is not in opposition to the idea of the software life cycle being viewed as a chain of model transformations. In November 2000, the OMG made public the MDA initiative, a particular variant of a new global trend called model driven development. The basic ideas of MDD are germane to many other approaches such as generative programming, domain-specific languages, model-integrated computing, software factories, etc. MDA may be defined as the realization of MDD principles


around a set of OMG standards like MOF, XMI, OCL, UML, CWM, SPEM, etc. MDD presently makes several promises about the potential benefits that could be reaped from a move from code-centric to model-based practices. When we observe these claims, we may wonder when they will be satisfied: in the short, medium or long term, or perhaps, for some of them, never.

The MDA approach does not have a unique goal but multiple goals. Among the objectives pursued, one may list the separation of business-neutral descriptions from platform-dependent implementations; the identification, precise expression, separation and combination of specific aspects of a system under development with domain-specific languages; and the establishment of precise relations between these different languages in a global framework, in particular the possibility to express operational transformations between them.

MDD is an evolving paradigm with many expectations on its final potential and with much research and development needed before it will meet those expectations. As stated by Steve Cook []: "Over the past few years, new modelling technologies that have begun to emerge under the banner of Model Driven Architecture (MDA) have created a buzz of interest by promising to increase the productivity of software development and the portability of software. On the other hand, we can see parallels between the promotion of MDA and the promotion of Computer-Aided Software Engineering (CASE) tools during the 1980s, and CASE clearly failed to live up to its promises. We should be very sceptical about new claims for model-driven software development unless we can demonstrate new ways of thinking about the problem that will avoid the pitfalls and failure of CASE."

In general we can view Model Driven Development as a general principle for software engineering that can be realised in a number of different ways (using different standards) and supported by a variety of tools.
The following table illustrates this, showing three current realisations of the MDD principle, with tools that are designed in support of a particular standard.

Principle                  Standard   Supporting Tools
Model Driven Development   MDA        EMF
                           DSM        Visual Studio 2005
                           MIC        GME

Table 2: Three approaches to MDD

More detail regarding these three approaches to MDD is given in section II.2. There are three general aspects of MDD that we consider important to look at, which are covered in the subsequent sections: principles of modelling, model classification and operations on models. The final sections cover current MDD tools, some potential research issues surrounding MDD, some relevant sources of information on MDD, and finally more specific references to MDD documents. Document D7.1 from INTEROP WP7 contains a relevant section on model transformation tools, in particular related to the proposed OMG QVT standard.



II.2 MDD Standards

II.2.1 OMG's Model Driven Architecture (MDA)

As defined by the OMG (although currently only a draft), the following was approved unanimously by 17 participants at the ORMSC plenary session, meeting in Montreal on 23-26 August 2004. The stated purpose of these two paragraphs was to provide principles to be followed in the revision of the MDA Guide:

MDA is an OMG initiative that proposes to define a set of non-proprietary standards that will specify interoperable technologies with which to realize model-driven development with automated transformations. Not all of these technologies will directly concern the transformations involved in MDA. MDA does not necessarily rely on the UML, but, as a specialized kind of MDD (Model Driven Development), MDA necessarily involves the use of model(s) in development, which entails that at least one modeling language must be used. Any modeling language used in MDA must be described in terms of the MOF language, to enable the metadata to be understood in a standard manner, which is a precondition for any ability to perform automated transformations.

The three primary goals of MDA are portability, interoperability and reusability. Over the last dozen years, the Object Management Group, better known as OMG, standardized the object request broker (ORB) and a suite of object services. This work was guided by the Object Management Architecture (OMA), which provides a framework for distributed systems, and by the Common ORB Architecture, or CORBA, a part of that framework. The OMA and CORBA were specified as a software framework, to guide the development of technologies for OMG adoption. This framework is in the same spirit as the OSI Reference Model and the Reference Model of Open Distributed Processing (RM-ODP or ODP [ISO/IEC 10746-1]). The OMA framework identifies types of parts that are combined to make up a distributed system and, together with CORBA, specifies the types of connectors and the rules for their use.
Starting in 1995, OMG informally began to adopt industry-specific (domain) technology specifications. Recognizing the need to formalize this activity, OMG added the new Domain Technology Committee in the major process restructuring of 1996 and 1997. Parallel work around object modelling resulted in the adoption of the Unified Modelling Language, UML. OMG members then began to use UML, sometimes as a replacement for IDL, in the specification of technologies for OMG adoption. In keeping with its expanding focus, OMG began the development of a second framework, the Model Driven Architecture or MDA [OMG MDA]. MDA is not, like the OMA and CORBA, a framework for implementing distributed systems. It is an approach to using models in software development. The Model Driven Architecture starts with the well-known and long-established idea of separating the specification of the operation of the system from the details of the way the system uses the capabilities of its platform. MDA provides an approach and tools for:

- specifying a system independently of the platform that supports it,


- specifying platforms,
- choosing a particular platform for the system, and
- transforming the system specification into one for a particular platform.
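A minimal, hypothetical illustration of the last step, transforming a platform-independent specification into a platform-specific one, might look as follows. A real MDA tool would of course operate on MOF-based metamodels and standardised transformation languages rather than plain dictionaries; the model, type map and target platform here are invented for the example.

```python
# PIM-to-PSM transformation sketch: a platform-independent class model is
# mapped to a platform-specific artefact, here a SQL DDL statement for a
# (hypothetical) relational target platform.
PIM = {
    "class": "Customer",
    "attributes": [("name", "String"), ("age", "Integer")],
}

# The choice of target platform is captured by a type mapping.
TYPE_MAP = {"String": "VARCHAR(255)", "Integer": "INTEGER"}

def to_sql_psm(pim):
    """Transform a PIM class description into a platform-specific SQL table."""
    cols = ", ".join(f"{attr} {TYPE_MAP[typ]}" for attr, typ in pim["attributes"])
    return f"CREATE TABLE {pim['class']} ({cols});"

print(to_sql_psm(PIM))
# CREATE TABLE Customer (name VARCHAR(255), age INTEGER);
```

Swapping in a different TYPE_MAP (or a different generator function) retargets the same PIM to another platform, which is the essence of the portability goal.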

The MDA core is based on the following standards: MOF (Meta Object Facility), UML (Unified Modelling Language) and CWM (Common Warehouse Metamodel). There is also a series of model interchange standards based around XML (XMI), Java (JMI) and CORBA (CMI). MOF plays an important role because at this level we can define transformations between models and we can also define other metamodels. UML is a unified modelling language, not a universal modelling language. Moreover, UML is not a methodology; optionally, a methodology could be defined within Model Driven Engineering (MDE). In addition, UML can be tailored for a specific purpose and domain through constraint and extension mechanisms. Such a tailored UML is called a UML Profile; some UML Profiles are under standardization and others are already standardized (e.g. the UML Profile for Enterprise Application Integration and the UML Profile for Enterprise Distributed Object Computing). In the above pictures, the UML Profile concept is not included because it is considered a UML capability. CWM is a language metamodel to specify the design and use of data warehouses.

XMI is a mapping to the XML technical space, mainly intended to allow models and metamodels to be exchanged after serialization. JMI is a mapping to the Java technical space, in order to allow access to the internal structure of models and metamodels by executable Java programs. CMI is a mapping to the CORBA technical space, in order to show the interoperability of the main OMG standards.

There are other definitions of MDA; of particular note is that included in "An MDA Manifesto" [MDA Manifesto] as published by the MDA Journal: "In essence, the foundations of MDA consist of three complementary ideas:

1. Direct representation. Shift the focus of software development away from the technology domain toward the ideas and concepts of the problem domain.
Reducing the semantic distance between problem domain and representation allows a more direct coupling of solutions to problems, leading to more accurate designs and increased productivity.

2. Automation. Use computer-based tools to mechanize those facets of software development that do not depend on human ingenuity. One of the primary purposes of automation in MDA is to bridge the semantic gap between domain concepts and implementation technology by explicitly modeling both domain and technology choices in frameworks and then exploiting the knowledge built into a particular application framework.

3. Open standards. Standards have been one of the most effective boosters of progress throughout the history of technology. Industry standards not only help eliminate gratuitous diversity but they also encourage an ecosystem of vendors producing tools for general purposes as well as all kinds of specialized niches, greatly increasing the attractiveness of the whole endeavor to users. Open source development ensures that standards are implemented consistently and encourages the adoption of standards by vendors.


Another primary source of MDA definitions is the OMG's MDA Guide [OMG MDA]. This document defines the main set of concepts related to Model Driven Architecture, including definitions of CIM, PIM and PSM. Finally, many books are starting to appear that introduce MDA, with the authors giving their own flavour to the definition [Frankel] [Kleppe, Warmer, Bast] [Mellor etal].

II.2.2 Microsoft's Domain Specific Modelling (DSM)

There is so far little published material regarding Microsoft's approach to MDD; perhaps one of the best sources of information is the ongoing debate within a series of articles published by the MDA Journal [MDA Journal] and on the Web site mentioned by Steve Cook in his first article, which gives details of Microsoft's modelling tools. The place to watch for upcoming information is http://lab.msdn.microsoft.com/vs2005/teamsystem/workshop/.

In October 2004, Microsoft released a preview version of new tools intended to make it easier for companies to create custom Web applications. This was released as a "community technology preview" version of modelling tools, formerly code-named Whitehorse, to be included in Visual Studio 2005 Team System, an upcoming addition to Microsoft's line of developer packages that focuses on enterprise developers. Domain-specific language tools lay the foundation for software factories by providing a framework and a set of tools for delivering domain-specific visual designers that plug into Visual Studio Team System. These designers could be tools for industry verticals, such as the financial/ERP, health care or telecommunications industries, or they could be tools for development across numerous disciplines, such as object-oriented modeling and architecture. Other vendors, such as IBM and Borland Software, have also invested substantially in modelling. Borland announced its own modelling tools, called Together Architect, and IBM has just released the new Rational Software Architect.
High-quality software that doesn't easily crash or require frequent maintenance is especially important for a company's most significant applications. Market researcher Gartner estimates that the average cost of unplanned downtime for so-called "mission critical" software is $100,000 per hour, and that fully 40 percent of application failures are due to software problems. One of the most immediate concerns of development tool companies is preparing corporate customers for building new software following a service-oriented architecture (SOA), with a focus on more flexible, better-quality software at a lower cost. An SOA, for example, could allow an e-commerce site to perform a complex transaction involving different business partners by linking together several Web services, rather than requiring programmers to hand-code connections to partners. By publishing the software development kit for the modelling tools in Visual Studio, Microsoft hopes to encourage partners and customers to create customised model components to describe software functions peculiar to specific industries and tasks. The new Visual Studio versions will be one of the first steps in a big Microsoft effort, dubbed "Software Factories", to enable companies to produce customised applications faster by automating routine tasks.


Developers who are using UML, data modelling or business process modelling tools are often faced with the need to personalise or customise these tools. They turn them into domain-specific, company-specific and even project-specific modelling tools. They modify what a diagram shows and what it means, and they add code generators, report generators, and functionality to do useful things with their models. Often they will write custom code to import and export their models. In some cases, they even devise their own interpretation of standard languages like UML. In other cases, a developer may simply be reusing a pattern of code with a template language or XML schema, for which he writes a code generator for specific projects. Developers who use models and generators like this are practising model-driven development as a way to achieve higher productivity. The 'Domain-Specific Language (DSL) Concepts Editor' will enable developers, both tool builders and tool users, to create their own custom and problem-specific modelling tools with little effort. The intention is to provide tools to build graphical designers that are focused on specific aspects of application building. However, this does not necessarily mean a need to invent entirely new notations. One can use the same well-known notations for statecharts etc. in these custom tools, but one will have the freedom to extend or redefine the notation, and to describe precisely what the notation means in the problem domain. The 'DSL Concepts Editor' also enables developers to define concept models that are not necessarily used in graphical designers or modelling tools.

II.2.3 Model Integrated Computing (MIC)

Model-Integrated Computing (MIC) addresses the problems of developing software-integrated systems by providing rich, domain-specific modeling environments, including model analysis and model-based program synthesis tools.
This technology is used to create and evolve integrated, multiple-aspect models using concepts, relations, and model composition principles routinely used in the specific field, to facilitate systems/software engineering analysis of the models, and to automatically synthesize applications from the models. MIC has been used to develop many different technologies and solutions for industry and government. Model-Integrated Computing has been developed over a decade at ISIS, Vanderbilt University, for building embedded software systems. The key element of this approach is the extension of the scope and usage of models such that they form the "backbone" of a model-integrated system development process. See http://www.isis.vanderbilt.edu/research/mic.html

In Model-Integrated Computing, models play the following central roles:

- Integrated, multiple-view models capture the information relevant to the system to be developed. Models can explicitly represent the designer's understanding of the entire system, including the information processing architecture, the physical architecture, and the environment it operates in. Integrated modeling allows the explicit representation of dependencies and constraints among the different modeling views.
- Tools analyze different, but interdependent, characteristics of systems (such as performance, safety, reliability, etc.). Tool-specific model interpreters translate the information in the models to the input languages of analysis tools.
- The integrated models are used to automatically synthesize the software. The model-integrated program synthesis process utilizes model interpreters to translate the models into executable specifications.
- A UML-based metaprogramming interface allows the synthesis and evolution of domain-specific MIPS environments.


Using MIC technology one can capture the requirements, actual architecture, and the environment of a system in the form of high-level models. The requirement models allow the explicit representation of desired functionalities and/or non-functional properties. The architecture models represent the actual structure of the system to be built, while the environment models capture what the "outside world" of the system looks like. These models act as a repository of information that is needed for analyzing and generating the system.

Multigraph Architecture

The MultiGraph Architecture (MGA) provides a unified software architecture and framework for building domain-specific tools for: (1) building, testing, and storing domain models; (2) transforming the models into executable programs and/or extracting information for system engineering tools; and (3) integrating applications on heterogeneous parallel/distributed computing platforms. The MGA comprises three levels, as described below.

Figure 11 Multigraph Architecture

Application Level

The Application Level represents the synthesized, adaptable software applications. The executable programs are specified in terms of the Multigraph Computational Model (MCM). The MCM is a macro-dataflow model which represents the synthesized programs as an attributed, directed, bipartite graph. The runtime support for MCM, the Multigraph Kernel, is implemented as an overlay above operating and communication systems.
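To make the notion concrete, the following is a minimal sketch of an attributed, directed, bipartite graph of the general kind a macro-dataflow model uses; the node names, attributes and helper API are invented for the example, not taken from the MCM.

```python
# Illustrative sketch only: an attributed, directed, bipartite graph in
# which "actor" nodes (computations) and "data" nodes (values/buffers)
# alternate. The concrete names and attributes below are hypothetical.

class Node:
    def __init__(self, name, kind, **attributes):
        assert kind in ("actor", "data")     # bipartite: two node classes
        self.name, self.kind = name, kind
        self.attributes = attributes         # arbitrary node attributes

edges = set()

def connect(src, dst):
    """Add a directed edge; edges may only cross the two partitions."""
    assert src.kind != dst.kind, "bipartite graphs forbid same-class edges"
    edges.add((src.name, dst.name))

sensor = Node("sensor", "actor", rate_hz=100)
samples = Node("samples", "data", capacity=1024)
filter_ = Node("filter", "actor", algorithm="moving-average")

connect(sensor, samples)   # actor -> data
connect(samples, filter_)  # data -> actor
```

Attempting to connect two actors directly (e.g. `connect(sensor, filter_)`) violates the bipartite structure and is rejected by the assertion.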


Model-Integrated Program Synthesis (MIPS) Level

The Model-Integrated Program Synthesis (MIPS) Level includes generic, but customizable, tools for model building, model analysis, and application synthesis. The generic components of the architecture are: (1) a customizable Graphical Model Editor (GME), (2) a database layer for storing and accessing models, (3) model analysis tools and external analysis tools, and (4) model interpreters that synthesize applications (executable models) in terms of the MCM, or translate models into input data structures of the analysis tools (analysis models).

The Meta-Level

The Meta-Level of the MGA provides a metaprogramming interface for the components on the MIPS Level. The Metaprogramming Interface includes: (a) support for the formal specification of domain-specific modeling paradigms and model interpreters using formal languages, (b) meta-level translators to generate configuration files for the GME from the modeling paradigm specification, (c) meta-level program synthesis tools for generating model interpreters from their formal specification, and (d) support for the validation and verification of the metamodels. Metamodels capture the formal semantics of domain-specific modeling languages and model interpreters. The formal semantics of modeling paradigms define the constraints that the domain models must satisfy with respect to the concepts, relations, model composition principles and domain-specific integrity constraints. As we can consider applications as "executable instances" of domain models, the domain models can in turn be viewed as "instances" of metamodels.
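The idea that domain models are "instances" of a metamodel can be illustrated by reducing a metamodel to a set of well-formedness constraints that any domain model must satisfy. The constraint set, model format and helper names below are invented for the illustration; real MIC metamodels are far richer than this.

```python
# Hypothetical sketch: a "metamodel" as a list of named constraints, and a
# conformance check that reports which constraints a domain model violates.

metamodel_constraints = [
    ("every element has a kind",
     lambda m: all("kind" in e for e in m["elements"])),
    ("connections link declared elements",
     lambda m: all(src in names(m) and dst in names(m)
                   for src, dst in m["connections"])),
]

def names(model):
    """The set of element names declared in a model."""
    return {e["name"] for e in model["elements"]}

def conforms(model):
    """Return the names of violated constraints (empty if the model conforms)."""
    return [name for name, check in metamodel_constraints if not check(model)]

# A domain model that is an "instance" of this (toy) metamodel:
domain_model = {
    "elements": [{"name": "pump", "kind": "actuator"},
                 {"name": "ctrl", "kind": "controller"}],
    "connections": [("ctrl", "pump")],
}
```

Here `conforms(domain_model)` returns an empty list; a model with an untyped element or a dangling connection would get both constraint names back.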
Impact

- Metaprogramming tools significantly decrease the effort required to create integrated domain-specific modeling environments.
- Metaprogramming tools decrease the development time of generators.
- MIPS environments enable the rapid modification/adaptation of applications by simply modifying domain-specific models.
- The metaprogramming toolset supports environment evolution (i.e., changing the modeling paradigm).

II.3 Principles of Modelling

The attraction of models is that they enable a problem to be precisely described without having to delve into the technical details; essentially a model is an abstract description. By model we mean a complex structure that represents a design artefact, such as a relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, complex document, or software configuration. Many uses of models involve managing changes in models and transformations of data from one model into another. These uses require an explicit representation of mappings between models. We propose to make database systems easier to use for these applications by making model and model mapping first-class objects with special operations that simplify their use. We call this capacity model management. [P.A. Bernstein, A.L. Levy & R.A. Pottinger, MSR-TR-2000-53] Modelling is a fundamental part of Model Driven Development, and is in itself a far older discipline than computing. However, even in the context of computing systems, modelling is far older than the recent acknowledgement through MDD that it can form a major and useful part of system engineering. Modelling has been advocated as an important part of system design for almost as long

as we have been building computing systems. Early approaches to modelling computing systems, such as Yourdon Structured Design (late 1970s), DeMarco Structured Analysis (late 1970s), Merise (late 1970s), Finkelstein/Martin Information Engineering (1970s/80s), the Structured System Analysis and Design Method (SSADM and LSDM) (early 1980s), Shlaer-Mellor (early 1980s) and Booch (early 1990s), were used to aid system building as long as 30 years ago. Since then numerous other informal and formal approaches to modelling have been developed. We could not possibly do a complete review of all the past and present approaches to modelling; however, it is important to point out that:

a) modelling is not in itself a new concept
b) modelling approaches existed before UML, MDD, MDA, MOF or the OMG
c) UML is not the only language for modelling
d) modelling need not be object-oriented

UML is currently one of the most widely used modelling languages. Despite the many objections to and problems with it, it remains accepted by industry and supported by numerous tool vendors. Prior to UML, perhaps the most widely adopted modelling languages were Entity Relationship Diagrams for modelling data, SDL for modelling telecommunication systems, and languages such as Z, B, CSP, LOTOS, etc. for more formal modelling of systems and processes.

II.3.1 Modelling Frameworks, Enterprise, and Architecture

Model driven architecture entails, on the one hand, a partially ordered set of models constituting the description, on several levels, of a software service as well as the context in which it works, and on the other hand the process of constructing those models in a step-wise refinement manner. More precisely, "architecture" in this context would denote a description of a structure (architectural model) comprising a set of interacting components/nodes/layers that together make up a software service; e.g.
the term "information systems architecture" is used in at least three different ways, and hence denotes three different architectural levels:

1. The set of components that make up a particular computerized information system, and how these components are interlinked and interact.
2. A collection of information systems used in some organisation, and how they interact.
3. A model of the business conducted in an organisation, and how its various functions and processes are supported by information systems (services).

"Model driven" would entail that the final architecture, itself being a model, is derived from earlier, more sketchy enterprise models in a step-wise refinement process. This is in line with, for example, "information systems architecture" according to Zachman, where the complete architecture is made up of all models and descriptions constructed from the early requirements capture stages through to implementation, answering at each step the questions of what, how, who, where, when and why. This is much the same way that the architectural design of a building starts with quick sketches of the suggested building exterior and ends with detailed drawings of the building and its components (floors, walls, roofs, windows, doors ...), including pipes and electrical wiring, etc. In order to define the field and determine the scope of enterprise architecture, both researchers and practitioners have produced a number of architecture frameworks. Frameworks provide structure to the architectural descriptions by identifying, and sometimes relating, different architectural domains and the modelling techniques associated with them.


II.3.1.1 The Zachman Framework

In 1987, John Zachman introduced the first and best-known enterprise architecture framework [Zachman 87], although at the time it was called the Framework for Information Systems Architecture. The framework, as it applies to enterprises, is simply a logical structure for classifying and organising the descriptive representations of an enterprise that are significant to the management of the enterprise, as well as to the development of the enterprise's systems. In its simplest form, the framework depicts the design artefacts that constitute the intersection between the roles in the design process, that is, owner, designer and builder, and the product abstractions, that is, what (material) it is made of, how (process) it works, and where (geometry) the components are relative to one another. Empirically, in the older disciplines, some other "artefacts" were observed in use for scoping and for implementation purposes. These roles are somewhat arbitrarily labelled planner and sub-contractor, and are included in the framework graphic that is commonly exhibited. From the very inception of the framework, some other product abstractions were known to exist, because it was obvious that in addition to what, how and where, a complete description would necessarily have to include the remaining primitive interrogatives: who, when and why. These three additional interrogatives are manifest as three additional columns of models that, in the case of enterprises, depict: who does what work, when things happen, and why various choices are made. Advantages of the Zachman framework are that it is easy to understand, it addresses the enterprise as a whole, it is defined independently of tools or methodologies, and any issue can be mapped against it to understand where it fits. An important drawback is the large number of cells, which is an obstacle to the practical applicability of the framework.
Also, the relations between the different cells are not that well specified. Notwithstanding these drawbacks, Zachman is to be credited with providing the first comprehensive framework for enterprise architecture, and his work is still widely used.

II.3.1.2 The Open Group Architecture Framework

The Open Group Architecture Framework (TOGAF) originated as a generic framework and methodology for the development of technical architectures, but evolved into an enterprise architecture framework and method. Version 8 of TOGAF [TOGAF] is called the Enterprise Edition and is dedicated to enterprise architectures. TOGAF has four main components:

- A high-level framework, based on some of the key concepts, and a methodology called the Architecture Development Method (ADM). The framework considers an overall Enterprise Architecture as composed of four closely interrelated architectures: Business Architecture, Data/Information Architecture, Application Architecture, and Technology (IT) Architecture. ADM is considered to be the core of TOGAF, and consists of a stepwise cyclic approach for the development of the overall enterprise architecture.
- The TOGAF Enterprise Continuum, which comprises the TOGAF Foundation Architecture (containing the Technical Reference Model, The Open Group's Standards Information Base (SIB) and the Building Blocks Information Base (BBIB)) and the Integrated Information Infrastructure Reference Model. The central idea behind the Enterprise Continuum is to illustrate how architectures are developed across a continuum

ranging from foundational architectures, through common systems architectures and industry-specific architectures, to an enterprise's own individual architectures.
- The TOGAF Resource Base, a set of tools and techniques available for use in applying TOGAF and the TOGAF ADM (architecture views, business scenarios, ADML, case studies, other architecture frameworks, a mapping of TOGAF to the Zachman framework, etc.).

Apart from the main components of the framework, TOGAF identifies a number of views, which are to be modelled in an architecture development process. The architecture views, and corresponding viewpoints, fall into the following categories (the TOGAF taxonomy of views is compliant with IEEE Std 1471-2000):

- Business Architecture Views, which address the concerns of the users of the system, and describe the flows of business information between people and business processes (e.g. People View, Process View, Function View, Business Information View, Usability View, Performance View).
- Engineering Views, addressing the concerns of system and software engineers responsible for developing and integrating the various components of the system (e.g. Security View, Software Engineering View, Data View, System Engineering View, Communications Engineering View).
- Enterprise Manageability Views, addressing the concerns of systems administrators, operators and managers.
- Acquirers Views, addressing the concerns of procurement personnel responsible for acquiring the commercial-off-the-shelf (COTS) software and hardware to be included in the system (e.g. the Building Blocks Cost View, the Standards View). These views typically depict building blocks of the architecture that can be purchased, and the standards that the building blocks must adhere to.
II.3.1.3 DoDAF/C4ISR

The Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture Framework [C4ISR] was originally developed in 1996 for the US Department of Defence, to ensure a common unifying approach for the Commands, military Services, and Defence Agencies to follow in describing their various architectures. A new version of the framework, now titled the Department of Defence Architecture Framework (DoDAF), was released in August 2003. Although DoDAF has a rather specific target, it can be extended to system architectures that are more general. DoDAF sees the architecture description as an integration of three main views: operational view, system view and technical view. A number of concepts and fundamental definitions (e.g. architecture, architecture description, roles, and the interrelationships of the operational, systems, and technical architecture views) are provided. These are complemented by framework-compliant guidelines and principles for building architecture descriptions (including the specific product types required for all architecture descriptions), and a Six-Step Architecture Description procedure.

II.3.1.4 RM-ODP

The Reference Model for Open Distributed Processing (RM-ODP) is an ISO/ITU standard [ISO/IEC 10746-1] which defines a framework for the architectural specification of large distributed systems. The standard aims to provide support for inter-working, interoperability, portability and distribution, and therefore to enable the building of open, integrated, flexible, modular, manageable,

heterogeneous, secure and transparent systems. The standard has four parts:

- Part 1: Overview, containing a motivational overview of the standard and its concepts [ISO/IEC 10746-1].
- Part 2: Foundations, defining the concepts, the analytical framework for the description of ODP systems, and a general framework for assessment and conformance [ISO/IEC 10746-2].
- Part 3: Architecture, describing the ODP framework of viewpoints for the specification of ODP systems in different viewpoint languages [ISO/IEC 10746-3]. It identifies five viewpoints on a system and its environment: enterprise, information, computation, engineering and technology.
- Part 4: Architectural semantics, showing how the modelling concepts from Part 2 and the viewpoint languages from Part 3 can be complemented in a number of formal description techniques, such as LOTOS, Estelle, SDL, and Z [ISO/IEC 10746-4].

II.3.1.5 GERAM

The Generic Enterprise Reference Architecture and Methodology (GERAM) [GERAM] defines the enterprise-related generic concepts recommended for use in enterprise engineering and integration projects. These concepts can be categorised as:

- Human-oriented concepts: to describe the role of humans as an integral part of the organisation and operation of an enterprise, and to support humans during enterprise design, construction and change.
- Process-oriented concepts, for the description of the business processes of the enterprise.
- Technology-oriented concepts, for the description of the supporting technology involved in both enterprise operation and enterprise engineering efforts (modelling and model use support).

The model proposed by GERAM has three dimensions: the lifecycle dimension; the instantiation dimension, allowing for different levels of controlled particularisation; and the view dimension, with four views: Entity Model Content view, Entity Purpose view, Entity Implementation view, and Entity Physical Manifestation view.
Each view is further refined and might have a number of components.

II.3.1.6 Nolan Norton Framework

This framework is the result of a research project of the Nolan Norton Institute (involving 17 large Dutch companies) on current practice in the field of architectural development [Zee, Laagland, Hafkenscheid]. Based on the information collected from the companies, the authors have defined a five-perspective vision of enterprise architecture:

- Content and goals: which type of architecture is developed, what are its components and the relationships between them, and what goals and requirements does the architecture have to meet? More precisely, this perspective consists of five interconnected architectures (corresponding to what we have called architectural views): product architecture, process architecture, organisation architecture, functional information architecture, and technical information architecture.
- Architecture development process: what are the different phases in the development of an architecture, what is their sequence, and what components have to be developed in each phase?
- Architecture process operation: what are the reasons for change, what information is needed, and where do the responsibilities for decision making lie?

- Architectural competencies: what level of expertise should the organisation reach (and how) in order to develop, implement and use an architecture?
- Cost/Benefits: what are the costs and benefits of developing a new architecture?

II.4 Classification of Models

The impact of MDA on the organization of the software production and maintenance workbench is beginning to appear. The UML metamodel, which was previously at the centre of this workbench, is now only one metamodel among others, allowing the initial capture of UML models. At the same time, we see a number of new tools appearing, like independent transformation engines and frameworks. All these tools operate on top of a model and metamodel repository. Each of them implements a limited set of specific operations on models and metamodels. Their behaviour is sometimes partially driven by generic uploadable metamodels. The MDD landscape is going to be populated by a high number of metamodels, just as the programming language technical space is populated by a high number of language grammars, and the XML document space by DTDs and XML schemas.
When MDD is accepted, we will see hundreds of metamodels being defined and used. Each metamodel will correspond to a specific situation. For example, some metamodels are object-oriented and some are not: an RDBMS, as depicted below, is not based on object technology.
[Class diagram: an RDBMS metamodel in which RModelElement (kind : String, name : String) is the base of Table, Column (type : String), Key and ForeignKey; tables own columns and keys, and a ForeignKey refers to a Key.]
Figure 12 Example of a relational DBMS metamodel

Some metamodels may be similar to grammars, but generally they are more versatile. The correspondence between these notions is a subject of current debate. Besides the fact that a metamodel is usually tree-based, there are several differences in application between these notions. A model is a representation of a system. Some systems are static, i.e. they don't change with time. The USA census of 2000 is a model of a system considered as stable at a given point in time, April 1, 2000, with 281,421,906 people of various names, sexes, ages, origins and many other characteristics. Obviously the model of a static system is itself static. Unlike this example, most systems are dynamic, i.e. they change in time. Staticness is a highly interesting property of many models because it facilitates reasoning on them. A source program is a static model because the code does not change with time. This characteristic allows, for example, the establishment of axiomatic proofs of a program. A StateChart is a static model of a dynamic system, and many models are of this kind. However, in some cases we may also have dynamic models, i.e. models evolving in time.

Let us take an airport as a system, and let us build a simulation of this airport. The airport is represented by the simulation, i.e. the simulation has a behaviour that represents the behaviour of the real airport. The simulation is a dynamic model. Now, if the simulation is written in a given programming language like Simula, the source program itself is a static model of the dynamic simulation execution. The figure below summarizes this, by showing that there are static and dynamic systems on one side, and static and dynamic models on the other. The only combination that seems to make no sense is a dynamic model of a static system. All other cases make sense, with the most common situation being a static model of a dynamic system.
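The airport example can be rendered as a toy program: the source text below is a static model, while any particular run of it, a trajectory of states over simulated time, is a dynamic model. The airport behaviour here is invented and greatly simplified.

```python
# A toy airport simulation. The program text is a static model of the
# simulation; the sequence of states produced by running it is dynamic.
# Plane identifiers and the one-landing-per-tick rule are hypothetical.

class Airport:
    def __init__(self):
        self.queue = []          # planes waiting for the runway
        self.landed = []

    def arrive(self, plane):
        self.queue.append(plane)

    def tick(self):
        """Advance simulated time by one step: at most one landing."""
        if self.queue:
            self.landed.append(self.queue.pop(0))

airport = Airport()
for plane in ["AF447", "LH400", "BA117"]:
    airport.arrive(plane)

states = []                      # the dynamic model: a state trajectory
for _ in range(3):
    airport.tick()
    states.append(list(airport.landed))
```

After three ticks, `states` records the evolving landed list one plane at a time, illustrating that the same static source denotes a time-varying behaviour.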

Figure 13 Relations between static and dynamic models and systems

Another important distinction between models is the difference between product and process models. The relation between process and product is a recurrent theme in computer science. Many contributions suggest a strong structural correspondence between process and product models, and several authors have provided empirical evidence of this in recent decades. These observations were usually made on the occasion of establishing guiding rules or applied model elaboration. When discussions about Structured Programming were active, N. Wirth noticed that: "structuring considerations of program and data are often closely related. Hence, it is only natural to subject also the specification of data to a process of stepwise refinement. Moreover, this process is naturally carried out simultaneously with the refinement of the program". The relation between process and product takes on more importance when they are expressed as formal models. One typical example in the MDA area was the decision to separate the search for unified modelling techniques into two phases. In the first, a metamodel for defining software artefacts (UML) was defined, i.e. a software product metamodel. After that, a second metamodel for expressing software processes could be designed (SPEM). There are several relations between these two standards, which are evolving in synchronization (UML 2.0 and SPEM 2.0) [ref BezivinBreton]. There are many other situations where a product/process couple exists in the MDD landscape, for example when considering component/workflow systems. The product metamodel has a role similar to data structure definition, while the process metamodel is more related to the generic definition of control structures. The area of inter-model relationships has, however, yet to mature (perhaps in a way similar to XLink in the XML technical space). Another possible distinction is between prescriptive and descriptive models.
A descriptive model is a representation of a system already existing. A prescriptive model is a representation of a system

intended to be built. As a matter of fact, there is little difference between these two kinds of models. In the first case, the model is built by observation of the system, while in the second case the system is built by observation of the model. What is different is how these models are used. In the first case the model may be used to understand a system, or to extract some aspects of it that could be useful for building another system later. In the second case, the model may be used as a guide, a blueprint, for building a new system. In both cases the same representedBy relation holds at the end. The classification of all these metamodels is a subject of high interest. The variety of different metamodels also suggests a secondary specialization relation between metamodels. There is as yet no consensus on this relation, which could be used to build a lattice of metamodels, with a top element (the metamodel selecting nothing) and a bottom element (the metamodel selecting everything). Of course the ambition here is completely different from general-purpose ontology initiatives like WordNet or Cyc: the domain of applicability is limited by the perimeter of information systems building and maintenance. It is likely that large monolithic metamodels like UML 2.0 will see some limitations to their usage because of their complexity. There are several reasons for this. The major one is that most usages rely on only a small subset of the entire metamodel. Any well-designed process should provide a precise characterization of these subsets. Although there are some conceptual tools to do this (UML profiles are one example), it is much more difficult to work by restriction than by extension. Working by extension means dealing with a much larger number of metamodels of low granularity and high abstraction.
Tools to deal with this important number of combinable and extensible metamodels are still to be invented. Present navigator architectures, often based on the class browsers of the 1980s, are not sufficient to handle this new situation. The subject of model classification is presently of high relevance. In his DASFAA'04 guest talk "Data Semantics Revisited: Database and the Semantic Web" (Jeju Island, Korea, 2004), John Mylopoulos mentions three broad categories of models: I-models, E-models and C-models. I-models (for intentional) consist of a set of predicates with associated axioms; database schemas with integrity constraints fall into this category, but so do logical theories. E-models (for extensional) have set-theoretic constructions, and query answering based on set-theoretic relationships; Tarskian and Kripke models, but also databases, fit here. C-models (for computational) are characterized by the fact that query answering is produced by running programs, e.g. a simulation program.

II.5 Operations on Models

Within the context of MDD, model transformation is the primary operation on models that is talked about; however, it is not the only one. Operations such as model comparison, model merging, etc. are also considered, although these could be seen as particular types of model transformation. This section focuses on the SoA of model transformation.

II.5.1 Model Transformation (MT)

The concept of model transformations existed before QVT and even before MDA; the following topics each address some aspect of transforming data from one form to another:

Compiling Techniques [Aho etal]


Graph Grammar/Transformations [EEKR99]
Triple Graph Grammars [Schurr]
Incremental Parsers [Ghezzi, Mandrioli]
Viewpoint framework tools [Finkelstein etal]
Databases, update queries
Refinement
XML, XSLT, XQuery [w3c]

To be literal about it, even simple straightforward programming is generally done as a means to transform data. This becomes more stylised when programming patterns such as the Visitor pattern [gof] are used as a way to visit data in one model and create data in another. Some techniques, such as those presented in [Akehurst etal] [qvt-partners] [dstc-qvt], base the transformation language on the notion of relations. However, this too is a new application of old ideas, as originally applied (for example) in the fields of databases (e.g. update queries) and system specification (or refinement) using the formal language Z (which is heavily dependent on the notion of relations and their use for the manipulation of data). The interesting aspects of the MDD approach to transformation are the focus on:

Executable specifications; unlike the Z approach.
Transforming models; models being viewed as higher-level concepts than database models and certainly higher level than XML trees.
Deterministic output; the main problem with graph grammars is that they suffer from non-deterministic output: applying the rules in a different order may result in a different output.

A general depiction of a transformation and the necessary relevant specifications is shown in the figure below.

Figure 14 Transformation with specifications

II.5.2 Model Transformation Languages

Transformation of one model into another model is the central concept of model-driven development. Specifying and implementing model transformations is thus a field of active research. Model transformation can be tackled using general-purpose programming languages; however, specialized transformation languages can significantly ease the implementation of transformations. In the following we classify the various approaches by the programming paradigm that is used: functional languages, pattern-based languages, object-oriented languages,


hybrid languages, and graphical languages.
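As a baseline for this classification, a transformation written directly in a general-purpose language might look like the following minimal sketch. The class-model encoding and the class-to-table mapping are invented for illustration; they do not correspond to any particular tool or metamodel.

```python
# Hypothetical "class model": class name -> list of attribute names.
class_model = {"Customer": ["name", "address"], "Order": ["date"]}

def classes_to_tables(model):
    # Each class becomes a table with a generated primary-key column;
    # traversal and bookkeeping fall entirely on the developer.
    return {cls: ["id"] + attrs for cls, attrs in model.items()}

tables = classes_to_tables(class_model)
# tables["Customer"] == ["id", "name", "address"]
```

Even in this toy case, all knowledge of the two metamodels is implicit in the code; specialized transformation languages aim to make that knowledge explicit and reusable.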

II.5.2.1 Functional Languages

Functional programming languages are one way to implement transformations [HJGP99]. Here transformations are carried out by applying transformation operators on models. With the usual filter, map, and reduce functions these operators can be composed into more complex operations [WMH00]. One advantage of using functional languages is that the developer does not have to deal with model traversal. Furthermore, existing operators can be reused. However, the approach inherits some problems from functional languages. Pure functional programs are usually free of side effects; an efficient implementation of model transformation thus requires the use of advanced concepts such as monads [Jon01] or uniqueness types.

II.5.2.2 Pattern-based Languages

As the standard serialization format for models (XMI) is based on XML, some researchers [DHO01, PBG01, PZB00, VVP02, KH02] suggest transforming the XML document instead of the graph representation of the model. XSLT [W3C99b] can be used as a general-purpose transformation language for XML documents. XSLT uses the concept of patterns: it traverses the XML tree structure and tries to match its patterns at every node of the tree. If a pattern matches, a certain transformation rule is applied. While this concept works very well for trees, it is difficult to adapt it to arbitrarily shaped graphs. The problem of sub-graph matching in arbitrarily shaped graphs is known to be NP-complete [GJ90]. Using XSLT has severe drawbacks. XML documents are tree structures only, while most models, especially behavioural models such as state-machines or activity diagrams, are rather complex graph structures. It is possible to cross-link elements outside the tree structure using standards such as XPointer [W3C02] or XLink [W3C01]; however, XMI does not use them. With XPath [W3C99a] model edges that are not part of the serialized tree can be traversed, but this leads to very complex implementations.
Hence, XSLT and similar approaches are considered to be too difficult to use without a front-end language [PBG01].

II.5.2.3 Object-oriented Languages

Object-oriented languages are currently the first-choice programming paradigm when looking for a general-purpose programming language. Using them for model transformation is straightforward, if an object-oriented API to access the model is available. The drawback of OO languages is that the programmer has to write code for traversing the data structures [WUG2003]. Traversing trees or even arbitrary graphs is non-trivial; it often results in deeply nested loops and increases the possibility of bugs such as infinite loops or infinite recursion. Other approaches like functional languages or pattern-based languages hide the model traversal from the developer.

II.5.2.4 Hybrid Languages

Combining the advantages of different programming paradigms can be beneficial for model transformations. The Object Constraint Language [OMG03i, OMG03d] has been developed to describe constraints on instances of models. OCL is a hybrid language merging functional and OO concepts. OCL's syntax and handling of data types are close to those of object-oriented languages, but it also features functional-style constructs such as forAll and select that are conceptually close to the functional concepts of map, filter, and reduce. As OCL has been designed to work on models, it



seems most suitable for the problem of model transformation. However, OCL is a constraint language and designed to be free of side effects. The UMLAUT transformation framework [Tri01] is a research project that builds model transformations on top of the upcoming OCL2 [OMG03d] and the action semantics standard [SPH+01, OMG03d], which allows OCL2 scripts to add and remove objects and to change their properties. The VisualOCL project [BKPPT01, KTW02] equips OCL with a graphical notation. This may ease the creation and understanding of OCL because its textual notation is very compact. VisualOCL's notation does not, however, provide new concepts. In [BKPPT02] VisualOCL is combined with graph transformation theory; this way OCL is extended with operational semantics. KASE [WUG03] uses Python as its scripting language, and Python scripts can be used to implement model transformations. Python [Pyt03] is an object-oriented language that has been extended with elements of functional programming. Python is used in commercial tools, too: for example, the MDA tools of Interactive Objects [Int03] build on Python when it comes to scripting.

II.5.2.5 Graphical Languages

Textual languages are usually not the ideal medium for graphs, which inherently have a non-textual, hence graphical, notation. AGG [TFS03, EEKR99, Tae99] is a tool for editing and transforming graphs. Applying AGG to model transformations is in theory possible, although AGG has not been designed with UML in mind. However, AGG supports higher-order graphs, so it could handle association classes in the meta-model. Higher-order graphs allow for edges between edges; this way, nodes and edges can be handled in a uniform way [LB93]. The UML itself can be extended with graph transformation capabilities. [FNTZ98] uses story diagrams that show how links between objects are created or removed, how objects are created or destroyed, or how the attribute values of objects change.
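The flavour of such graph-rewriting rules can be sketched as follows. The graph encoding and rule are invented for illustration (not AGG or Story Diagram syntax); fixing the order of rule application here sidesteps the non-determinism of general graph grammars.

```python
# A toy attributed graph: node id -> label, plus labelled edges.
graph = {"nodes": {1: "Component"}, "edges": []}

def component_to_class(g):
    # Rule: for every 'Component' node, create a 'Class' node and a
    # trace edge linking them -- the shape of a story-diagram rule.
    next_id = max(g["nodes"]) + 1
    for n, label in sorted(g["nodes"].items()):  # fixed application order
        if label == "Component":
            g["nodes"][next_id] = "Class"
            g["edges"].append((n, next_id, "trace"))
            next_id += 1
    return g

component_to_class(graph)
# graph["nodes"] == {1: 'Component', 2: 'Class'}
# graph["edges"] == [(1, 2, 'trace')]
```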
The approach taken by [HE00] is a merge of UML object diagrams and graph transformations; insofar it is comparable to Story Diagrams. Similar to [FNTZ98], they use ideas from graph transformation theory to model the behaviour of a system. The approach acts on the object level (M0), too, just like Story Diagrams. However, the MDA is not concerned with the object level: transformations address the models at the M1 level themselves. A model can be treated as an instance of the meta-model (M0 objects are instances of the M1 classes). Story Diagrams, the approach of [HE00], or AGG could be used for model transformation, too. This means that the transformation developer would have to specify the transformation in terms of meta-model objects. However, a shift of the meta-models would ideally be accompanied by a shift of the notation. An M1 instance of an M2 meta-class has a special notation in the UML that depends on the meta-class. A shift of notation cannot be easily accomplished, since the UML notation extension introduced for Story Diagrams works for UML object diagrams only.

All the previous techniques modify graphs. A start graph is transformed several times to reach a final derivation. Usually, this final derivation still shares a common subset with the start graph, which is the case for all examples found in [AEH+99, FNTZ98, HE00]. In the MDA it is less common to morph a PIM step by step until it is a valid PSM. Even worse, this is usually not possible at all, since the PIM is based on another meta-model.

VIATRA [VVP02, CHM+02, VGP01] is a model transformation approach based on graph transformation. VIATRA has been designed to transform a model from one modelling language to


another one. The overall goal of VIATRA is to check the correctness and reliability of a design. The design, for example a UML state-machine, is transformed into a more precise model better suited for formal reasoning, for example Petri nets. Existing tools for Petri nets can then be applied, and their results are back-annotated to the UML state-machine model. To achieve this back-annotation, VIATRA builds a reference graph between the input and the output model during the transformation process.

Kafka [WUG03] is a graphical model-transformation language based on graph transformation. It supports the transformation of models from one meta-model (PIM) to models of a different meta-model (PSM). During the transformation, Kafka builds a reference graph between the two models. With this reference graph, round-trip engineering is possible: changes that the developer made to the PSM are not lost during a subsequent model-transformation step. Kafka reuses the notation of the source and target meta-models. A diagram for the transformation of a component (PIM) into a class (PSM) would show a component and a class in their normal notation. A detailed knowledge of the meta-models is thus not required for specifying transformations.

II.5.3 QVT and Model Transformation Tools

Much work on model transformation is being driven by the OMG's call for proposals on Queries, Views and Transformations (commonly known as QVT) [qvt]. There are a number of submissions to the standard with varying approaches, a good review of which is given by [GGKH03], along with some other (independent) approaches such as YATL [Patrascoiu], MOLA [Kalnins etal], etc., and earlier work such as [Akehurst]. There is a set of requirements, given in [GGKH03], of which the multiple approaches each address a different subset; as yet there is no approach that addresses all of the requirements. Some MDA-oriented tools and platforms are introduced in this section.
Some of these tools and platforms are open source software, such as EMF and UMT; others are commercial products, such as ArcStyler, Rhapsody and OptimalJ. More MDA tools and platforms can be found at modelbased.net [Modelbased]. The INTEROP D7.1 part I report on model transformation tools contains a good overview of QVT principles and various related transformation tools; the corresponding part II of that report contains an illustrative example of model transformation.

II.5.4 The Eclipse Modeling Framework (EMF)

The Eclipse Modeling Framework (EMF) is a Java framework and code generation facility for building tools and other applications based on a structured model. Eclipse is an open source software development project led by IBM, which intends to provide a kind of universal tool platform: an open extensible IDE for anything and nothing in particular [Eclipse]. EMF is a subproject of Eclipse. EMF can be thought of as MDA on training wheels. EMF uses XMI (XML Metadata Interchange) as its canonical form of a model definition, and it supports the implementation of the OMG (Object Management Group) MOF (Meta Object Facility) specification: EMF is a highly efficient Java implementation of a core subset of the MOF API [EMF]. An EMF model is just a small subset of a UML model, essentially simple definitions of classes and their attributes and relations. An EMF model can be defined in the Eclipse environment by:


Importing from a Rational Rose class model
Describing the model directly in an XMI document
Defining the model using annotated Java
Using XML Schema to describe the form of a serialization of the model

When an EMF model is specified, the EMF generator can create a corresponding set of Java implementation classes. These generated classes can be edited by adding methods and instance variables and still be regenerated from the model as needed: the additions will be preserved during the regeneration. If the code you added depends on something that is changed in the model, your code has to be updated to reflect those changes; otherwise, your code is completely unaffected by model changes and regeneration.

EMF consists of three fundamental frameworks: the core framework, EMF.Edit, and EMF.Codegen. The core framework provides basic generation and runtime support to create Java implementation classes for a model. EMF.Edit extends and builds on the core framework, adding support for generating adapter classes that enable viewing and command-based (undoable) editing of a model, and even a basic working model editor. EMF.Codegen provides code generation support.

II.5.4.1 The core EMF framework

The core EMF framework includes a meta-model (Ecore) for describing models and runtime support for the models, including change notification, persistence support with default XMI serialization, and a very efficient reflective API for manipulating EMF objects generically [What is EMF]. Ecore is the MOF-like core meta-model in EMF. In the current proposal for MOF 2.0, a similar subset of the MOF model, called EMOF (Essential MOF), is separated out. There are small, mostly naming, differences between Ecore and EMOF; EMF can transparently read and write serializations of EMOF [EMF].

II.5.4.2 EMF.Edit

The EMF.Edit framework includes generic reusable classes for building editors for EMF models.
It provides [EMF.Edit]:

Content and label provider classes, property source support, and other convenience classes that allow EMF models to be displayed using standard desktop (JFace) viewers and property sheets.
A command framework, including a set of generic command implementation classes for building editors that support fully automatic undo and redo.

II.5.4.3 EMF.Codegen

The EMF code generation facility is capable of generating everything needed to build a complete editor plug-in for the EMF model. It includes a GUI from which generation options can be specified and generators can be invoked. The generation facility leverages the JDT (Java Development Tooling) component of Eclipse [What is EMF].

EMF provides a product-quality, open source, model-driven tool for metadata-based tools integration in Eclipse [Gardener dec03]. With EMF, modelling and programming can be considered the same thing. Instead of forcing a separation of the high-level engineering/modelling work from the low-level implementation programming, it brings them together as two well-integrated parts of the same job. Often, especially with large applications, this kind of separation is still desirable, but


with EMF the degree to which it is done is entirely up to you [Budinsky]. EMF is widely used in IBM products for a variety of metamodels.

II.5.5 UML Model Transformation (UMT)

UMT (UML Model Transformation Tool) is an open source initiative to support model-driven development and the model-driven architecture. UMT is a QVT (Query View Transformation) implementation, so it is also called UMT-QVT (the SourceForge name for the UMT tool). UMT is a tool for model transformation and code generation from UML/XMI models, and it provides an environment in which new generators can be plugged in [UMT-QVT]. The UMT tool consists of the following sub-tools that provide different end-user functionality [UMT]:

The model browser/editor provides a viewer for a model and the capability of assigning different kinds of properties to model elements, for example technology-specific properties.
The project editor provides a context for working with source models.
The profile editor provides a means of supporting the concepts in product-line architectures.
The transformation tool provides the means for defining and modifying new transformations.

UMT is not a modelling tool; the source model should be described in some UML tool capable of exporting XMI. XMI is exported from the UML model and imported by UMT, so XMI serves as an intermediate format. UMT transforms XMI to a simpler format, called XMI Light, which is the internal model representation used by UMT; XSLT is used to transform XMI to XMI Light. UMT then uses a transformer (a transformation implementation) to transform the XMI Light into desired target technologies: languages like Java and C#, XML-based formats like WSDL and GML, database schemas, and application server technologies like J2EE and .NET.

II.5.5.1 XMI Light

XMI Light is the internal model used by UMT, a simple lexical view into the UML model. XMI Light is used by UMT for browsing and editing, and as a source metamodel for generating code.
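The idea of flattening a serialized model into a simpler lexical view can be illustrated with a toy sketch. The XML fragment and the field names below are invented for illustration; they follow neither the real XMI schema nor the actual XMI Light format.

```python
import xml.etree.ElementTree as ET

# A miniature, XMI-like model serialization (invented structure).
xmi = """<XMI><Model>
  <Class name="Customer"><Attribute name="name"/></Class>
</Model></XMI>"""

def to_light(doc):
    # Flatten the XML tree into a simple list of class records,
    # in the spirit of an intermediate "light" model format.
    root = ET.fromstring(doc)
    return [{"class": c.get("name"),
             "attributes": [a.get("name") for a in c.findall("Attribute")]}
            for c in root.iter("Class")]

light = to_light(xmi)
# light == [{'class': 'Customer', 'attributes': ['name']}]
```

Generators can then work against this simple view rather than the full, tool-specific XMI tree.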
II.5.5.2 Transformer

The transformer architecture in UMT is based on a flexible model where different kinds of transformer implementations can be plugged in. Currently, two main types of transformers are supported: Java transformers and XSLT transformers [UMT]:

1. A UMT XSLT transformer is an XSLT stylesheet that produces transformations based on XMI Light input models.
2. A UMT Java emitter is a Java class that implements the UMT transformer interface transformer.TransformationEngine.

The tool environment is implemented in Java. Generators are implemented in either XSLT or Java [UMT-QVT].

II.5.6 ATL (ATLAS Transformation Language)



The ATL project aims at providing a set of transformation tools for GMT [GMT: http://www.eclipse.org/gmt/]. These include a transformations repository, some sample ATL transformations, an ATL transformation editor plug-in, an ATL transformation engine, etc. The ATLAS Transformation Language (ATL) has been developed by the ATLAS team, INRIA. It is a hybrid language (a mix of declarative and imperative constructions) designed to express model transformations as required by any MDA [MDA] approach (see the QVT RFP [QVTRFP]). It is described by an abstract syntax (a MOF meta-model), a textual concrete syntax and an additional graphical notation allowing modelers to represent partial views of transformation models. A transformation model in ATL is expressed as a set of transformation rules; the recommended style of programming is declarative. Transformations from Platform Independent Models (PIMs) to Platform Specific Models (PSMs) can be written in ATL to implement the MDA approach as suggested by the OMG. Eclipse has been used as an IDE for ATL, with advanced code editing features (syntax highlighting, auto-completion, etc.); ATL will provide a context in which transformation-based MDA tools can be designed and implemented for Eclipse. The ATL project provides a complete environment for developing, testing and using model transformation programs through the following items:

A transformation repository supporting the creation of a library of transformations, ranging from simple examples to fully reusable components. A prototype repository (RAS-like [RAS]) already exists that stores transformation components in compressed archives (ZIP files) including a meta-data description in the form of an XML file. Components can be transformations (written in ATL, written in another language, or composite transformations using others), meta-models (a transformation depends on the transformation meta-model and on the input and output meta-models) and models (input/output samples).

A source code editor for transformations, adapted from the Eclipse source code editor. Several levels of implementation are possible, from a simple text editor to a full-featured one (including syntax highlighting, auto-completion, etc.). Users will be able to launch transformations from the IDE, which could interpret error messages and use them to point out problems to the modeler. A debugger will also be integrated, in order to complete the Eclipse IDE for ATL.

Finally, a transformation engine (for ATL v0.2) has been released as part of the GMT project.
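The declarative rule style that ATL recommends — a guarded source pattern paired with the creation of target elements — can be roughly emulated in a few lines of Python. This is an illustration of the flavour only, not ATL's concrete syntax; the element structure and rule are invented.

```python
# Each rule: (name, source-pattern guard, target-element factory).
rules = [
    ("Member2Person",
     lambda e: e["type"] == "Member",
     lambda e: {"type": "Person", "fullName": e["first"] + " " + e["last"]}),
]

def transform(model):
    # Apply every matching rule to every source element; the rule
    # author never writes explicit traversal or scheduling code.
    out = []
    for elem in model:
        for name, guard, build in rules:
            if guard(elem):
                out.append(build(elem))
    return out

people = transform([{"type": "Member", "first": "Ada", "last": "Byron"}])
# people == [{'type': 'Person', 'fullName': 'Ada Byron'}]
```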

Although the QVT RFP asks for a transformation language for MOF models, the ATLAS team thinks that making other metametamodels usable, provided they are semantically close enough to MOF, would add considerable value to a transformation language. This metametamodel independence would even be very interesting among OMG standards, MOF 2.0 being different from MOF 1.4. Finally, the ATL framework also integrates the notion of "technical spaces" [Kurtev0001]. Although mainly intended to deal with MDA models (based on MOF meta-models and accessible via XMI [wwwXMI] or JMI [JMI10][JMIfaq]), this framework should also handle other kinds of models from different technological spaces (e.g. Java programs, XML documents, DBMS artifacts, etc.). To this end, what is needed is a collection of adaptable "injectors" and "extractors" to


complement the library of MDA transformation components. Models, meta-models, transformations, injectors and extractors are examples of MDA components that will be handled as uniformly as possible by the ATL repository.

II.5.7 ArcStyler

ArcStyler is a commercial MDA tool from Interactive Objects [Modelbased]. ArcStyler is a cross-platform, standards-compliant environment, fully implemented in Java, for the rapid design, modelling, generation, deployment and management of high-quality, industrial-strength applications of any size, for architectures based on Java/J2EE and .NET as well as custom infrastructures and existing legacy platforms [IO-Software]. ArcStyler provides a comprehensive, architecture-driven solution for end-to-end model-driven development of modern, component-based applications. By assisting developers with important architectural tasks, ArcStyler simplifies and expedites the entire development life cycle, from the platform-independent business model to platform-specific refinement and optimized code generation for the leading application servers. This approach is currently being standardized as Model Driven Architecture by the OMG [IO-ArcStyler].

ArcStyler is described by Interactive Objects as an "Architectural IDE", which attempts to encapsulate similarities between software architecture and industrial architecture. The most significant characteristic of ArcStyler is its positioning above traditional programming IDEs (in fact such a tool is a component of ArcStyler) [Charlesworth]. ArcStyler is made up of a number of tools and modules. Among them, MDA Cartridges are pluggable MDA automation engines that contain the entire model-to-model and model-to-code transformation logic as well as model verifiers [IO-Cartridges]. Through the use of MDA Cartridges, ArcStyler does not require developers to have in-depth platform knowledge.
The most important aspect is the use of MDA Cartridges to ensure that the generated code is optimised for the environment and also conforms to the coding styles and more specific requirements of the organisation. This allows solutions to be configured for all of the major application servers (J2EE, .NET, CORBA) as well as for custom environments (COBOL, z/OS, RPG) or a mixture of these. Interactive Objects develops and supports a number of MDA Cartridges providing support for leading application servers and their descendants, including [Charlesworth]:

BEA WebLogic Server
IBM WebSphere (NT and z/OS)
IONA E2A Platform
Borland Enterprise Server
The JBoss Application Server

Three editions of ArcStyler are now available [whitepaper]:
1. The ArcStyler Web Edition, which provides MDA automation and MDA Cartridges for Web technologies including Web services, XML, .NET and J2EE.
2. The ArcStyler Enterprise Edition, which enhances the Web Edition with MDA support for all levels of an n-tier architecture. It also adds the Business Object Modeller and the Refinement Assistant, as well as an explorer for the common model repository.
3. The ArcStyler Architect Edition, which comprises the Enterprise Edition plus the MDA Development IDE. This is used by an organisation to develop MDA support for its own


architectural style through visual development or extension of MDA Cartridges according to a well-defined Cartridge Architecture. This is a key capability and allows technical knowledge to be captured, incorporated into an MDA Cartridge, and communicated throughout the business. ArcStyler supports the creation of, and relationships between, numerous UML models.

II.5.8 Rhapsody

Rhapsody, by I-Logix, is the industry's leading UML 2.0-based model-driven development environment for systems and software engineering [Rhapsody]. The philosophy of Rhapsody has always been the generation of platform-independent models that map onto many different computing platforms, long before the OMG's inception of the MDA initiative. Rhapsody consists of several collaborating parts [Douglass]:

Model-Entry System: the developer enters the PIM using standard UML diagrams.
Model Compiler: the developer generates the source for the selected language (C, C++ or Java) and compiler.
Model Tester: allows the tester to stimulate and monitor the execution of the PIM-generated application on the host or target platform.
Framework: a real-time PIM framework, provided by Rhapsody, that runs underneath your PIM.
OS-Dependent Adapter: a lightweight OS-specific adapter layer that handles interaction with the underlying RTOS.

Rhapsody is designed to support a complete model-based iterative life-cycle. The following key enabling technologies are used in Rhapsody for effective model-based development, addressing the common automation and traceability deficiencies: model-code associativity, automated implementation generation, an implementation framework, model execution and back animation, and model-based testing [Gery, Harel, Palachi].

II.5.8.1 Model-code Associativity

Model-code associativity is a key enabler for software developers to effectively leverage the benefits of model-based approaches without compromising the benefits of direct access to the implementation language/platform [Gery, Harel, Palachi].
In Rhapsody, the implementation language is augmented by an execution framework, the interface between the implementation language and the modelling language abstractions. Detailed behaviours are written not in the modelling language but in the target implementation language. Code is translated from the model using common translation patterns for the UML's abstract constructs and by attaching notes and constraints to the resulting code. Rhapsody supports tracing the code resulting from certain model elements and vice versa; changes in the model are instantaneously reflected in the code view and vice versa.

II.5.8.2 Automated Implementation Generation

Automated implementation generation is the core support of Rhapsody. Implementations can be generated not only from structural semantics but also from all the behavioural semantics specified in the UML model. These include system construction, object life-cycle management (construction and destruction), object behaviour as specified by statecharts or activity graphs, as well as methods directly specified in the implementation language [Gery, Harel, Palachi]. Code is generated from model artefacts based on generation rules and predefined parameters for each model metaclass. The Rhapsody implementation generator supports implementation languages such as C++, C and Java, and the component frameworks COM and CORBA.
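The idea of generating an executable implementation from behavioural semantics can be sketched in miniature. The statechart encoding and the emitted class are invented for illustration; this is not Rhapsody's generator or its translation patterns.

```python
# A toy statechart: an initial state and a transition table.
statechart = {"initial": "Off",
              "transitions": {("Off", "power"): "On", ("On", "power"): "Off"}}

def generate(sc):
    # Render the behavioural model as source code: a class whose
    # handle() method consults the transition table.
    return "\n".join([
        "class Generated:",
        f"    state = {sc['initial']!r}",
        "    def handle(self, event):",
        "        self.state = TRANSITIONS.get((self.state, event), self.state)",
    ])

TRANSITIONS = statechart["transitions"]
exec(generate(statechart))   # bring the generated source to life
machine = Generated()
machine.handle("power")
# machine.state == 'On'
```

The same model could be rendered against different target languages or runtime frameworks by swapping the generator, which is the essence of generating implementations from a platform-independent behavioural model.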


II.5.8.3 The Execution Framework

The execution framework is an infrastructure that augments the implementation language to support the modelling language semantics [Gery, Harel, Palachi]. APIs based on the implementation language are provided in the execution framework to support manipulation of the model at the model abstraction level. A set of architectural and mechanistic patterns is used in this framework to support modelling abstractions. The execution framework API serves as an abstraction layer used by the code generator to facilitate model semantics in the context of a particular implementation language [Gery, Harel, Palachi].

II.5.8.4 Model Execution

Model execution is a key enabler for effective model-based development. Rhapsody uses the model-animation technique, a runtime traceability link between the implementation execution and the runtime model [Gery, Harel, Palachi]. This technique eases the mapping between the UML design model and the code implementation. It also enables shorter iterations of model-implement-debug cycles.

II.5.8.5 Model-based Testing

Model-based testing provides the ability to specify and run tests based on the specifications in the model, as well as the ability to pinpoint defects from the test results by visualizing points of failure within the model [Gery, Harel, Palachi]. Rhapsody specifies tests using scenarios specified in the model. The tests are run by driving and monitoring the model execution. Rhapsody provides functions to detect a defect, to fix it, and to verify the fix by rerunning the test.

II.5.9 OptimalJ

OptimalJ from Compuware is an advanced development environment that makes complete use of the Object Management Group (OMG) Model Driven Architecture (MDA) to enable rapid design, development, modification and deployment of J2EE business applications [OptimalJ MDA]. OptimalJ delivers high productivity and consistency throughout the development cycle, while helping to reduce technical complexity for development teams [OptimalJ Product Preview].
A Java application is constructed in OptimalJ through a series of multi-level models and patterns. At the model level, OptimalJ includes the Domain Model (PIM in MDA), the Application Model (PSM in MDA) and the Code Model (Implementation Model in MDA). The Domain Model defines the business domains without application details. There are two parts in domain models: a class model describing the static structure of the application's data, and a service model describing some behavioural aspects. The domain model is based on the Meta Object Facility (MOF) and the Common Warehouse Model (CWM) in UML. The Application Model presents a fairly high-level abstraction of J2EE concepts, and is created from the domain model at the user's behest by OptimalJ. There are three application models: a database model, an EJB model and a web model [Evaluation OptimalJ]. The Code Model defines the generated application code, transformed automatically from the Application Model [OptimalJ Product Preview]. OptimalJ can generate all the relevant Java, JSP, XML files etc. necessary for a J2EE application [Evaluation OptimalJ]. Two types of patterns are distinguished in OptimalJ [OptimalJ Product Preview]:

Transformation patterns: these are used in transforming OptimalJ's Domain Model into the Application Model, and the Application Model into the Code Model.


Functional patterns: these are code templates providing predefined functionality. Development can be sped up and errors reduced through the use of quality-proven models or code in the application.

Three editions of OptimalJ are available [Evaluation OptimalJ]: OptimalJ Developer Edition, OptimalJ Professional Edition and OptimalJ Architecture Edition.

II.6 Research Issues

The MDA has demonstrated the realism of model engineering principles. From here on, we may envision a path of increasing applicability. Several potential advances may be associated with the experimental application of the unification principle ("everything is a model"). Many open research problems may be related to a loose or strict application of this principle. In particular, with respect to supporting interoperability, this might be viewed in general as supporting model transformation within and between different technological spaces.

II.6.1 Applying the unification principle

The more we stick to the basic principle of model unification, the more we may hope to see a very general, sound, regular, long-lasting and widely applicable set of techniques. Many of the examples given below suggest research paths that could be investigated in the near future.

Programs as models. Programs are expressed in a programming language. If we make explicit the correspondence between a grammar and a metamodel, programs may be converted into equivalent MDA-models. In the same way, a Java program may be converted into an XML-model, i.e. into an XML document conformant to the JavaML DTD for example. The problem is thus to be able to implement agile bridging between these different spaces (MDA, XML and Program). This is a matter of representation change, knowing that each technical space has some advantages and drawbacks. For example, the programming technical space illustrated on the left of Figure 20 has the advantage of granting natural executability, while the MDA and XML technical spaces have the advantage of allowing easier interoperation with other specific standards and tools.

Traces as models. A program is a static system because its code does not change with time. Another interesting system is one particular execution of a program, which is a dynamic system. As previously discussed, we can have models of dynamic as well as static systems.
One example of a model of the dynamic execution of a program is a trace. A trace is an interesting example of a model. It may be based on a metamodel and will express the specific events traced (object creation, process activations, method calls, etc.).

Platforms as models. In the traditional MDA approach, the objective is to be able to generate platform specific models (PSMs) from platform independent models (PIMs). In order to carry out this task with a sufficient degree of automation, we need to have precise models of the targeted platforms like the Web, CORBA, DotNet, EJB, etc. Platform description models (PDMs) presently seem to be the missing link of the MDA. Answering the question of what a platform is may be difficult, but until a precise answer is given to this question, the notions of platform dependence and independence (PSMs and PIMs) belong more to the marketing than to the technical and scientific vocabulary.
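To make the idea of traces as models concrete, here is a minimal, hedged sketch in which a trace of one program execution is a model whose elements conform to a small trace metamodel. All event kinds and attribute names are invented for the illustration.

```python
# Illustrative sketch only: a trace metamodel listing event kinds and the
# attributes each event instance must carry. Names are invented for the example.
TRACE_METAMODEL = {
    "ObjectCreation": {"class_name", "object_id"},
    "MethodCall": {"object_id", "method"},
}

def conforms(trace, metamodel):
    """A trace (a model of one program execution) conforms if every event is an
    instance of a metamodel element and carries exactly the required attributes."""
    for event in trace:
        kind = event.get("kind")
        if kind not in metamodel:
            return False
        if set(event) - {"kind"} != metamodel[kind]:
            return False
    return True

# One particular execution of a program, captured as a model (a dynamic system).
trace = [
    {"kind": "ObjectCreation", "class_name": "Order", "object_id": 1},
    {"kind": "MethodCall", "object_id": 1, "method": "total"},
]
```

Here the conformance check plays the role of the relation between the trace model and its trace metamodel.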


There is considerable work to be done to characterize a platform. How is it related to a virtual machine (e.g. the JVM) or to a specific language (e.g. Java)? How is it related to a general implementation framework (e.g. DotNet or EJB), or even to a class library? How can we capture the notion of abstraction between two platforms, one built on top of the other? The notion of a platform is relative because, for example, to the platform builder the platform will look like a business model. One may also consider that there are different degrees of platform independence, but here again no precise characterization of this may be seriously established before we have an initial definition of the concept of platform. One possible hint for answering this question would be to use insights from the work of M. Jackson. The idea of considering on one side the world and on the other side the machine, and interpreting the intersection as shared phenomena, bears some similarities with the classical Y development cycle, revisited in MDA style. The lowest part of this figure suggests that the production of a PSM may be more a model-weaving than a direct model-transformation operation.

Legacy as models. There is more than forward engineering in the MDA. Platforms of the past (Cobol, PL/1, Pascal, CICS, etc.) should be considered as well as platforms of the present or platforms of the future. The extraction of MDA-models from legacy systems is more difficult, but looks like one of the great challenges of the future. There is much more to legacy extraction than just conversion from old programming languages.

Model elements as models. In many cases we have the situation where we need models with elements representing other models, for example to describe the software assets of a company. Of course each such element has a "type" corresponding to its own metamodel. Model and metamodel portfolio management is a subject of high practical interest that has still to be conceptually investigated.
In [1] the notion of a "megamodel" has been introduced to deal with models containing metadata on several models, metamodels, operations, services, tools, and other model components.

Transformations as models. As a special example of the search for uniformity, let us look in more detail at the example of transformations as models. We shall elaborate more on this to illustrate our main point. As in other technical spaces, the MDA community realized there was a need for some kind of unified transformation language, or at least a family of such languages. This idea was first suggested by R. Lemesle [2] in his PhD thesis, followed by the MTrans proposal of M. Peltier [3] and later by the MOF/QVT request for proposals [4]. A transformation generates a target model Mb from a source model Ma (Figure 22). Applying the model unification principle, the transformation itself should be a model. From that point, several conclusions can be drawn, some more important than others. Since models often have a graphical presentation, this provides a natural way to graphically depict transformations. A model conforms to a metamodel: the source and target models conform to metamodels MMa and MMb. Similarly, the transformation Mt: Ma->Mb (i.e. the transformation program itself) conforms to a metamodel MMt defining the common model transformation language. One nice consequence of this regular organization is that we may envision the possibility of higher-level transformations, i.e. transformations taking other transformations as input and/or producing transformations as output.
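The following sketch illustrates the point (the metamodels, the rule format and all element names are invented, not taken from any standard): the transformation Mt is itself plain model data, read by a generic engine that treats the source, target and transformation models uniformly.

```python
# Source model Ma, whose elements conform to an (implicit) "Class" metamodel MMa.
Ma = [{"type": "Class", "name": "Customer"},
      {"type": "Class", "name": "Order"}]

# The transformation model Mt: declarative rules conforming to a tiny
# transformation metamodel MMt (each rule maps a source type to a target type).
Mt = [{"from": "Class", "to": "Table", "copy": ["name"]}]

def apply_transformation(Mt, Ma):
    """Generic engine producing the target model Mb from Ma as directed by Mt."""
    Mb = []
    for rule in Mt:
        for elem in Ma:
            if elem["type"] == rule["from"]:
                target = {"type": rule["to"]}
                for attr in rule["copy"]:
                    target[attr] = elem[attr]
                Mb.append(target)
    return Mb

Mb = apply_transformation(Mt, Ma)
```

Because Mt is itself a model, a higher-order transformation is simply the same engine invoked with a transformation model as its source.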


This regular organization does not entail a significant implementation overhead. As a matter of fact, the transformation engine is installed on a model and metamodel repository, uniformly considering the input, output and transformation models. When we say that the result of a transformation is to generate a model Mb from a model Ma, we have somewhat oversimplified the situation. In fact we also need to generate a traceability model Mtr that can be used together with the target model Mb for various purposes. Obviously this traceability model also conforms to a specific metamodel.
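A hedged sketch of this point (all names invented): an engine that emits, alongside the target model Mb, a traceability model Mtr whose elements link source and target elements and conform to their own small traceability metamodel.

```python
def transform_with_trace(Ma):
    """Transform a source model Ma, producing the target model Mb and a
    traceability model Mtr recording which source element produced which
    target element, and under which rule."""
    Mb, Mtr = [], []
    for i, src in enumerate(Ma):
        tgt = {"type": "Table", "name": src["name"]}
        Mb.append(tgt)
        Mtr.append({"source": i, "target": len(Mb) - 1, "rule": "Class2Table"})
    return Mb, Mtr

Ma = [{"type": "Class", "name": "Customer"}]
Mb, Mtr = transform_with_trace(Ma)
```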

Perhaps the domain of model transformation is presently the most illustrative of the benefits that could be reaped from a general application of principle [P2]. In the first experiments with the ATL language [5], some of these advantages have been noticed. The idea that a common language could be used for any kind of input or output metamodel is now established. The fact that all input, output and traceability models and metamodels are treated alike allows the transformation virtual machine to be based on a very well defined API for accessing the various elements of these models and metamodels.

Metamodels as models. There is no reason we could not apply transformations to metamodels themselves, just as we apply them to models. It is even possible to consider special operations taking a model as input and promoting it to a metamodel. One example of such a tool is the UML2MOF component available in the MDR/NetBeans space.

Verification as models. Many operations on models may be seen as special cases of transformations. A refactoring, an improvement or a verification could be viewed as regular operations on models. Verification, for example, could take a model and a set of verification rules as its input.

Components as models. Since there are so many different kinds of models, we need to consider them as uniformly as possible for operations like storage, retrieval, etc. This has given rise to the idea of general model components [1]. This notion of model component is different from the classical notion of "object components", à la EJB for example.

This list is obviously very incomplete. When we consider model engineering with this unified vision, many well-known situations may be integrated into a more general framework. To take one additional example, the seminal work of R.
Floyd ("Assigning meanings to programs", [6]) that founded the research field of programming axiomatics may be viewed as "decorating" a flowchart model with an axioms model. This may lead us first to consider decorations as models, then to understand the nature of the binding between the flowchart model and the axioms model, this binding being itself some kind of correspondence model. Finally, these considerations may lead us to generalize R. Floyd's approach and to add to the research agenda a new item on "Assigning meaning to models".

II.6.2 Some open research issues in model engineering

Besides the previous open problems directly derived from the basic unification principle, some additional investigations will also be quite important in the field. Most of them seek to achieve a better understanding of the two basic model relationships (representation and conformance) and their possible extensions.

Correspondence between system elements and model elements. We still have a lot of progress to make in understanding the representation relation between a system and a model. If we have a


discrete system composed of system elements, then the model, represented as a set of model elements, may be considered as a subset of the first one. But this probably does not capture all aspects of the reality, just as viewing a class as the set of its instances does not capture the totality of the meaning of the instanceOf relation.

Figure 15 The Sowa meaning triangle

One troubling issue in a theory of modeling is that the same thing may be considered either as a system or as a model. For example, an abacus may be considered as a physical system made of beams, beads and rods, but at the same time it is a model of the business systems and transactions for many shopkeepers in Asia. There are several theories proposing an interpretation of this situation. One of them is related to semiotics [7]. In Figure 15, a cat is represented by a symbol in the Sowa meaning triangle, just as a purchased article's price may be represented by a set of beads in a Chinese abacus. Another possible interpretation could be to consider an explicit casting conversion for viewing a model as a system. We mentioned this earlier when talking about deriving a 1:100 000 map from a 1:50 000 map. This explicit "asSystem" operation, applied to model M, would allow a new model M' to be extracted in turn from model M. In current practice this is a frequently observed operation. The question is how to relate this to a normal model transformation operation. When the meaning of the two basic model relations has been settled, additional secondary relations may be considered to extend the scope of this unification principle. The first candidate, already mentioned in places in this paper, would be an extension relation between metamodels. For the time being, the ad-hoc mechanism of profiles has been used for this purpose, but it probably brings more problems than solutions. Several important questions should be answered before a precise metamodel extension mechanism can be defined, and this device should have very well specified properties. A metamodel should be able to extend one or more other metamodels by adding new elements. The extension mechanism should be completely defined at level M3 (i.e. at the metametamodel level) in order to be truly metamodel independent.
The added elements should be related to elements in the base metamodel, or to other added elements, by a precisely defined relation. We have not yet seen any complete proposal for a general, precisely defined metamodel extension mechanism, although this issue is high on the list of issues urgently needing a solution. As already mentioned, the solutions to many problems need a prior definition of the exact status of the relations between elements of different metamodels (inter-model relations), between elements of the same metamodel (intra-model relations), and between a model and elements of a model. The same also applies to relations between different metamodels, or even relations between models and


metamodels. Many operations, of different arities, on models and metamodels have yet to be identified. Model transformation is only one of them. Among other important operations that need to be specified, one may cite for example model weaving, model difference (diff on models), model metrication (establishing measures on models), metamodel alignment, etc. General operations may be applied to most models, but specific operations may be applied to specific kinds of models only. One example of a general operation that could be applied to any kind of model is serialization (in XMI format). One example of an operation that could be applied only to a transformation model is preliminary verification with respect to a pair of metamodels (source and target). Such a preliminary verification of a transformation model could be checked before applying the transformation to a specific model. Strict definition of these model operation signatures may help to precisely identify the functionalities of various model engineering tools such as CASE tools, and to separate automatic operations (e.g. transformations) from manual ones (e.g. model capture, model browsing, etc.). This notion of model operation signatures lends itself easily to transformation into a system of model-based Web services, going as far as service discovery in a general model engineering framework. Similarly to the normalization properties defined for relational database systems, suitable properties could be associated with metamodels. Perhaps the best starting point for defining these would be the proposals made in ontology engineering by N. Guarino [8]. Another set of issues is related to the evolution (versioning) of metamodels. This is not something new, but results previously obtained in other domains would have to be adapted to model engineering. It is unlikely that this problem can be confined to the definition and implementation of model and metamodel repositories. The model engineering perimeter is constantly broadening.
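As a hedged illustration of one such generic operation, a "diff on models" might look as follows; the element identifiers and attributes are invented for the example.

```python
def model_diff(m_old, m_new):
    """Compare two models, each a dict mapping element id -> attributes,
    and report added, removed and changed elements."""
    added = {k: m_new[k] for k in m_new.keys() - m_old.keys()}
    removed = {k: m_old[k] for k in m_old.keys() - m_new.keys()}
    changed = {k: (m_old[k], m_new[k])
               for k in m_old.keys() & m_new.keys() if m_old[k] != m_new[k]}
    return {"added": added, "removed": removed, "changed": changed}

m1 = {"c1": {"name": "Customer"}, "c2": {"name": "Order"}}
m2 = {"c1": {"name": "Client"}, "c3": {"name": "Invoice"}}
diff = model_diff(m1, m2)
```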
Behind each file format, each Visio stencil, each tool API, etc., there are metamodels hiding that could be usefully exposed and made explicit. Model engineering is not only directed towards code generation, but may also be used to facilitate bridging between different tools. By basing this "bridge engineering" on a higher level of abstraction than simple XML exchanges (by way of metamodels), it may become possible to achieve an important gain in interoperability. There are even more speculative uses of model engineering in future information systems. Since each model brings its metamodel with it, automatic application adaptation becomes possible, opening the way for plug-and-play model adaptation. A typical application may also be built on some code stub and an associated metamodel stub. Extensibility of this application may be obtained with an extension of the stub metamodel. For example, it could be possible to deliver such application extensibility with {p,x} pairs, where p is a plug-in (in the sense of an Eclipse plug-in) and x is the corresponding extension to the stub metamodel.

II.6.3 Applications, consequences and perspectives

When looking at several remarks that have been proposed in this section, a normal reaction is to say that these are in no way innovative because they correspond to old recognized practices. This is absolutely true. The applicability of model engineering very often meets implicit good practices that have previously been applied by skilled personnel in specific areas. The contribution of model engineering is twofold. First, it may broaden the applicability scope by bringing the benefits of these good practices to a larger community, mainly through partial automation. Second, it improves the good practices by making them explicit, allowing a better understanding and generalization of them.


One frequent objection to MDA is that this approach is related to the CASE and metaCASE tools that were proposed in the 1980s and that were not able to prove their wide applicability at the time. Why would a similar approach work today if it was not able to become mainstream earlier? Better technological support for graphical systems is not a satisfactory answer, even if this may help. A much stronger argument is that the model unification principle brings a completely new way to deal with the architecture of such tools and frameworks. The subject of model unification does not only concern the strict boundaries of software engineering. Presenting the model management approach in the database field, P. Bernstein states:

By model we mean a complex structure that represents a design artifact, such as a relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, complex document, or software configuration. Many uses of models involve managing changes in models and transformations of data from one model into another. These uses require an explicit representation of mappings between models. We propose to make database systems easier to use for these applications by making model and model mapping first-class objects with special operations that simplify their use. We call this capability model management. (P.A. Bernstein, A.L. Levy & R.A. Pottinger, MSR-TR-2000-53)

What has been said of model engineering in this section may apply to other technological spaces as well. For example, a source program written in the Java language could be called a Java-model and an XML document could be called an XML-model. Prefixing the word model with the technical space to which it pertains is more than a notational convenience. Usually a technical space [10] is based on a uniform representation system that can be described by a metametamodel.
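In the XML technical space, for instance, a document can be well formed (parseable at all) without being valid with respect to a particular DTD. A minimal sketch, using a toy rule set in place of a real DTD (the element names are invented):

```python
import xml.etree.ElementTree as ET

def well_formed(text):
    """Well-formedness: compatibility with XML itself (the metametamodel level)."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

def valid(text, allowed_children):
    """Validity wrt a toy 'DTD': each element may only contain listed children."""
    def check(elem):
        return all(child.tag in allowed_children.get(elem.tag, set())
                   and check(child) for child in elem)
    return check(ET.fromstring(text))

doc = "<order><item/></order>"
dtd = {"order": {"item"}}
```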
The XML space is based on trees, but these trees have properties different from syntax-oriented trees. An XML document may be well formed (i.e. compatible with the XML metametamodel), but also valid with respect to a particular DTD. This is similar to a model being well formed (with respect to the MOF) or valid with respect to a particular metamodel (e.g. the UML metamodel). Looking at the similar architecture of various technical spaces may reveal synergies between them (for example Grammarware [9] or the semantic Web) and even operational bridges allowing one to convert an X-model into a Y-model, X and Y being different technical spaces.

References for MDD research issues:
1. Bézivin, J., Gérard, S., Muller, P.A., Rioux, L. MDA Components: Challenges and Opportunities. Metamodelling for MDA Workshop, York, 2003. http://www.sciences.univ-nantes.fr/lina/atl/publications/
2. Lemesle, R. Transformation rules based on metamodeling. EDOC'98, San Diego, 3-5 November 1998. http://www.sciences.univ-nantes.fr/lina/atl/publications/
3. Peltier, M., Bézivin, J., Ziserman, F. On levels of model transformation. XML Europe 2000, Paris, proceedings pages 117. http://www.sciences.univ-nantes.fr/info/lrsg/Pages_perso/MP/pdf/xml_europe_2000.pdf
4. Object Management Group. OMG/RFP/QVT MOF 2.0 Query/Views/Transformations RFP. October 2002. http://www.omg.org/docs/ad/02-04-10.pdf
5. Allilaire, F., Idrissi, T. ADT: Eclipse development tools for ATL. EWMDA-2, Canterbury, England, 2004.
6. Floyd, R.W. Assigning Meaning to Programs. Proc. Symposium on Applied Mathematics, American Mathematical Society, 1967, Vol. 1, pp. 19-32.
7. Sowa, J. Ontology, Metadata, and Semiotics. ICCS'2000, Darmstadt, Germany, August 14, 2000. Published in B. Ganter & G. W. Mineau, eds., Conceptual Structures: Logical, Linguistic, and Computational Issues, Lecture Notes in AI #1867, Springer-Verlag, Berlin, 2000, pp. 55-81. http://www.bestweb.net/~sowa/peirce/ontometa.htm
8. Guarino, N., Welty, C. Towards a Methodology for Ontology-based MDD. In Bézivin, J. and Ernst, J. (eds.), First International Workshop on MDD, Nice, France, June 13, 2000. Available from http://www.metamodel.com/IWME00/
9. Klint, P., Lämmel, R., Verhoef, C. Towards an engineering discipline for grammarware. Working draft paper, July 2003. http://www.cs.vu.nl/grammarware/
10. Kurtev, I., Bézivin, J., Aksit, M. Technical Spaces: An Initial Appraisal. CoopIS, DOA'2002 Federated Conferences, Industrial track, Irvine, 2002. http://www.sciences.univ-nantes.fr/lina/atl/publications/


III Service Oriented Computing


III.1 Introduction

The advent of the Service Oriented paradigm for Enterprise Application integration has stimulated great expectations among the developer community. Service Oriented Architectures (SOA) emerged as an evolutionary step from Object and Component based approaches, with the promise to overcome the deficiencies due to which these solutions have fallen short in the past. Still, the variety and diversity of implementations and interpretations of SOA causes controversy and scepticism among system architects and developers. Currently there seems to be no single, consistent agreement on how SOA should be materialized. Moreover, the vast number of emerging standards, which often have overlapping applicability, makes it more difficult to understand and utilize the potential of these technologies. This part provides a snapshot of current trends, standards and implementations of Service Oriented technology and pinpoints some interoperability issues. Toward this, it focuses on three major trends in Service Oriented development, namely Web Services, Grid Services and peer-to-peer (P2P) services. These three technologies represent the main body of work achieved in recent years, covering different approaches to distributed programming. Web Services build upon XML standards to provide a coherent platform for building loosely coupled distributed applications. Grid Services, on the other hand, originate from the requirement of Grid Computing to standardize the interface mechanism for accessing distributed computational (grid) resources. P2P computing, finally, although it has had many successes so far, still lacks consensus on how applications should be built and what semantics they should support, thus rendering the notion of a P2P service the vaguest of the three.
In this State of the Art we present recent advances in all three technologies, including current standards and the main players in the standardization process, development tools, methodologies, sample applications and use cases. Special focus has been given to interoperability issues. Interoperability is considered both in terms of intra- and inter-paradigm integration. Finally, we provide various resources (bibliography, conferences, etc.) where the interested reader may find more information. The document is structured as follows: Chapter 1 provides an introduction to the concept of Service Oriented Architecture and the fundamentals of Web Services, Grid Services and P2P Services. Chapter 2 lists various applications and successful case studies of these paradigms. Chapter 3 deals with the technical aspects of the technologies; it mainly answers questions such as how services are defined, queried, accessed, executed and composed. In Chapter 4 we list a number of the most important research projects which have been completed or are still in progress in the broad area of e-Services. Chapter 5 presents tools and platforms for service development. The standardization bodies that steer the standards development process in this area are described in part VIII. Interoperability issues are analyzed in Chapter 6. Chapter 7 summarizes the results of this State of the Art. Finally, the interested reader will find further resources in part VII, and the bibliography references in part IX.


III.1.1 What is Service Oriented Architecture (SOA)

According to W3C, a Service Oriented Architecture (SOA) specifies a set of components whose interfaces can be described, published, discovered and invoked over a network. SOA aims to promote software development in a way that leverages the construction of dynamic systems, which can easily adapt to volatile environments and be easily maintained. The decoupling of a system's constituent parts enables the re-configuration of system components according to the end-users' needs and the system's environment. Furthermore, the use of widely accepted standards and protocols that are based on XML and operate above Internet standards (HTTP, SMTP, etc.) enhances interoperability. Service Oriented development emerged as an evolution of component based development, and among its goals is to support the loose coupling of system parts far better than existing component based technologies. The ramifications of Service Oriented development can be observed both at the system and at the business level. Having systems composed of services offered by various service providers provides the basis for supporting new business models, such as virtual organizations.

III.1.1.1 Service Oriented Model

Any service-oriented environment is expected to support several basic activities [TP 2002]:

1. Service creation
2. Service description
3. Service publishing to Intranet or Internet repositories for potential users to locate
4. Service discovery by potential users
5. Service invocation, binding
6. Service unpublishing in case it is no longer available or needed, or in case it has to be updated to satisfy new requirements.

In addition to these basic activities there are some other activities that need to take place in order to take full advantage of the Service Oriented Architecture. Such activities include service composition, management and monitoring, billing and security. However, we consider that the service model requires at least the following basic activities: describe, publish/unpublish/update, discover and invoke/bind, and involves three roles: service provider, service requester and service broker [WSAO]. These roles and basic activities are illustrated in Figure 16.
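The roles and basic activities can also be sketched in code. The following minimal in-memory broker is an invented illustration (the service name, description and returned values are all made up); real brokers such as UDDI registries are of course far richer.

```python
class Broker:
    """Searchable repository of service descriptions (the broker role)."""

    def __init__(self):
        self.registry = {}  # service name -> (description, endpoint)

    def publish(self, name, description, endpoint):
        self.registry[name] = (description, endpoint)

    def unpublish(self, name):
        self.registry.pop(name, None)

    def discover(self, keyword):
        return [n for n, (desc, _) in self.registry.items() if keyword in desc]

    def bind(self, name):
        return self.registry[name][1]  # binding information: here, the endpoint

# Provider side: an implementation exposed as a service and published.
def quote_service(symbol):
    return {"symbol": symbol, "price": 42.0}

broker = Broker()
broker.publish("StockQuote", "returns stock price quotes", quote_service)

# Requester side: discover the service via the broker, bind, then invoke.
matches = broker.discover("quotes")
result = broker.bind(matches[0])("ACME")
```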


Figure 16 Service model

Service provider: a service provider is the party that provides software applications for specific needs as services. Service providers publish, unpublish and update their services so that they are available on the Internet. From a business perspective, this is the owner of the service. From an architectural perspective, this is the platform that holds the implementation of the service.

Service requester: a requester is the party that has a need that can be fulfilled by a service available on the Internet. From a business perspective, this is the business that requires certain functions to be fulfilled. From an architectural perspective, this is the application that is looking for and invoking a service. A requester could be a human user accessing the service through a desktop or a wireless browser; it could be an application program; or it could be another service. A requester finds the required services via a service broker and binds to services via the service provider.

Service broker: this party provides a searchable repository of service descriptions where service providers publish their services and service requesters find services and obtain binding information for them. It is like the telephone yellow pages. Examples of service brokers are UDDI (Universal Description, Discovery, Integration) [IBM UDDI] [Microsoft UDDI] and XMethods [XMethods].

III.1.1.2 Extended Service Model

The basic set of operations does not suffice for the development of a system within a business context. The construction of a complex system that supports business processes requires enhancements of the basic set of operations that give added value to the basic model. These enhancements include mechanisms that address the composition of services into more complex ones, and mechanisms that deal with issues like transactions and security. Furthermore, there is also a need for mechanisms that support quality of service and semantic aspects.
Higher level mechanisms that can handle issues such as monitoring and contracting are also required. This set of extensions can be organized into layers, settled one above the other. Such


an organization scheme was specified by Papazoglou and Georgakopoulos in [PG 2003] and is presented in Figure 17.

Figure 17: Extended Service Oriented Architecture
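Before turning to these extensions, the basic publish-find-bind cycle among the three roles of Figure 16 can be made concrete with a small, self-contained sketch. All names here (`Broker`, `publish`, `find`, the example service and endpoint) are hypothetical illustrations, not part of any standard registry API such as UDDI:

```python
# Minimal sketch of the publish-find-bind cycle (hypothetical names,
# not a real registry API such as UDDI).

class Broker:
    """Service broker: a searchable repository of service descriptions."""
    def __init__(self):
        self._registry = {}

    def publish(self, name, description, endpoint):
        # Provider side: publish a service description with binding information.
        self._registry[name] = {"description": description, "endpoint": endpoint}

    def unpublish(self, name):
        self._registry.pop(name, None)

    def find(self, keyword):
        # Requester side: search descriptions and obtain binding information.
        return [(n, e["endpoint"]) for n, e in self._registry.items()
                if keyword in e["description"]]

# A provider publishes a service.
broker = Broker()
broker.publish("QuoteService", "returns stock quotes", "http://provider.example/quotes")

# A requester finds the service, then binds (here, binding is simply
# using the endpoint returned by the broker).
matches = broker.find("quotes")
name, endpoint = matches[0]
```

The sketch deliberately ignores everything the extended model adds (composition, transactions, security, quality of service); it only shows the division of responsibility between the three roles.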

III.1.1.3 Benefits of Service Orientation

One might ask why we should focus on services for architecting the enterprise and its IT support. What was wrong with object orientation, component-based development, and business process engineering? Two of the major benefits are:

- The service concept applies equally well to the business as it does to software applications. Add to that industry-wide support for Web services standards and, for the first time, the skill sets of the business analyst and the application developer converge. The analyst is able to specify service interface definitions and business processes, which can be used directly by the application developer as input for the implementation definition.

- Service orientation offers a level of flexibility far exceeding that of Component Based Development (CBD). A component is built or bought once and integrated into an organisation's application architecture. A service is invoked dynamically when required, allowing providers to continuously improve their service and users to select the best available service at any one time. The focus on business processes in Business Process Engineering (BPE) may have given a sense of flexibility, but IT systems were never process-oriented. A change in the business processes of an organisation could require months to implement in the ERP systems supporting those processes.

Other benefits are listed below:

- Services reduce complexity by encapsulation: a service may be the aggregation of a number of other services. What is important is the type of behaviour a service provides, not how it is implemented. Encapsulation is key to coping with complexity, flexibility, scalability, and extensibility.


- Services provide the units of business that represent value propositions within a value chain or within business processes; these services are a natural starting point for flexible outsourcing strategies.

- Services promote interoperability by minimizing the requirements for shared understanding: a service description and a protocol of collaboration and negotiation are the only requirements for shared understanding between a service provider and a service user.

- Services enable interoperability of legacy applications: by allowing legacy applications to be exposed as services, a service-oriented architecture greatly facilitates seamless integration between heterogeneous systems. New services can be created and dynamically published and discovered without disrupting the existing environment.

Service Oriented Architecture is mainly instantiated by Web Services. However, Grid and P2P services also adhere to the same model. Descriptions of these technologies and of their supporting mechanisms and specialties can be found in the following.

III.1.2 Web Services

There are many definitions of what constitutes a web service. The UDDI consortium [UDDI] defines web services as "self-contained, modular business applications that have open, Internet-oriented, standards-based interfaces", whereas the W3C specifies that web services are applications identified by a URI, whose interfaces and bindings are capable of being defined, described and discovered as XML artefacts. A Web service supports direct interactions with other software agents using XML-based messages exchanged via Internet-based protocols [W3C 2004].

Web services are an accurate instantiation of the service-oriented model. They adhere to the set of roles and operations specified by the service-oriented model, and they have also managed to establish a standardized protocol stack. SOAP [SOAP], WSDL [WSDL] and UDDI [UDDI] are the most well-known standards used for the execution of the basic set of operations, i.e. bind, publish and find.

Web services aim to support enterprise-application integration (EAI). They enable the integration and interoperability of heterogeneous systems and components which may be geographically dispersed. In practice, web services have often been regarded as web interfaces to components. However, web services aren't just interfaces to components: web services intend to expose higher-level business processes, whereas components tend to expose lower-level business objects and processes. Nevertheless, the level of granularity addressed by each technology isn't the only difference between web services and component technologies such as CORBA [OMG 2001], EJBs [EJB2.1 2003] and COM+ [COM+]. A list of differences between web services and components can be found in [Szyp 2003], and between web services and CORBA components in [CKMTW 2003][GKS 2002].

The community that has been built around web services has been actively promoting the service-oriented approach and producing open and Internet-based protocols and standards. The web services community is also steering the evolution of other protocol proposals which tackle aspects


specified by the extended service-oriented model, e.g. composition of services, transactions, security, trust, etc.

III.1.3 P2P Services

The term peer-to-peer refers to a class of systems and applications that employ distributed resources to perform a function in a decentralized manner. The resources encompass computing power, data (storage and content), network bandwidth, and presence (computers, humans, and other resources). The critical function can be: distributed computing, data/content sharing, communication and collaboration, or platform services. Decentralization may apply to algorithms, data, and meta-data, or to all of them. This does not preclude retaining centralization in some parts of the systems and applications. Typical P2P systems reside on the edge of the Internet or in ad-hoc networks. P2P services can be defined as resources (data, computing power, network bandwidth, presence and applications) that are shared among the peers of a p2p network by direct exchange between systems.
The goals of p2p services include:
- Cost sharing/reduction, by using existing infrastructure and by eliminating or distributing the maintenance costs
- Resource aggregation (improved performance) and interoperability: by aggregating resources through low-cost interoperability, the whole is made greater than the sum of its parts. Interoperability is also an important requirement for the aggregation of diverse resources
- Improved scalability/reliability, enabled by new algorithms in the area of resource discovery and search
- Increased autonomy, by requiring the local node to work on behalf of its user
- Anonymity/privacy, by incorporating these requirements in the design and algorithms of P2P systems and applications, and by allowing peers a greater degree of autonomous control over their data and resources
- Dynamism, by allowing resources, such as compute nodes, to enter and leave the system continuously
- Enabling ad-hoc communication and collaboration, by taking into account changes in the group of participants

The p2p model can be pure or hybrid. In a pure model, there is no centralized server. In the hybrid model, a peer first communicates with a server, e.g. to obtain the location/identity of another peer, and then communicates directly with that peer. Pure p2p systems can be unstructured (Gnutella, Freenet) or structured (Chord, CAN, Pastry). P2P systems can be classified into systems for distributed computing (e.g., SETI@home, Avaki, Entropia), file sharing (e.g., Napster, Gnutella, Freenet, Publius, Free Haven), collaboration (e.g., Magi, Groove, Jabber), and platforms (e.g., JXTA and .NET My Services). P2P systems for e-commerce are an emerging class of p2p systems (e.g. Project Venezia-Gondola [Gao 2004]).
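The difference between the hybrid and pure models can be sketched in a few lines. In the hybrid case a central index answers lookups while transfers stay peer-to-peer (as in Napster); in the pure case there is no server and the query itself travels among peers. The peer names and files below are invented, and the pure lookup simply scans all peers as a stand-in for a flooding or DHT-based search:

```python
# Sketch contrasting hybrid and pure p2p discovery (illustrative data only).

peers = {
    "peer-a": {"song.mp3"},
    "peer-b": {"paper.pdf"},
    "peer-c": {"song.mp3", "notes.txt"},
}

# Hybrid model: a central index (like Napster's server) maps content to a
# peer identity; the file transfer itself then happens peer-to-peer.
central_index = {f: p for p, files in peers.items() for f in files}

def hybrid_lookup(filename):
    # One round-trip to the server, then contact the returned peer directly.
    return central_index.get(filename)

# Pure model: no server; the query propagates among peers. Scanning every
# peer here stands in for a flooding (Gnutella) or DHT (Chord) search.
def pure_lookup(filename):
    return [p for p, files in peers.items() if filename in files]
```

The hybrid model buys cheap lookups at the price of a central point of failure; the pure model removes that dependency but must solve discovery in a fully distributed way.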


III.1.4 Grid Services

The term Grid refers to a system that is concerned with the integration, virtualization, and management of services and resources in a distributed, heterogeneous environment that supports collections of users and resources (virtual organizations) across traditional administrative and organizational domains (real organizations). The notion of the Grid emerged during the mid 90s, when the increase of network bandwidth made it feasible to connect various computers together and use them as if they were one single meta-computer. The first applications that were developed and took advantage of Grid ideas were high-performance scientific applications with increased demands on CPU time and storage capacity.

In practice, a Grid is a software infrastructure that handles the details of resource sharing among distributed environments that reside under different administrative domains. Traditionally, Grid resources are accessed using well-defined, standard protocols and are governed by local or distributed policies. Practice has proven that programming in a Grid environment is a tedious and ad-hoc process. No commonly agreed programming model exists, and in any case the diversity of applications that can be developed makes it difficult to develop such a general-purpose abstract model. Recently there has been a movement from API/protocol-oriented Grid programming to a service-oriented approach. In June 2002 the paper "The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration", co-authored by the two Grid computing pioneers Ian Foster and Carl Kesselman, prescribed a Grid architecture where shared resources are described and accessed using open service-based interfaces [FKNT 2002]. This architecture, known as OGSA (Open Grid Services Architecture), was initially materialized by the OGSI (Open Grid Services Infrastructure).
OGSI adopted Web Services technology and extended it in the areas where it was considered inadequate for developing Grid applications, namely stateful and transient interactions, life-cycle management and notifications [OGSA][OGSI]. However, the introduction of OGSI raised controversy and anxiety in the Web Services community. OGSI proposed a tightly-coupled, overloaded and object-oriented extension of Web Services, named Grid Services, that conflicted with the rest of the Web Services specifications and tools. The criticism that OGSI received led to the so-called refactoring of the standards and the introduction of the Web Services Resource Framework (WSRF) [WSRF]. WSRF defines a set of five Web Services specifications that, together with the WS-Notification specification, provide functionality similar to OGSI, but in a way that is compatible with the existing WS tools and in accordance with the common WS specification definition philosophy. In the context of WSRF, a Grid Service has been defined informally as a Web service that is designed to operate in a Grid environment, and meets the requirements of the Grid(s) in which it participates. Grid resources are exposed and controlled by Web Services using the so-called implied-resource pattern. WSRF specifications facilitate the implicit association between Web Services and Grid resources, and define their life-cycle, the static information that can be associated with them, the grouping of multiple resources, and how clients can get notifications on resource state changes.
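The idea behind the implied-resource pattern can be illustrated with a small sketch: the service itself stays stateless, while each message carries a resource identifier (in WSRF, conveyed in an endpoint reference) that implicitly selects the stateful WS-Resource the message acts on. All names below (`CounterService`, `create_resource`, `increment`) are hypothetical, not WSRF API names:

```python
# Sketch of the implied-resource pattern (hypothetical names): the service is
# stateless; state lives in resources selected by an id carried in each message.

import uuid

class CounterService:
    def __init__(self):
        self._resources = {}               # resource-id -> resource state

    def create_resource(self):
        # Factory operation: creates a stateful resource and returns a
        # reference to it (the role an endpoint reference plays in WSRF).
        rid = str(uuid.uuid4())
        self._resources[rid] = {"count": 0}
        return rid

    def increment(self, rid):
        # The resource id in the message implicitly selects the state acted on;
        # the operation signature itself stays the same for every resource.
        self._resources[rid]["count"] += 1
        return self._resources[rid]["count"]

    def destroy(self, rid):
        # Explicit lifetime management, as in WS-ResourceLifetime.
        del self._resources[rid]

service = CounterService()
r1 = service.create_resource()
r2 = service.create_resource()
service.increment(r1)
service.increment(r1)
service.increment(r2)
```

Two clients holding different resource ids see independent state through one and the same service interface, which is exactly the separation WSRF formalizes.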


III.1.5 Relation between P2P, Grid and Web Services

III.1.5.1 P2P services and Web Services

Both are designed to enable loosely coupled systems and place a heavy emphasis on distributed computing, as they leverage SOA. They also aim to become a common stack for publishing and discovery across networks. The issues being discussed in both fields are similar: trust, performance, security, self-describing data, etc. Innovation in one area will boost the other. Web services alleviate complexities of p2p computing and vice versa, as follows [SS 2002]:
- web services can be used for the communication between different p2p systems
- using a p2p architecture, the centralized model of a single UDDI registry used by web services can be transformed into a decentralized one
- web services can be used for exposing business processes in p2p systems

There are a lot of potential applications with p2p and web services working together, e.g. p2p collaborative applications, B2B systems built using web services, or search engines based on p2p and web services technologies.

III.1.5.2 P2P services and Grid services

P2P services support the more general (beyond client-server) sharing modalities and computational structures that characterize grid services. One difference is that p2p developers have so far focused entirely on vertically integrated solutions, rather than seeking to define common protocols that would allow for shared infrastructure and interoperability. Another difference is that the forms of sharing targeted by various p2p applications are quite limited, for example file sharing with no access control, and computational sharing with a centralized server. As these p2p applications become more sophisticated and the need for interoperability becomes clearer, we will see a strong convergence of interests between p2p and grid services.
For example, single sign-on, delegation, and authorization technologies that are used by grid services become important when various p2p services must interoperate, and the policies that govern access to individual resources become more complex [FKT 2001].

III.1.5.3 Grid and Web services

In order to understand the relationship of Grid and Web Services, one has to distinguish between the pre-WSRF OGSI approach and the recent WSRF proposal. To begin with, Web Services provide a paradigm for developing Service Oriented Architectures, adopting a message-centric approach to implement loosely coupled application interactions. On the surface, OGSI, WSRF and Web Services expose similarities, since all three of them are based on WSDL and SOAP to expose distributed capabilities and functionality. Nevertheless, a closer investigation reveals important semantic differences. OGSI follows a more object-oriented approach, providing an all-in-one solution for building Grid Services that incorporates capabilities like service composition, notification and state manipulation within a single specification. An OGSI Grid Service is tightly mapped to the computing resource that it exposes. OGSI defines the standard interfaces to access a resource's functionality and state.


These characteristics caused a reaction from the Web Services community, since they diverged from the basic WS principles and semantics. WSRF, on the other hand, proposes a solution better aligned with common Web Services practices by providing separate specifications for each of the core capabilities of a Grid Service (stateful interactions, notification, grouping, etc.). Yet the implied-resource pattern that WSRF introduced prescribes a resource-centric approach for building applications. A WSRF Grid Service is implicitly bound to one or more Grid resources. Applications are built around a resource's state and lifecycle, and not only around the functionality that this resource may provide. This introduction of the resource factor has once again raised scepticism in the Web Services community regarding its necessity and impact on SOA.


III.2 Applications / Case Studies, Example Scenarios Addressed by SOA

The main purpose of this section is to present applications, case studies and example scenarios addressed by SOA. These are divided into three categories: web services example applications, p2p example applications and grid example applications. These applications provide the basis for the identification of interoperability issues, which are discussed in section III.7.

III.2.1 Web Services example applications

III.2.1.1 Sandvik

Sandvik is a world-wide engineering company represented in 130 countries. The main business of the company is to produce industrial tools, advanced materials (such as high-alloy steels) and mining equipment. Sandvik's IT infrastructure must be able to support the world-wide subsidiaries, as well as their sub-suppliers. Due to the heterogeneous IT environment, the need for standards-based system integration is high. The following points summarise Sandvik's need for system integration:

- An organisation that is distributed globally raised the need to integrate systems on a world-wide level.
- Numerous sub-suppliers and subsidiaries called for a standardised communication protocol.
- A wide set of different business systems required a structured, global approach to integration. (Take any well-known business system, and it is very likely that some subsidiary at Sandvik uses it.)
- A need to alleviate time-consuming communication by manual labour (for example exchanging information by fax).

In addition to the above needs, there is a desire to distribute the responsibility of managing and maintaining the enterprise systems to the subsidiaries. The distribution of responsibility would avoid overloading the central IT staff with maintenance and support of the wide range of ERP systems that the sub-suppliers use. The need for standardised communication protocols made Web services an appropriate choice as the base for the integration infrastructure. The desire to keep the responsibility of service provisioning at the subsidiaries made a service-oriented architecture appropriate.

Integration Solution

The need for integration described above prompted Sandvik to create an integration hub architecture, based on web services. To manage the growing set of services available through the architecture, Sandvik also created a service repository, with associated management routines.
For integration purposes Sandvik created an architecture based on message delivery with IBM MQ and an integration hub based on Microsoft Biztalk. The purpose was to create a common hub that handles SOAP message routing, message translation and message delivery through different channels. Figure 18 gives a simplified view of the architecture.


[Figure 18 diagram: subsidiaries' ERP systems (S1-S6) connected through delivery adapters to the Integration Hub (MS Biztalk).]
Figure 18 Overview of the Integration Hub Architecture

The integration hub provides a standardised and flexible way to communicate with back-end systems and the subsidiaries' ERP systems. However, to avoid having a tailored interface to each of the back-end systems, Sandvik also created a set of standardised interfaces. This set of standard interfaces is meant to be implemented by the subsidiaries' ERP systems. When a subsidiary implements the standardised interfaces, it can be connected to the hub.

The standardised interfaces are documented in a service library. The service library consists of message definitions specified using XML Schemas. In total, the library contains about 70 message definitions. The repository contains at least the following information for each message:

- A textual description of the message
- An XML Schema definition of the request/response message structures
- Information about the current version, and prior versions
- Identification of the systems that can receive the message

An important aspect of the repository that Sandvik has dealt with is its maintenance. In order to put the repository in focus for all system development efforts, Sandvik has defined a special role called Librarian. All change requests for the schema definitions must go through the librarian. The librarian plans the changes so that they are synchronised with updates to the back-end systems. The librarian also maintains the structure of the repository, keeping every schema well-designed by avoiding overlapping functionality, etc.

Success Factors

- Skilled developers enabled Sandvik to create its own solutions when essential features were missing from existing products.
- A pragmatic approach to schema definition enabled the successful creation of XML Schemas that could be utilised by several systems. This pragmatic approach meant that schema definitions were created based on existing systems, using a bottom-up approach.
- The creation of a central repository with service/message definitions makes it easier for developers to find and use existing services.


- Explicit role definitions enable the repository of services to evolve in a controlled way.
- The possibility to get access to Sandvik's global business systems (e.g. sales systems) created a big incentive for the subsidiaries to implement standardised Web service interfaces to their ERP systems.
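The combination of per-message version history and a librarian-gated change process described above can be sketched as follows. Everything here is hypothetical illustration (Sandvik's actual library stores XML Schema definitions; the class and method names are invented):

```python
# Sketch of a message-definition repository with a librarian-gated change
# process (hypothetical names; the real library is XML Schema based).

class ServiceLibrary:
    def __init__(self):
        self._messages = {}        # message name -> list of versions (latest last)

    def register(self, name, description, schema, receivers, approved_by):
        # Every change request must go through the librarian role.
        if approved_by != "librarian":
            raise PermissionError("changes must go through the librarian")
        entry = {"description": description, "schema": schema,
                 "receivers": receivers}
        self._messages.setdefault(name, []).append(entry)

    def current(self, name):
        # Latest version of a message definition.
        return self._messages[name][-1]

    def history(self, name):
        # Current and prior versions, as the repository requires.
        return self._messages[name]

lib = ServiceLibrary()
lib.register("OrderStatusRequest", "Query the status of an order",
             "<xs:schema>...</xs:schema>", ["ERP-SE", "ERP-DE"],
             approved_by="librarian")
```

Gating every registration through one role is what lets the repository evolve in a controlled way: consumers can rely on `current()` while `history()` keeps older versions available for systems that have not yet migrated.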

III.2.1.2 SEB

SEB is one of the leading corporate banks in Sweden and the other Nordic countries. This case description focuses on SEB's role as a provider of pension insurance using web services technology. SEB is a provider of insurances that are sold by numerous insurance brokers. All these brokers need access to SEB systems in order to market, sell and change insurances for the end customers. In the beginning, SEB, the insurance brokers and the end customers were communicating using ordinary mail and fax. Two changes in the business required increased IT system support for these tasks. Firstly, the amount of manual transactions was gradually increasing due to changes in customer behaviour (frequent updates to the insurances became common). Secondly, the biggest insurance brokers and their biggest end customers required a direct connection to SEB systems in order to be able to manage their insurances in an efficient way. Another aspect that required a new look at the integration possibilities was that the largest insurance brokers needed access not only to SEB systems, but to other providers of insurances as well (such as Skandia, another Swedish bank). As usual when dealing with insurance information, an integration solution needs to be secure.

Integration Solution

Given the problems stated above, SEB was in need of a secure communication protocol and an internal architecture that enabled structured access to their internal systems. Since there are many organisations interested in having a secure connection to their insurance brokers, a web-service-based secure communication protocol was developed together with key players in the Swedish insurance domain: the companies Skandia Liv, Länsförsäkringar, Alecta, Aspispronia, Danica, Folksam and SPP. The created protocol is dubbed "Specification of secure electronic communication between organisations in the insurance sector", SSEK (see www.ssek.org).
The architectural solution to enable external access to SEB's internal systems was to create a centralised message gateway.

The SSEK Protocol

SSEK is an open specification for secure communication based on Web services and XML standards. The foundation of the protocol is to use XML SOAP messages sent over an SSL-secured channel. The SSEK protocol is fully described at www.ssek.org; however, to give an overview of the scope of SSEK, the following SSEK features can be mentioned:

- Identification of how to handle unique transaction identifiers.
- Specification of how to apply XML signatures to digitally sign documents.
- Identification of common error codes (SOAP Faults) that both parties need to handle.
- Definition of four security levels and their mandated use of signatures and SSL client and server certificates.

It should be noted that the above domain-specific features are not specified in the current Web services standards; thus the SSEK specification is very valuable as a basis for establishing secure Web service connections in this domain.
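One of these features, the unique transaction identifier, can be sketched with Python's standard library: the sender places a fresh id in the SOAP header, and the receiver rejects any message whose id it has already processed. The element name `TxId` and the message payload are invented for the illustration; real SSEK additionally mandates XML signatures and SSL, which are omitted here:

```python
# Sketch of one SSEK idea: each request carries a unique transaction id in the
# SOAP header so the receiver can detect duplicates/replays. Element names are
# hypothetical; real SSEK also requires XML signatures and an SSL channel.

import uuid
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(body_xml):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    txn = ET.SubElement(header, "TxId")        # hypothetical element name
    txn.text = str(uuid.uuid4())               # fresh id per transaction
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(body_xml))
    return env

seen = set()

def accept(envelope):
    # Receiver side: process a transaction id at most once.
    txid = envelope.find(f"{{{SOAP_NS}}}Header/TxId").text
    if txid in seen:
        return False
    seen.add(txid)
    return True

req = build_request("<UpdateInsurance><policy>P-1</policy></UpdateInsurance>")
```

Delivering the same envelope twice, whether by accident or replay attack, is then detected at the gateway rather than propagated to the back-end systems.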


The Message Gateway

The SSEK specification enables secure message exchange between the insurance brokers and SEB. When the messages arrive at SEB, they need to be routed to the back-end systems. These back-end systems use different communication protocols; thus, before routing the SOAP messages to the appropriate system, they need to be transformed into the respective system's native communication format. Another problem that needed to be dealt with was that one incoming message might result in numerous messages sent to the back-end systems. In order to solve these problems, a message gateway was introduced. The message gateway acts as a middle-man, handling message transformations, message splitting and message joining. Furthermore, the gateway contains generic functionality for handling authentication and logging, which enables rapid development of new services. The insurance brokers were connected to the gateway using the SSEK protocol. Figure 19, below, gives a schematic overview of the solution.

[Figure 19 diagram: an insurance broker communicates over SSEK (SOAP over HTTP/SSL) with SEB's message gateway, which connects to back-end systems (B1, B2, ...) using proprietary protocols.]
Figure 19 Overview of the solution

The message gateway is built using a custom C++ ISAPI extension running inside the Microsoft Internet Information Services (IIS) web server.

Success Factors

The following points summarise why the integration solution was successful:

- Focus on technical protocol issues, rather than business aspects, made it possible for competing organisations to jointly define a domain-specific protocol specification, SSEK.
- A single entry point to all back-end systems enabled a structured approach to exposing existing systems as services.
- Separating generic functionality such as authentication, logging and protocol translations from the service implementation makes it easy to add more services later on. This generic functionality was put into the message gateway.
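The gateway's core duties, routing, per-system transformation, and fan-out of one incoming message into several back-end messages, can be sketched as follows. The message structure, back-end names and protocol labels are invented for illustration; SEB's actual gateway is a C++ ISAPI extension:

```python
# Sketch of the gateway's duties: routing, transformation and message
# splitting (hypothetical message format and back-end names).

def transform(message, backend):
    # Translate the common SOAP-level message into the back-end's
    # native communication format.
    return {"format": backend["protocol"], "payload": message["payload"]}

def route(message, backends):
    # One incoming message may fan out into several back-end messages
    # (message splitting); each copy is transformed for its destination.
    out = []
    for name in message["destinations"]:
        out.append((name, transform(message, backends[name])))
    return out

backends = {
    "B1": {"protocol": "MQ"},
    "B2": {"protocol": "proprietary-XML"},
}

incoming = {"payload": "<NewPolicy/>", "destinations": ["B1", "B2"]}
dispatched = route(incoming, backends)
```

Because the broker-facing protocol (SSEK) and the back-end formats meet only inside `transform`, adding a new back-end system means adding one entry and one transformation, without touching the brokers.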


III.2.1.3 Web Services Security Case Study

This short case study gives a cursory description of how a big company (37,000 employees in 130 countries), which uses web services to let a selected few customers access certain information residing in its back-end systems, has implemented security controls to protect its services.

The overall architecture is built up through the use of brokers like Microsoft BizTalk and proprietary broker software built in-house. When a client invokes a web service, the broker takes care of validating (authorizing) the client's permission to access the particular service requested. As an example, a specific customer may only be allowed to invoke a service that extracts information based on the customer's unique id. If a customer makes a request based on another id, this request should be rejected. The access rights assigned to each client are placed on the broker in the form of XML-based configuration files. Thus, technologies like Active Directory or other database solutions for storing the access rights are not used. Today, this solution does not constitute a big problem, since there are only a few users involved, approximately 30. However, it is anticipated that the administration overhead of this solution will become a big problem in the future as more users are connected.

Initial authentication of the requester is also performed on the broker, using simply HTTP and standard basic authentication (thus, passwords are not really encrypted, but encoded using Base64 encoding), if the request is made from inside the corporate network. Internal threats are acknowledged, e.g. there have been occasional virus outbreaks, and such viruses could potentially jeopardize the confidentiality of information sent on the internal network. However, the internal network is constantly monitored to detect and prevent sniffing and similar compromising activities.
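The point that Basic authentication merely encodes rather than encrypts credentials is easy to demonstrate: anyone who captures the `Authorization` header can reverse it with one standard-library call (the credentials below are of course illustrative):

```python
# HTTP Basic authentication only Base64-encodes "user:password"; it offers
# no confidentiality without a protected channel. (Illustrative credentials.)

import base64

user, password = "alice", "s3cret"
header_value = "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()

# An eavesdropper on the network reverses it trivially:
decoded = base64.b64decode(header_value.split()[1]).decode()
```

This is why the company's reliance on plain HTTP with Basic authentication is acceptable only under the assumption that the internal network is actively monitored, and why external traffic is instead carried over SSL.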
Since services are also available from outside the network, if an external request is made, the traffic is encrypted and protected through the use of SSL and digest authentication. Neither UDDI, nor any other directories, are used to publish services publicly. Those customers who want to use a certain web service need to manually contact the company to get a user account in order to be able to access the service. The password and user id are sent separately to users in encrypted format. For some of the services, the user is allowed to change the password on their own. On the positive side, this lets the user change the password quickly and easily if it has been accidentally compromised. At the same time, it opens up the possibility of users choosing weak passwords. It was argued that not letting the client change the password leads to better control, and that strong passwords can be generated and supplied to the users. However, if the password is supposed to be handled somehow by a human, it is likely that strong passwords are difficult to remember and will end up being written down on pieces of paper. There are no special policies regarding how passwords should be handled, but on the other hand it was argued that customers could not be bothered and restricted too heavily. Seen from another perspective, though, it should be in the customers' interest to achieve a high level of security. Either way, using passwords as the primary authentication technique introduces security problems and difficult trade-offs.

All service requests are logged on the broker, and information such as failed authentication attempts, which user id has the most failed logins, how frequently certain IP addresses have performed requests, which user ids are the most frequent requestors, and similar information can be gathered. The product used for collecting these statistics is called Analog (www.analog.cx) and is described as a handy tool for this purpose.


The broker is positioned on an internal network and is protected by traditional firewalls. According to the interviewees, the broker has been attacked quite heavily (as shown in the log files), and still there have been no security incidents that they are aware of. The input sent to services is not validated in any way with respect to data type, size etc., e.g. by using XML-firewall capabilities. Despite this, attacks based on unvalidated parameters are not seen as a major threat. For example, SQL-injection attacks are not a concern, since few back-end databases are accessed and those that are use stored procedures. (Using stored procedures does not necessarily protect you from SQL-injection attacks, though. In fact, stored procedures could possibly be used to perform quite remarkable attacks; see http://www.spidynamics.com/whitepapers/WhitepaperSQLInjection.pdf.) According to the interviewee, parameters in the service call will end up in some routine in a legacy application somewhere, which will generate an error if erroneous input is received. However, services are not tested with respect to security issues before being deployed, and there is no guarantee that a back-end legacy application will not respond unexpectedly to erroneous input. Currently, there is no filtering being done on data that is sent back from a service, but this is something that would be desirable, based e.g. upon the rules in the configuration files mentioned earlier.

When a new service/schema is developed, a certain development process is followed. At a certain stage in this process, the information owner gets to say what information is regarded as so sensitive that it cannot be made publicly available at all. This control serves as a guard so that no confidential information is put up for access by developers who are not fully aware of the protection level of the information. All requests to change a service go through a librarian, which guarantees a certain quality level in the development process.
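As the text notes, stored procedures alone are not a guarantee; the underlying defence against SQL injection is keeping user input out of the query text via parameter binding. A minimal illustration (SQLite in-memory database used purely for demonstration; the table and the injected string are invented):

```python
# Parameterized queries keep user input as data, never as SQL text.
# (SQLite in-memory database purely for illustration.)

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT, name TEXT)")
conn.execute("INSERT INTO customers VALUES ('42', 'Acme')")

malicious_id = "42' OR '1'='1"   # classic injection attempt

# Unsafe: string concatenation makes the attack part of the SQL statement,
# so the WHERE clause becomes always-true and every row is returned.
unsafe_sql = "SELECT name FROM customers WHERE id = '" + malicious_id + "'"
unsafe_rows = conn.execute(unsafe_sql).fetchall()

# Safe: the ? placeholder binds the whole string as a single literal value,
# which matches no id, so nothing leaks.
safe_rows = conn.execute(
    "SELECT name FROM customers WHERE id = ?", (malicious_id,)).fetchall()
```

The same principle applies whether the statement lives in application code or inside a stored procedure: a procedure that concatenates its arguments into dynamic SQL is just as injectable as the concatenated query above.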
In the company, no one in the IT department works only with security issues. There is a security department that deals with company-wide security issues; this mainly concerns physical protection of computers, servers, buildings and other assets. It is not clear whether this department has any views regarding web services. Finally, it was pointed out that the security solution in place needs to be based on estimates of what a reasonable level of security should be. Security is always a trade-off between how much money to spend on security and what the costs will be if security is breached and information ends up in the wrong hands. It then becomes crucial that such an estimate has been made and that it can be used as a basis for making trade-off decisions. The company's specific interest within security, related to web services, is expressed in the following bullets (this is what they want to know more about):
- Fine-meshed security (authorisation, filtering of input/output, validation of input against a schema)
- Interesting security products available on the market
- Standards. Should we be using WS-Security? What are the benefits, what are the problems?

Page 75 of 366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

III.2.2 P2P Example Applications

III.2.2.1 Gnutella
Gnutella [Gnutella] is a fully distributed information-sharing technology. Each piece of Gnutella software is both a server and a client in one, because it supports bi-directional information transfer. When you run Gnutella software and connect to the Gnutella network, you bring with you the information you want to make public. That could be nothing, one file, a directory, or your entire hard drive. From a scientific perspective, what makes Gnutella different is that it doesn't rely on any central authority to organize the network or to broker transactions (as Napster does). You need only connect to one arbitrary host. Any host. Discovery of an initial host is done automatically by a handful of host caches. Once you connect with one host, you're in. Installing any of several available clients is all that is needed to become a fully functional Gnutella site.
Gnutella client software is basically a mini search engine and file serving system in one, based on the TCP protocol. The client communicates directly only with the handful of sites that it has agreed to contact. Each Gnutella node is free to interpret a query as it wants, answering in the form of filenames, advertising messages, URLs, graphics and other arbitrary content. There is no such flexibility in other similar systems (such as Freenet). The Japanese Gnutella project (http://www.jnutella.org/) is deploying Gnutella on mobile phones, where the results of a search are tailored to mobile phone interfaces. When you search for something on the Gnutella network, that search is transmitted to everyone in your Gnutella network "horizon". If anyone has anything matching your search, they will tell you. If not, the request is passed from each host to all hosts known to it. Each node in the chain may cache the reply locally, so that it can reply immediately to any further request for that particular document. Eventually, you get many replies containing the information to choose from. You double-click on one to download the information from the Gnutella node that has it. As in other communication systems, the idea of decay is implemented: messages time out after passing through a certain number of nodes, so that huge chains don't form. This is what produces the concept of horizon introduced earlier: each node can see a certain distance in all directions, but not beyond that. Each node is situated slightly differently in the network and as a result sees a slightly different network. It is like standing in the middle of a crowd: you can only see for a short distance around you. Obviously there are countless more people outside your immediate vision, but you can't tell how many.
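The TTL-based flooding and the resulting horizon can be sketched as follows (a simplified model; the network and node identifiers are invented for illustration):

```python
from collections import deque

def flood(start, neighbours, ttl):
    """Breadth-first flood of a query with a time-to-live, Gnutella-style.

    `neighbours` maps each node to the nodes it is directly connected to.
    Returns the set of nodes the query reaches: the 'horizon' of `start`.
    """
    reached = {start}
    queue = deque([(start, ttl)])
    while queue:
        node, hops_left = queue.popleft()
        if hops_left == 0:
            continue  # the message times out: the decay that keeps chains short
        for peer in neighbours[node]:
            if peer not in reached:
                reached.add(peer)
                queue.append((peer, hops_left - 1))
    return reached

# A small line-shaped network: each node only sees a limited horizon.
net = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(flood(0, net, ttl=2)))  # → [0, 1, 2]: nodes 3 and 4 lie beyond the horizon
```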
Even though Gnutella was the first successful, fully decentralized, peer-to-peer system, what really matters is that Gnutella's message-based routing system affords its users a degree of anonymity by making request and response packets part of a crowd of similar packets issued by other participants in the network. Most messages that are passed from node to node contain nothing (such as an IP address) that might tie a particular message to a particular user. Also, Gnutella's routing state is not accessible: the routing tables are dynamic and stored in the memory of the countless nodes for only a short time. It is nearly impossible to learn which host originated a packet and which host is destined to receive it.
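The short-lived, per-query routing tables just described can be sketched in miniature (node names and the query ID are invented for illustration): each node remembers only which neighbour a query arrived from, so a reply retraces the path hop by hop without any node learning the originator's address.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.route_back = {}  # query_id -> neighbour the query came from

    def receive_query(self, query_id, from_node):
        # Transient routing state: only the previous hop is recorded.
        self.route_back[query_id] = from_node

    def receive_reply(self, query_id, reply, trace):
        trace.append(self.name)
        prev = self.route_back.get(query_id)
        if prev is not None:
            prev.receive_reply(query_id, reply, trace)  # forward one hop back

a, b, c = Node("A"), Node("B"), Node("C")
# Query 7 travels A -> B -> C; each hop records only its predecessor.
b.receive_query(7, a)
c.receive_query(7, b)

trace = []
c.receive_reply(7, "file.mp3", trace)
print(trace)  # → ['C', 'B', 'A']: the reply retraces the path without knowing the origin
```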

III.2.2.2 CFS
The Cooperative File System (CFS) [CFS] is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems.
CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers. CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.

III.2.2.3 Edutella
Edutella [Edutella] is a peer-to-peer system for the exchange of educational metadata. Edutella lives on top of the Semantic Web framework as a distributed query and search service. It is the first application of the Edutella project, a multi-staged effort to scope, specify, architect and implement an RDF-based metadata infrastructure for JXTA P2P applications. The initial Edutella services are the Query Service (standardized query and retrieval of RDF metadata), the Replication Service (providing data persistence/availability and workload balancing while maintaining data integrity and consistency), the Mapping Service (translating between different metadata vocabularies to enable interoperability between different peers), the Mediation Service (defining views that join data from different metadata sources and reconcile conflicting and overlapping information) and the Annotation Service (annotating materials stored anywhere within the Edutella network). The vision of the project is to provide the metadata services needed to enable interoperability between heterogeneous JXTA applications.
The Edutella system is a P2P network for the exchange of educational resources (using schemas like IEEE LOM, IMS, and ADL SCORM to describe course materials).

III.2.2.4 Groove
Groove [Groove] is a platform and set of services for direct communication and collaboration, both within the enterprise and across corporate boundaries. Groove uses a concept of shared and collaborative areas, called Shared Spaces. You create Shared Spaces that let others communicate with you. For example, after setting up a Conversation Space, you can invite others to chat, share documents, exchange files, or perform any other function that is configured to run in every client's version of Groove. At any time, a user can see who may currently be accessing one or more of the user's defined spaces. Users must formally invite other users to share spaces.

III.2.2.5 Avaki
Avaki [Avaki] is a P2P system that supports distributed computing. It provides a single virtual computer view of a heterogeneous network of computing resources. It is a classic example of metacomputing applied to networks ranging from corporate compute and data grids to global application grids of Internet scale. Avaki is object-based: every entity in the system is an individually addressable object with a set of interfaces for interaction. This approach enables uniformity in system design through inheritance, and containment through encapsulation. High performance, exploitation of heterogeneity, scalability, and masking of complexity were key design criteria in the development of Avaki.


III.2.3 Grid example applications

III.2.3.1 Furniture-to-be Scenario
Small and medium enterprises aim to reach the supply capacity of a big enterprise. To do so they need, for example, to share online services in a pool of resources that could be managed by grid technology. This approach offers added value to all the participants in two major ways: first, they can join efforts and services to provide a combined output that none of them could achieve alone, and thereby explore new business opportunities once reachable only by big companies; secondly, together they can provide a front end with homogenized business methods and services, rich in resources, that makes an improved customer service possible. Grid Services [TCFFGK 2002] address both the statelessness and the non-transience of plain Web services by proposing a factory approach to Web services.

Figure 20: Grid Services approach with factories

Instead of having one big stateless OrderService shared by all users, one central OrderService factory creates an instance of the service for each client requesting it. To create an OrderService instance, the client sends a request to the factory, and all subsequent OrderService operations are performed through the newly created instance. Such instances are transient because they have a limited lifetime (the instance will eventually be destroyed). The lifetime of an instance can vary from application to application. Usually, it is useful if instances live only as long as a client has any use for them; this way, every client has its own personal instance to work with. However, there are other scenarios where it might be necessary for an instance to be shared by several users, and to be destroyed when no client has accessed it for a certain time. It is then clear that Grid Services are an extension of Web services that solves the basic problems of statelessness and non-transience. This scenario intends to present the potential behind enhanced management of resources in collaborative engineering environments in the furniture industry, by clustering manufacturers' resources.
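The factory pattern just described can be sketched in a few lines (the class names follow the scenario; the tick-based lease lifetime is a simplifying assumption, not part of the Grid Services specification):

```python
import itertools

class OrderService:
    """A transient, per-client service instance with a limited lifetime."""
    def __init__(self, instance_id, lifetime):
        self.instance_id = instance_id
        self.lifetime = lifetime  # remaining 'lease' ticks before destruction
        self.orders = []          # per-instance state: what makes it stateful

    def order(self, product, quantity):
        self.orders.append((product, quantity))
        return f"instance {self.instance_id}: ordered {quantity} x {product}"

class OrderServiceFactory:
    """One central factory; every client gets its own transient instance."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}

    def create_instance(self, lifetime=10):
        inst = OrderService(next(self._ids), lifetime)
        self.instances[inst.instance_id] = inst
        return inst

    def tick(self):
        """Advance time; destroy instances whose lease has expired."""
        for iid, inst in list(self.instances.items()):
            inst.lifetime -= 1
            if inst.lifetime <= 0:
                del self.instances[iid]

factory = OrderServiceFactory()
client_x = factory.create_instance(lifetime=2)
client_y = factory.create_instance(lifetime=5)
client_x.order("table", 100)    # state lives in client X's own instance
factory.tick(); factory.tick()  # client X's lease expires
print(sorted(factory.instances))  # → [2]: only client Y's instance survives
```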


Figure 21: GRID demonstrator (a retailer's requests for tables and chairs are split across three manufacturers, which return confirmations)

The potential of aggregating the manufacturers' resources can be seen in Figure 21, where none of the three manufacturers alone can fulfil the retailer's order due to their limited production capacity. Two paths can follow: the loss of the business opportunity, or, as can be seen in the demonstrator, the aggregation of the manufacturers' resources in a collaborative networked environment where together they can fulfil the retailer's request. This not only has the advantage of business fulfilment but also of transparency in the manufacturer relationships. The front-end order service of the demonstrator handles the collection of the parts of the requested resources and delivers them to the retailer in one simple action. As shown in Figure 22, manufacturers A, B and C are registered in a grid platform that hosts the front-end FrontOrdersService factory. In this way, when a given client needs a product, instead of contacting the whole set of manufacturers to order a product based on a specific criterion, he only has to create an instance in the central FrontOrdersService factory, which will search for the product among the registered manufacturers and place the order with the manufacturer offering the best price, the criterion used in this case. Each of the manufacturers A, B and C provides two services, PriceService and OrderService. The PriceService receives a product name and returns its price. The OrderService receives a product name, processes the request on the manufacturer side and then returns the confirmation of the order. These services are used by the FrontOrdersService instances in order to search for the best price for a requested product, and to perform the order on behalf of the client with the manufacturer making the best offer.
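The interplay between the PriceService, OrderService and the front end can be sketched as follows (the manufacturer names come from the scenario; the prices are invented for illustration):

```python
class Manufacturer:
    """Each manufacturer exposes a PriceService and an OrderService."""
    def __init__(self, name, prices):
        self.name = name
        self.prices = prices  # product name -> unit price

    def price_service(self, product):
        return self.prices.get(product)  # None if the product is not offered

    def order_service(self, product, quantity):
        return f"{self.name} confirms {quantity} x {product}"

def front_orders_service(manufacturers, product, quantity):
    """Query every PriceService, pick the best offer, place the order there."""
    offers = [(m.price_service(product), m) for m in manufacturers
              if m.price_service(product) is not None]
    best_price, best = min(offers, key=lambda offer: offer[0])
    return best.order_service(product, quantity)

manufacturers = [
    Manufacturer("A", {"table": 90, "chair": 25}),
    Manufacturer("B", {"table": 95, "chair": 20}),
    Manufacturer("C", {"chair": 30}),
]
print(front_orders_service(manufacturers, "table", 100))
# → "A confirms 100 x table": manufacturer A offered the best table price
```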



Figure 22: Grid Specification

The flow of information in this scenario is the same for all clients' requests, but each client creates his own FrontOrdersService instance, and this instance creates instances of the manufacturers' OrderServices and PriceServices. In this way a Client X request is totally independent of a Client Y or Z request. Taking Client X as an example, first he has to create an instance of the FrontOrdersService (1x). Then he performs the service request, in this case a table order, on the newly created instance (2x). After this, the FrontOrdersService creates an instance of each of the manufacturers' PriceServices and asks for the price of the table, evaluates the replies and stores the best offer as being manufacturer A (3x). Knowing the best manufacturer's table price, the FrontOrdersService creates an
instance of the OrderService at the selected manufacturer (4x). This instance is requested with a table order (5x), which is subsequently processed in the manufacturer's system (6x). The confirmation is then sent to the FrontOrdersService instance (7x) and finally to Client X (8x). Although this may appear a complex procedure, it gives the client a level of abstraction that facilitates the purchase of the lowest-priced product. As far as the client is concerned, he only has to interface with the service once; the internals of the platform then do all the work of ordering the product from the manufacturer with the lowest price, or by another suitable criterion.


III.3 Technical aspects of eServices

III.3.1 Web Services
By definition, a Web service is a self-contained, self-describing, loosely coupled, reusable software component that can be published, discovered/located, and invoked via Internet protocols. A Web service is agnostic of operating systems, programming models, and languages. It provides an interface describing how other systems can interact with it using messages. Web services perform functions, which can be anything from simple requests (transformation, storage and/or retrieval of data) to complicated business processes (aggregation, composition, orchestration). The basic technological infrastructure for Web services is structured around XML-based standards and Internet protocols. These standards provide building blocks for service description, discovery, and interaction. Web service technologies have clearly had a positive influence on the development of integrated systems by providing programmatic access to Web services. They are evolving toward being able to solve critical integration issues including security, transactions, collaborative process management, semantic aspects, and seamless integration with existing middleware infrastructures. Figure 23 provides an overview of existing Web service specifications, organized in terms of the issues that they address.
- Composition/Choreography: BPEL4WS, WSCI; ebXML BPSS
- Description: WSDL, WS-Policy; ebXML CPP/CPA
- Transactions: WS-Coordination/Transaction; OASIS BTP
- Advertisement/Discovery: UDDI, WS-Inspection; ebXML Registry
- Messaging: SOAP, WS-Security, WS-ReliableMessaging, WS-Routing; ebXML Messaging Service
- Transport: HTTP, HTTPS, SMTP
- Format and Encoding: Unicode, XML, XML Schema

Figure 23 Overview of the Web services stack


III.3.1.1 Web Service Description / Publication
In this section, we first describe the use of SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), and UDDI (Universal Description, Discovery, and Integration) as building blocks for Web services-enabled applications [Alonso et al. 2003, Curbera et al. 2002]. Then, we give a brief overview of other Web service standards.

Simple Object Access Protocol (SOAP)
SOAP provides an XML-based protocol for structured message exchanges. It relies on existing transport protocols such as HTTP and MQSeries. SOAP features document-based communication among Web services, which allows the integration of loosely coupled services. A SOAP message contains two parts: the header and the body. The header includes information such as the intended purpose (e.g., service invocation, invocation results), the sender's credentials, the response type, and so on. The body contains an XML representation of a service invocation request (i.e., the name of the operation to be invoked and the values of its input parameters) or response (i.e., the results of a service invocation). SOAP implementations exist for several programming languages including Java and C. SOAP implementations provide mappings between SOAP messages and formats understood by service implementations (e.g., Java classes): they typically generate the SOAP header automatically, and map the contents of SOAP message bodies to data structures in the host language (e.g. Java objects).

Web Service Description Language (WSDL)
WSDL [W3 Consortium 2001a] is an XML-based language for describing the functional properties of Web services. It aims at providing self-describing, XML-based service definitions that applications, as well as people, can easily understand. In WSDL, a service consists of a collection of message exchange end points. An end point contains an abstract description of a service interface and an implementation binding.
The abstract description of a service contains: (i) definitions of the messages which are consumed and generated by the service (i.e., input and output messages) and (ii) the signatures of service operations. The implementation binding provides a means to map abstract operations to concrete service implementations. It essentially contains information about the location of a binding, the communication protocol to use for exchanging messages with the service (e.g., SOAP over HTTP), and mappings between the abstract description of a service and the underlying communication protocol's message types (i.e., how interactions with the service occur over SOAP).
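The two-part SOAP message structure described above can be illustrated with a minimal envelope (the operation, parameter and header element names are invented for illustration; only the Envelope/Header/Body structure follows the SOAP specification):

```python
import xml.etree.ElementTree as ET

# A minimal SOAP 1.1 envelope: a header carrying metadata and a body
# carrying the XML representation of a service invocation.
message = """
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <transactionID>42</transactionID>
  </soap:Header>
  <soap:Body>
    <getPrice xmlns="urn:example:priceservice">
      <product>table</product>
    </getPrice>
  </soap:Body>
</soap:Envelope>
"""

NS = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
envelope = ET.fromstring(message)

# The body holds the operation to invoke and its input parameters.
body = envelope.find("soap:Body", NS)
operation = list(body)[0]
print(operation.tag)  # → {urn:example:priceservice}getPrice
print(operation.find("{urn:example:priceservice}product").text)  # → table
```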

III.3.1.2 Web Service Discovery

Universal Description Discovery and Integration (UDDI)
UDDI is a specification of an XML-based registry for Web services. It defines an interface for advertising and discovering Web services. The UDDI information model, defined through an XML schema, identifies three types of information: white pages, yellow pages, and green pages. White pages contain general information such as business name (i.e., the service provider's name) and contact information (e.g., the provider's phone numbers). Yellow pages contain metadata that can be used to effectively locate businesses and services based on classification schemes. For instance, UDDI uses the following standard taxonomies to facilitate business/service discovery: NAICS (North
American Industry Classification System), UNSPSC (Universal Standard Products and Services Code System), and ISO 3166 (the ISO geographical classification system). The green pages contain service access information, including service descriptions and binding templates. A binding template represents a service end point (i.e., a service access interface). It refers to an entity called the tModel. A tModel describes the compliance of a service with a technical specification (e.g., a WSDL document, an RMI interface, a CORBA IDL). For instance, a WSDL document can be registered as a tModel in the UDDI registry and used in the description of a WSDL-compliant service end point to provide access to service operations. The current stable version of UDDI is version 3.

III.3.1.3 Web Service Composition (Coordination / Transactions)
Web service composition refers to the development of new Web services by interconnecting existing ones according to some business logic, expressed (for example) as a business process model. For example, a composite Web service for travel arrangement could bring together a number of Web services for flight booking, accommodation booking, attraction search, car rental, event booking, etc. in order to provide a one-stop shop for its users. Web service composition is a key element of the Web services paradigm, as it provides a means to integrate heterogeneous enterprise applications and to realize business-to-business collaborations. Orchestration deals with implementation management (what happens behind interfaces, i.e. process execution). This means that orchestration is a private process, controlled by one party, and it defines the steps of an executable workflow. Proposals such as BPEL and BPML are clearly at this level. Choreography is more about what happens between interfaces. It can involve static or dynamically negotiated protocols.
In this sense, choreography is a public abstract process, where conversations are made up of equals, and they define sequences of observable messages [Peltz 2003]. In this section, we describe a representative sample of ongoing efforts in service composition, orchestration and choreography standardization.

Business Process Execution Language for Web Services (BPEL4WS)
The Business Process Execution Language for Web Services (BPEL4WS [Thatte 2003]) is a language to model Web service based business processes. The core concept is the representation of peer-to-peer interactions between a process and its partners using Web services and an XML-based grammar. It is built on top of WSDL (both the processes and the partners are modelled as WSDL services). BPEL4WS (BPEL for short) is an XML-based language that allows for controlling the process flow (states, coordination and exception handling) of a set of collaborating Web services. For that, it defines the interactions that exist within and between organisational processes. The language uses either a graph-based or an algebraic representation, and offers the ability to manage both abstract and executable processes. It provides constructs to handle long-running transactions (LRTs), compensation and exceptions, using the related standards WS-AtomicTransaction, WS-BusinessActivity and WS-Coordination. BPEL offers an interesting feature that allows an independent representation of the interactions between the partners. The interaction protocols are called abstract processes, and they are specified in business protocols. This concept separates the external behaviour of the partners
(public and visible message exchange behaviour) from their private internal behaviour and implementation. Executable processes are represented using the BPEL meta-model to model the actual behaviour using the three classical flows: the control flow, the data flow and the transactional flow. It also includes support for the message flow. As in traditional flow models, the control flow defines the execution flow as a directed acyclic graph. The language is designed to combine the block-oriented notation and the graph-oriented notation. It contains powerful constructs for modelling structured activities: aggregation, branching, concurrency, loops, exceptions, compensation, and time constraints. Links are used to define control dependencies between two block definitions: a source activity and a target activity. Activities can be grouped within a scope, and associated with a scope are three types of handlers: fault handlers, compensation handlers, and event handlers. When an error occurs, the normal processing is terminated and control is transferred to the corresponding fault handler. A process is terminated when it completes normally, when a terminate activity is called (abnormal termination), when a fault reaches the process scope, or when a compensation handler is called. BPEL's basic activities are handled by three types of messages: <invoke> to invoke an operation on a partner, <receive> to receive an invocation from a partner, and <reply> to send a reply message to a partner invocation. Each message must be associated with a partner, which prevents message exchange between two internal components, for instance. Furthermore, there is no ability to associate a timeout with the <invoke> activity; this can block the system if no response is returned. Data flow management is ensured using scoped variables.
Input and output of activities are kept in variables, and data is transferred between two (or more) activities via shared data spaces that are persistent across Web services and global to one scope. The <assign> activity is used to copy data from one variable to another. BPEL also proposes a compensation protocol to handle the transactional flow, and particularly long-running transactions. One can define either a fault handler or a compensation handler. Handlers are associated with a scope: a fault handler defines alternate execution paths within the scope, while the compensation handler is used to reverse the work performed by an already completed scope. On collaboration aspects, BPEL is able to model several types of interactions, from simple stateless interactions to stateful, long-running, asynchronous ones. Partner link types are used to model partner relationships, and correlation sets represent the conversations, maintaining the state of the interaction. The choreography of the collaborative business processes is defined as an abstract process.

Web Service Choreography Interface (WSCI)
The WSCI specification [BEA et al. 2002], proposed by Sun, SAP, BEA and Intalio, is an XML-based language for describing the observable behaviour of a Web service during a message exchange in the context of a collaborative business process. This language makes it possible to describe the sequence of Web service invocations, i.e. the conditions under which an operation can be invoked. The specification is mainly concerned with public message exchanges among Web services and supports message correlation, sequencing rules, exception handling and transactions. As WSCI defines the flow of messages exchanged by a stateful Web service describing its observable behaviour, it does not directly address the issue of supporting executable business processes, as BPEL does. A WSCI document defines only one partner's participation in a message
exchange, including the specification of temporal constraints and logical dependencies, using constructs for expressing the flow chart and conditional correlation. Thus, other Web services can unambiguously interact with it according to the intended collaboration. This means that a collaboration is described using a set of WSCI documents, one for each partner. There is neither a private workflow nor a global cooperative business process. A WSCI interface is built on top of a WSDL interface, which defines the stateless operations that are supplied by a Web service. Therefore, a WSCI interface can be regarded as an augmented WSDL interface that includes operation abstraction, simple sequencing (call, delay, empty, fault, and spawn), message correlation and properties based on message contents. An action in WSCI maps to a WSDL operation and a role to perform it; this corresponds to a basic activity in BPEL. A second level aims at defining exceptions, transactions and compensating transactions, and offers rich sequencing rules: loops, branches, joins and nested activities (all, choice, foreach, sequence, switch, until, and while). Thus, a stateless WSDL description can be transformed into a stateful message exchange using WSCI; this corresponds to structured activities in BPEL. However, WSCI does not define a transactional protocol, but only exposes the transactional capabilities of Web services in the context of a collaboration. An extensibility feature of WSCI suggests using RDF to annotate a WSCI interface definition with additional semantics.

Business Process Management Language (BPML)
BPML [BPMI 2002] from BPMI (the Business Process Management Initiative) is a language that provides an abstract model and grammar for describing business processes. BPML allows both abstract and executable processes, Web services orchestration and multi-partner collaboration choreography to be defined. BPML can be used to develop a private implementation of already existing WSCI collaborations.
In fact, BPML is more or less at the same level as BPEL and can be used to define the series of activities a business process performs, using a block-structured language. An activity is a component performing a specific function, and atomic activities can be composed into complex activities. A BPML specification extends the WSCI activity types, adding assign, raise and synch. A process is a complex activity which can be invoked by other processes. The language includes three process types: nested processes (a process that is defined to execute within another process, such as WfMC nested processes), exception processes to handle exceptional conditions, and compensation processes to support compensation logic. An activity executes within a context, which is similar to a BPEL scope. A context is an environment for the execution which allows two activities to (1) define a common behaviour, e.g. coordination of the execution using signals (such as the raise or synchronize signals), and (2) share properties (data flow exchange between activities). A context is transmitted from a parent to a child and it can be nested. The language includes a logical process model to express concurrency, loops or dynamic tasks. Process instantiation is based either on the receipt of a message, or on a system event and scheduling, or on invocation from an activity (called or spawned).

ebXML and the Business Process Specification Schema (BPSS)
ebXML (Electronic Business using eXtensible Markup Language) is a global electronic business standard envisioned to define an XML-based framework that will allow businesses to find each other and conduct business using well-defined messages and standard business processes [ebXML 200?]. The ebXML Business Process Specification Schema (BPSS) is the part of the ebXML framework's B2B suite of specifications aimed at representing models for collaborating e-business public processes.
Using XML syntax, BPSS describes public business processes as collaborations
between roles, where each role is an abstraction of a trading partner. It also defines relationships and responsibilities. Being abstract, a definition is reusable, as it only defines the exchange of information (business documents and business signals) between two or more partners. A business process includes business collaborations, which are a choreographed set of business transaction activities. There are two types of collaborations: binary collaborations between two roles, and multi-party collaborations between three or more roles. Multi-party collaborations are decomposed into binary collaborations. BPSS does not use WSDL to describe services. Instead, BPSS process models contain service interface descriptions and capabilities for each role. A partner can declare its support for a given role (service interfaces) in an ebXML Collaboration Protocol Profile (CPP), which serves two purposes. Firstly, it supports messaging exchange capabilities, i.e. specific asynchronous request and response operations, each with a defined message content. ebXML uses SOAP with attachments to manage XML document types and MIME attachments. Secondly, it supports generic acknowledgement and exception messages. This allows for reliable and secure messaging service management, e.g. authorization, encryption, certification and delivery. In BPSS, there is no explicit support for describing how data flows between transactions. Instead, BPSS assigns a public control flow (based on UML activity graph semantics) to each binary collaboration. The control flow describes the sequencing of business transactions between the two roles. It can specify sequential, parallel, and conditional execution of business transactions. In addition, BPSS supports a long-running business transaction model based on transaction patterns. A business transaction consists of a request and an optional response. Each request or response may require a receipt acknowledgement.
Time constraints can be applied to messages and/or acknowledgements. If a transaction fails, the opposite side is notified so that both sides can decide on the actions that need to be taken. Transactions are not nested and there is no support for specifying compensating transactions, so a business transaction either succeeds or fails completely. BPSS handles exceptions by defining a number of possible exceptions and prescribing how these are communicated and how they affect the state of the transaction. Finally, BPSS provides explicit support for specifying quality-of-service semantics for transactions, such as authentication, acknowledgements, non-repudiation, and timeouts.
WSCL
Web Services Conversation Language (WSCL) is a proposal from Hewlett-Packard related to its previous work on e-Speak. WSCL is an XML vocabulary that offers the ability to define the external behaviour of services by specifying the business-level conversations between Web services. One of the main design goals of WSCL is simplicity; as such, WSCL provides a minimal set of concepts necessary for specifying conversations. A WSCL document specifies three parts: the XML schemas that correspond to the XML documents being exchanged as part of the conversation, the conversation description (the order in which documents are exchanged), and the description of the transitions from one conversation to another. In contrast with BPEL or BPML, WSCL does not specify how the content of the exchanged messages is created. The specification states that typically the conversation description is provided from the perspective of the service provider. This can also be used to determine the conversation from the perspective of the


user. Although the conversation is defined from the service provider's perspective, WSCL separates the conversational logic from the application logic or the implementation aspects of the service.
WS-Coordination, WS-AtomicTransaction and WS-BusinessActivity
Since ACID transactions are not suitable for loosely coupled environments like the Web, OASIS BTP and WS-AtomicTransaction/WS-BusinessActivity/WS-Coordination are proposals dealing with coordination aspects specific to Web Services. WS-Coordination [Microsoft et al. 2003a] defines a generic framework that can support various coordination protocols. Each protocol is intended to coordinate a different role that a Web service plays in the activity. Some examples of coordination protocols are Completion (a single participant tells the Coordinator to either try to commit the transaction or force a rollback), 2PC Two-Phase Commit (a participant such as a resource manager registers for this, so that the Coordinator can manage a commit/abort decision across all resource managers), and PhaseZero (the Coordinator notifies a participant just before the 2PC protocol begins). A Coordination Service propagates and coordinates activities between services. The messages exchanged between participants carry a Coordination Context that contains critical information for linking the various activities within the protocol. A Coordination Service consists of several components: an Activation Service that allows a Coordination Context to be created, a Registration Service that allows a Web service to register its participation in a Coordination Protocol, and a set of Coordination Protocol Services for each supported Coordination Type (e.g., Completion, 2PC). WS-AtomicTransaction and WS-BusinessActivity [Microsoft et al. 2003b, Microsoft et al. 2004] are two specifications released in September 2003 and January 2004 by Microsoft, IBM and BEA Systems.
They specify transactional properties of Web Services independently of coordination aspects. An Atomic Transaction is used to coordinate activities having a short duration and executed within limited trust. It has the classical atomicity property (all-or-nothing behaviour) from the ACID properties. A Business Activity provides flexible transaction properties (relaxing isolation and atomicity) and is used to coordinate activities that are long in duration and aimed at applying business logic to handle business exceptions. Actions are applied immediately and are permanent, because the long duration of the activities prohibits locking of data resources. A Web Service application can include both Atomic Transactions and Business Activities.
III.3.1.4 Web Service Security, Reliability & Policy
WS-Security
WS-Security [Kaler et al. 2002] aims at integrating several existing security-related technologies into a coherent model and at providing an XML syntax for this model. This is achieved by defining header elements to be included in SOAP messages. WS-Security does not provide a complete security framework for Web services; however, it does provide mechanisms for ensuring single-message security within SOAP. Three mechanisms are supported in the current specification:
- Propagation of unsigned and signed security tokens in both text and binary formats. Examples of unsigned security tokens include usernames and passwords, while signed tokens include


X.509 certificates and Kerberos tickets. Recent extensions provide support for SAML (Security Assertions Markup Language) assertions and XrML (eXtensible rights Markup Language) licenses.
- Message integrity of SOAP messages, provided by using the XML Signature specification in conjunction with security tokens.
- Message confidentiality, provided by using the XML Encryption specification in conjunction with security tokens.
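As a concrete illustration of the header-based approach, the sketch below builds a SOAP envelope carrying a wsse:Security header with a UsernameToken, using Python's standard ElementTree. This is only a sketch: the namespace URIs correspond to one revision of the specifications and may differ in other versions, and a real deployment would carry a password digest rather than clear text.

```python
import xml.etree.ElementTree as ET

# Illustrative namespace URIs; the exact versions depend on the
# WS-Security revision in use.
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"

def make_secured_envelope(username, password, body_text):
    """Build a SOAP envelope whose Header carries a wsse:UsernameToken."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    security = ET.SubElement(header, f"{{{WSSE}}}Security")
    token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE}}}Username").text = username
    ET.SubElement(token, f"{{{WSSE}}}Password").text = password  # clear text only for illustration
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    body.text = body_text
    return env

env = make_secured_envelope("alice", "secret", "payload")
print(ET.tostring(env, encoding="unicode"))
```

The point is simply that the security information travels inside the SOAP Header, alongside an unchanged Body, which is what lets WS-Security secure individual messages without a full session framework.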

WS-Reliability
WS-Reliability [Evans et al. 2003] and WS-ReliableMessaging [Langworthy 2003] are two competing standards which aim at defining SOAP header elements addressing three issues:
- guaranteed message delivery, through retries;
- at-most-once message delivery, through duplicate elimination;
- guaranteed message ordering, by attaching sequence numbers to messages within a message group.
WS-Policy
WS-Policy [HK 2002] provides a framework with an XML syntax for defining capabilities and requirements of Web services in the form of policy assertions. Policy assertions are statements about an XML element or a Web service description that provide indications regarding the text encoding and natural language used in an XML element, the version of a given standard specification used by a Web service, and the mechanisms used for Web service security (e.g. the authentication scheme) with reference to the WS-Security specification (see above). A related specification, WS-PolicyAttachment, provides a mechanism for associating policy assertions expressed in WS-Policy with WSDL descriptions and UDDI entries.
III.3.1.5 Web Service Billing
Web service billing concerns service brokers and service providers. Service brokers create and manage taxonomies, register services and offer rapid lookup of services and companies. They may also offer value-added information for services, such as statistical information on service usage and QoS data. One key issue for brokers is how to make a profit out of these provided services. On the other hand, service providers need to charge the users of a Web Service. Unlike today's software packages, which are typically made available through licenses based on a per-user or per-site pricing model, Web Services will most likely be licensed according to a pay-as-you-go, subscription-based pricing model. This has the potential to significantly reduce the IT-related costs of supporting software within an enterprise.
Rather than having to buy monolithic software packages, of which users might only use a fraction, the Web Service model allows users to pick and choose precisely the bits of functionality that are needed to perform a specific task, for exactly the time interval in which they are needed. This means that the use of the service should be monitored and billed. Standards do not provide any answers to these questions and research results are still minimal. An initial research contribution is the meter and accounting model described in [EK 2001], which operates on the basic assumption that services with a high degree of value are contracted via Service Level Agreements (SLAs) or their equivalent. This implies that both the provider and the requester agree to the contract. This model has been developed as a service itself. The service stores the contract details, such as the type of contract (long term, ad hoc, etc.), the start and expiration dates of the contract, the


time model to be used (every day, once weekly, etc.) and security details. A number of alternative business models, such as the pay-per-click/fee-for-use model and the subscription and lease models, can be used by the meter. The HP Web Services platform continuously tracks service usage, and allows the service provider to bill only for the time the service was actually used. Neither of these two solutions fully addresses the semantic aspects of billing.
III.3.1.6 Web Services and the Semantic Web
The main challenge of the Semantic Web is to introduce a language for describing web resources, including Web Services. This section aims at giving a brief overview of the present semantic-language landscape. All the languages, as shown in the figure below, use XML as a common syntax.

[Figure: the semantic-languages stack, with XML at the base; RDF, RDF(S)/RDF Schema, XOL, SHOE and OML above it; DARPA DAML and OIL, merged into DAML-OIL; the OWL family (OWL Lite, OWL DL, OWL Full); and DAML-S 0.9 / OWL-S 1.0 at the top.]

Figure 24 Semantic languages stack
XML-Based Ontology Exchange Language (XOL)
XOL is a language for ontology exchange, developed by the US bioinformatics community to facilitate the creation of shared ontologies. Originally the language was developed for bioinformatics use; it is now intended to be used as an intermediate language for transferring ontologies between different applications and tools. The XOL syntax is based on XML and its semantics are based on OKBC-Lite (Open Knowledge Base Connectivity), a simplified form of an API for accessing knowledge representation systems such as object databases, relational databases and so on. However, XOL is not intended to be used for the development of ontologies, but only to integrate different programs and databases.
Simple HTML Ontology Extension (SHOE)
The SHOE language was developed at the University of Maryland as an extension of HTML, to include machine-readable semantic knowledge in Web documents. Recent releases have adapted the SHOE syntax to XML.



SHOE adds new tags to the HTML standard useful for declaring and extending ontology descriptions, defining classification rules, relationship rules, inference rules and instances, and it also includes data type definitions. SHOE aims at an agent architecture in which agents classify web contents with semantic constructors. It prevents the possibility of logical contradictions by permitting only assertions (no retractions or negations) and relations with only one value or a fixed set of values.
Ontology Markup Language (OML)
OML is partially an XML serialization of SHOE, based on conceptual graphs. It is divided into four different levels:
- Core is related to the logical aspects of the language and is included in the other levels;
- Simple is a direct mapping to the RDF language;
- Abbreviated includes conceptual-graph features;
- Standard is the most comprehensive and expressive level of the language.
Each of these versions is designed for a specific purpose, from the most simple to the most expressive and natural. The earlier OML releases were basically translations of the SHOE language into XML, but the newer versions are compatible with the RDF language and RDF Schema, and include the expressiveness of conceptual graphs.
Resource Description Framework (RDF)
RDF (Resource Description Framework) is a framework that enables describing and interchanging metadata with a few simple constructors. It has been extended with a schema description language, with which it composes RDF(S). RDF(S) is the most important semantic language and is the baseline for the newer ones, such as DAML, OIL and OWL. RDF provides a model for metadata. It is a complement to XML, enabling knowledge-base applications and ontology languages. It is based on the idea of identifying things, such as Web Services, in terms of properties and related values.
The most important feature of RDF is simplicity: it provides a very well understood metadata structure for information modeling, based on three notions:
- Resource: anything that can have a URI, such as a web site or a Web Service.
- Property: a property of the thing that a statement describes.
- Statement: a link between a resource and one of its properties.
Practically, each statement is composed of three terms:
- Subject: a URI that indicates a resource.
- Predicate: the property of the subject.
- Object: the value of the property.
A single statement can be described with a graphical representation or with XML.
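The triple model can also be sketched directly in code. The following minimal in-memory store is illustrative only; the function names and store layout are ours, not part of RDF.

```python
# A minimal in-memory triple store illustrating the RDF model:
# each statement is a (subject, predicate, object) tuple.
triples = set()

def add_statement(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all statements matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    ]

# The example statement used in the text: the resource http://thispaper
# has an author property.
add_statement("http://thispaper", "author", "Name of the author")

print(query(subject="http://thispaper"))
```

Pattern matching over such triples is the essence of querying RDF data: fixing any combination of subject, predicate and object selects the matching statements.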



[Figure: the statement drawn as a graph: the subject (resource) http://thispaper is linked by the predicate (property) author to the object (value) 'Name of the author'.]

<rdf:Description rdf:about="http://thispaper"> <s:Author>name of the author</s:Author> </rdf:Description>
Figure 25 Graphical and XML-based representation of an RDF statement
The modeling language is based on three simple artifacts: resources are represented by an oval, values by a rectangle, and predicates by an arrow that links the subject with the object. RDF Schema extends the language with a new vocabulary and defines a semantic extension of the basic language that provides the necessary instruments to describe groups of related resources and the relationships between resources. RDF Schema is very important for the understanding of every semantic language descending from RDF, such as DAML and OWL, because it introduces many capabilities used for describing ontologies. It adds some very important basic concepts such as properties, classes, sub-classes, data types, constraints, containers and collections.
DARPA Agent Markup Language (DAML-ONT)
DAML is a semantic language, based on XML/RDF and developed by the Defense Advanced Research Projects Agency (DARPA), which ties the information on a web resource to machine-readable semantics and provides mechanisms for the explicit representation of services (DAML-S), processes and business models. The goal of this project is to create technologies that enable software agents to dynamically identify and understand information resources, and to provide interoperability between agents in a semantic manner. The DAML language includes the concepts of classes and subclasses, properties and attributes, datatypes, lists, constraints such as cardinality, and logical functions such as union, disjoint, inverse and so on.
Ontology Interchange Language (OIL)
OIL is based on XOL and RDF; it is an OntoKnowledge project that permits semantic interoperability among Web resources.
It is a web-based representation and inference layer for ontologies which is compatible with RDF Schema and includes semantics for describing information and term meanings. OIL provides support for three main concepts:
- structured knowledge is expressed by a Description Logic, which describes knowledge in terms of concepts and role restrictions;
- a frame-based structure provides a way of modeling classes with attributes;
- the syntax of the language is formulated using existing and well-known W3C recommendations (RDF and XML).


In December 2000, this language joined the DAML program to create DAML+OIL, which includes most of the features of the two languages.
Web Ontology Language (OWL)
The OWL language builds on earlier languages, such as RDF and DAML+OIL, with richer modeling primitives, providing a set of constructs to define web ontologies. It sets up the basic infrastructure to build simple inferences between web resources and enables a knowledge-management architecture. In 2000, an extension of the language for describing Web Services was born as a subproject of OWL; it was previously named DAML-S and is now OWL-S. This extension is the current W3C submission for the semantic description of Web Services.
Web Ontology Language for Web Services (OWL-S)
OWL-S is a W3C submission that provides a complete framework, based on the RDF syntax and the OWL ontology model, for describing Web Services in terms of what they can do, how they work and how they can be accessed. OWL-S was born to solve challenges related to Web Services such as semantic Web Services discovery, dynamic Web Services composition and Web Services execution monitoring.

[Figure: a Resource provides the Web Service; the service presents a ServiceProfile, is DescribedBy a ServiceModel, and supports a ServiceGrounding.]

Figure 26 OWL-S model of service
OWL-S describes a Web Service as a resource presented by a ServiceProfile, described by a ServiceModel and supported by a ServiceGrounding:
- The ServiceProfile has functions similar to those of a registry, such as UDDI. It describes the organizations that present the service, provides a list of the I/O parameters, preconditions and effects used by the service, and describes additional information such as quality and accuracy.
- The ServiceModel provides the fundamental information needed for the composition and interoperation of services. OWL-S allows three different kinds of processes: Atomic, Simple and Composite.
- The ServiceGrounding specifies explicitly the input and output links between the atomic processes of the service, showing how the communications between these processes are to be realized as messages. In practice, the Service Grounding is implemented as a set of elements which link each atomic process to the corresponding WSDL file.
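The profile/model/grounding triad can be mirrored in a few illustrative dataclasses. The class and field names below are ours and merely echo the OWL-S structure; they are not part of the OWL-S vocabulary, and the example values are invented.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:      # what the service does (its advertisement)
    provider: str
    inputs: list
    outputs: list

@dataclass
class ServiceModel:        # how the service works (its process model)
    process_kind: str      # "Atomic", "Simple" or "Composite"

@dataclass
class ServiceGrounding:    # how to access it (e.g. a WSDL binding)
    wsdl_url: str

@dataclass
class Service:
    presents: ServiceProfile
    described_by: ServiceModel
    supports: ServiceGrounding

# A hypothetical book-price service described along the OWL-S triad.
svc = Service(
    presents=ServiceProfile("ACME", ["ISBN"], ["Price"]),
    described_by=ServiceModel("Atomic"),
    supports=ServiceGrounding("http://example.org/book.wsdl"),
)
```

The separation matters for matchmaking: a discovery engine only needs the profile, a composition engine the model, and an invocation engine the grounding.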


III.3.2 P2P Services
III.3.2.1 Description of P2P Services
A description or advertisement of a p2p service is a structured representation of the service made available by a peer in a p2p network. In the area of p2p systems there are no standards for the description of p2p services. For example, in Gnutella (a file-sharing p2p system), peers exchange advertisements about themselves, which include their IP addresses, the number of files they have decided to share on the network and the total size of these files; in JXTA (a p2p platform), advertisements are XML documents that describe the resources of a p2p system, e.g. peers, peer groups, services, etc. For each class of p2p application, class-specific functionality can be used for the description of the offered service. For example, metadata applies to content- and file-sharing applications: metadata describes the content stored across the nodes of the peer community and may be consulted to determine the location of the desired information.
III.3.2.2 P2P Service Publication / Discovery
Publication of a p2p service refers to the sharing of the service. In the area of p2p systems there are no standards for the publication of p2p services. For example, a document is published on a peer in order to be shared in a p2p file-sharing system, or a service object is loaded into a lookup server. Discovery of a service refers to the discovery of a resource, e.g. a peer or a file. Peers perform discovery in a variety of ways, including multicasts, inquiring about services that other peers know about, and using hubs, known as rendezvous, to act as meeting places for peers with similar interests. There are three common p2p algorithms used for the publication and discovery of p2p services [MKL 2003]:
- Centralized directory model. This model was made popular by Napster. The peers connect to a central directory where they publish information about the content they offer for sharing.
Upon request from a peer, the central index will match the request with the best peer in its directory, and a file exchange will then occur directly between the two peers. This model requires some managed infrastructure (the directory server), which hosts information about all participants in the community; this can cause the model to show some scalability limits.
- Flooded requests model. The flooding model is different from the central-index one. This is a pure P2P model in which no advertisement of shared resources occurs. Instead, each request from a peer is flooded (broadcast) to the directly connected peers, which themselves flood their peers, etc., until the request is answered or a maximum number of flooding steps (typically 5 to 9) occurs. This model is typically used by Gnutella and requires a lot of network bandwidth.
- Document routing model and Distributed Hash Tables. The document routing model was first used by Freenet. Each peer is assigned a random ID and also knows a given number of peers. When a document is published on such a system, an ID is assigned to the document based on a hash of the document's contents and its name. Each peer will then route the document towards the peer with the ID most similar to the document ID, until the nearest peer ID is the current peer's ID. When a peer requests the document from the P2P system, the request will go to the peer with the ID most similar to the document ID until a copy of the document is found. Then the document is transferred back to the request originator, while each peer participating in the routing keeps a local copy. In structured p2p systems, five main algorithms that have implemented an improved form of the document routing model based on Distributed Hash Tables are: Chord [SMKKB 2001], CAN [RFHKS 2001], Tapestry, Pastry [RD 2001] and P-Grid [A 2001]. The goals of each algorithm are to reduce the number of P2P hops needed to locate a document of interest and to reduce the amount of routing state that must be kept at each peer. Each of these algorithms either guarantees logarithmic bounds with respect to the size of the peer community, or argues that logarithmic bounds can be achieved with high probability.
A more recent approach to discovery is semantic routing: routing based on keywords or other kinds of metadata, rather than the file hashes on which document routing is based. Examples of semantic routing can be found in NeuroGrid [Jos 2002], which maintains routing tables that associate nodes with query keywords and updates them on the basis of user feedback, and in the LimeWire proposal [Rohrs 2002], which adds query routing to the Gnutella network.
III.3.2.3 P2P Service Composition
In many cases it would be useful to combine p2p services that may be offered by different p2p systems in order to perform a more complex task. Composition refers to services that can be produced by the combination of different p2p services. For example, one may want to communicate with everyone one can reach in different Instant Messaging (IM) systems. Typically, to do that, a user must maintain accounts and run clients from many IM systems at the same time. In order for two or more services to be able to coordinate and compose another service, the different p2p systems should be able to interoperate. Although some efforts have been made towards improved interoperability, interoperability is still not supported; hence, composition of p2p services is still not feasible. Users who want to combine p2p services offered by different p2p systems have to run these systems at the same time, which is difficult as the necessary resources are not always available.
III.3.2.4 P2P Service Management
Management in p2p systems has two different dimensions:
- Managing the underlying p2p infrastructure.
It includes the discovery of other peers in the community and location and routing between those peers. A number of factors influence the design of discovery algorithms. For example, mobile wireless devices can discover other peers based upon their range of communication [RHH 2001]. Protocols built for desktop machines often use other approaches, such as centralized directories. Location and routing algorithms generally try to optimize the path of a message travelling from one peer to another. Another important aspect of p2p systems management is the self-organization that is needed in a p2p system to improve scalability, fault resilience, the intermittent connection of resources, and the cost of ownership. There are a number of academic systems and products that address self-organization.
- Managing the marketplace of services. True market mechanisms are needed for managing the market of services. The traditional way to do this is the pricing of services. On the other hand, rules which the market participants have to follow are a suitable alternative and have to be supported; e.g., you can use a service if you also provide one [GHMS 2003]. These mechanisms are described in Section III.3.2.6, Billing for P2P Systems.
III.3.2.5 Security in P2P Systems
P2P systems share most of their security needs with common distributed systems. Some of the security issues in P2P systems include availability, file authenticity, anonymity, access control and trust establishment. Security technologies that are used include encryption, digital digests, signatures, the SSL protocol, certificates, trust chains between peers and shared objects, and session-key exchange schemes. New security requirements appeared with P2P systems [MKL 2003]:


- Multi-key encryption. This is used to protect a shared object, as well as the anonymity of its author, publishing peer and hosting peer, in file-sharing systems like Publius. The security scheme chosen by the Publius developers is based on a (public key, multiple private keys) asymmetric encryption mechanism derived from A. Shamir's shared-secret encryption method [S 1979].
- Sandboxing. Some distributed-computing P2P systems require downloading code onto peer machines. It is crucial to protect the peer machines from potentially malicious code, and to protect the code from a malicious peer machine. Protecting a peer machine typically involves sandboxing, which refers to techniques that isolate external code in its own protection domains.
- Digital Rights Management. In P2P file-sharing systems it is necessary to be able to protect authors from having their intellectual property stolen. One way to handle this problem is to add a signature to the file that makes it recognizable. This technique is referred to as watermarking or steganography [Katzenbeisser 1999].
- Reputation and Accountability. In P2P systems, it is important to keep track of the reputation of peers to prevent ill-behaved peers from harming the whole system. Reputation requires ways to measure how good or useful a peer is, and as a result some accountability mechanisms need to be devised. A lot of research effort is being made in this area and many trust-based reputation systems for p2p systems have emerged.
- Firewalls. P2P applications inherently require direct connections between peers. However, in corporate environments internal networks are isolated from the external network, leaving applications with reduced access rights. Some mechanisms have nevertheless been devised that enable connections between hidden machines (machines behind a firewall or NAT, inaccessible from the Internet) and Internet machines. These mechanisms have limitations and need to be improved.
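As an illustration of the reputation mechanisms mentioned above, the following toy feedback ledger shows the basic idea. The scoring rule and thresholds are invented for the example and do not correspond to any particular deployed system.

```python
from collections import defaultdict

# Toy reputation ledger: peers rate each transaction as good (+1)
# or bad (-1); a peer's reputation is the average of its ratings.
ratings = defaultdict(list)

def record_feedback(peer_id, positive):
    ratings[peer_id].append(1 if positive else -1)

def reputation(peer_id):
    votes = ratings[peer_id]
    return sum(votes) / len(votes) if votes else 0.0

def is_trusted(peer_id, threshold=0.5, min_votes=3):
    # Require a minimum number of ratings before trusting a peer,
    # so a newcomer cannot be trusted (or condemned) on one vote.
    return len(ratings[peer_id]) >= min_votes and reputation(peer_id) >= threshold

for _ in range(3):
    record_feedback("peerA", positive=True)
record_feedback("peerB", positive=False)

print(is_trusted("peerA"))  # True
print(is_trusted("peerB"))  # False
```

Real systems must additionally defend against collusion and identity churn (a badly rated peer rejoining under a fresh ID), which is where most of the research effort lies.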
III.3.2.6 Billing for P2P Systems
In some classes of p2p systems, like content exchange, there is a need for a framework to incorporate payment services to enable service delivery via the p2p system. Billing can also be used as an accountability measure. In this case, peers are imagined to exchange tokens (currency) of a value equal to their consumption of resources. Thus the change in the number of tokens that a peer holds reflects their consumption or contribution. One early implementation of such a scheme was Mojo Nation [MojoNation 2001]. In this system, peers earned mojo by donating resources to the community and spent them when consuming those resources. Thus, peers needed to maintain a balance between contribution and consumption. There are various types of payment schemes used in p2p systems. We use the term micropayment for pricing schemes that use small-value individual payments, and the term macropayment for more complex schemes that allow large sums to be paid in a single transaction. Digital cash refers to both of these types of payments. Micropayment schemes can be non-fungible (they do not offer any real redeemable value, e.g. they perform some computationally difficult problem) or fungible (the payment holds some redeemable value). Digital cash can be either anonymous or identified. Anonymous schemes, which do not reveal the requester's identity, are the digital equivalent of real cash: the electronic coin itself is worth some amount. Identified (non-anonymous) schemes are the digital equivalent of debit or credit cards: the requester sends a promise of payment that will be honoured by a financial institution. Representative identified micropayment schemes are PayWord and


MicroMint [RS 1996]. A representative example of an anonymous macropayment scheme is DigiCash's Ecash, which uses a system of anonymous coins to pay for services and can be used for both private and real-world currencies. Some of the challenges that the various pricing schemes have to deal with are security, privacy, and the regulation of prices.
III.3.2.6.1 Performance in P2P Systems
P2P systems aim to improve performance by aggregating the distributed storage capacity and computing cycles of devices spread across a network. Because of the decentralized nature of these models, performance is influenced by three types of resources: processing, storage and networking. There are three key approaches to optimizing performance: replication, caching and intelligent routing.
- Replication. Replication puts copies of files/objects closer to the requesting peers, thus minimizing the connection distance between the peers requesting and providing the objects. Changes to data objects have to be propagated to all the object replicas. In combination with intelligent routing, replication helps to minimize the distance delay by sending requests to closely located peers. Replication also helps to cope with the disappearance of peers: because peers tend to be user machines rather than dedicated servers, there is no guarantee that a peer won't be disconnected from the network arbitrarily.
- Caching. Caching reduces the path length required to fetch a file/object and therefore the number of messages exchanged between the peers. In Freenet, for example, when a file is found and propagated to the requesting node, the file is cached locally in all the nodes on the return path. The object replicas can be used for load balancing and latency reduction.
- Intelligent routing and network organization. To fully realize the potential of P2P networks, it is important to understand and explore the social interactions between the peers.
A lot of work has been done in this area (the small-world phenomenon [Milgram 1967], the power-law distribution of P2P networks [Adamic et al. 2001], the determination of good peers based on interests [Ramanathan et al. 2001]). A number of academic systems, such as OceanStore and Pastry, improve performance by proactively moving the data in the network. The advantage of these approaches is that peers decide whom to contact and when to add/drop a connection based on local information only.
III.3.3 Grid Services
As already addressed in section 1.4, the service-oriented Grid approaches are undergoing a conceptual change from the OGSA/OGSI standards to WSRF. The materialization of the OGSA requirements is now evolving from OGSI (Open Grid Service Infrastructure) to WSRF (Web Service Resource Framework). In WSRF, Grid Services are now called WS-Resources. The WS-Resource construct has been proposed as a means of expressing the relationship between stateful resources and Web services. The WSRF specifications allow the programmer to declare and implement the association between a Web service and one or more stateful resources. They describe the means by which a view of the state of the resource is defined and associated with a Web service's description, forming the overall type definition of a WS-Resource. They also describe how the state of a WS-Resource is made accessible through a Web service interface, and define related mechanisms concerned with WS-Resource grouping and addressing.

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

In the following sections we will focus on the requirements that both OGSI and WSRF try to meet through OGSA, rather than on the technical details, in order to address description, publishing/discovery, composition, management, security, billing, QoS and notification. The construction of a computational Grid, and other kinds of Grid systems, requires facilities common to general distributed computing architectures: for example, a mechanism for identifying information by globally understood references which can be passed from one system to another, and a means of subscribing to updates on information sources as the information changes. However, Grids make extreme demands on distributed programming because they are typically large-scale, and they exploit wide-ranging networks consisting of a variety of protocols and systems which may span organizational boundaries. Some of these issues are addressed by standards which are not part of the scope of OGSI: interoperability amongst heterogeneous systems is addressed by Web Services standards, while security issues relevant to widely-interconnected systems are being addressed by groups within the Global Grid Forum (GGF). OGSI exploits these standards, adding to them where necessary, and focuses on the mechanisms that enable resources to be located, managed and exploited at large scale. To summarize briefly, the main requirements addressed by OGSI are:

Defining a framework which establishes a necessary common basic behavior, while allowing wide scope for extension and refinement (such as optimization) and specialization to particular uses.

As part of the basic behavior, enabling systems to describe their own interfaces (a facility known as introspection) as a means of flexibly enabling extensions, and to describe their underlying capabilities, such as the speed of a CPU or the currency or progeny of a database.
The particular vocabularies for comparing capabilities are not part of OGSI, but the framework for identifying, describing, querying and modifying the descriptive information or other state of the system is central. The identification and description of the state which is the target of messages also enables a standard approach, called statefulness, to managing interactions between systems and their users.

As part of the basic behavior, establishing a system of communication of information about systems which requires the least synchronization amongst them while still providing a useful aggregation of information and behavior. This attribute of OGSI is known as soft-state; it exploits caching of information for a limited period, which can be extended by explicit requests from its users.

As one aspect of statefulness and soft-state, describing a basic style of behavior which gives artifacts (identities, information, resource allocations) an explicit point of creation and an explicit, limited but extendable lifetime. This provides a natural recovery mechanism for resources in the case of failed processes.
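As an illustration of the soft-state idea, the following Python sketch (all names and data structures are invented for illustration; this is not the OGSI interface) shows a registry whose entries carry an explicit, extendable expiry time. Stale entries simply disappear, without any coordination with a publisher that may have failed:

```python
# Illustrative soft-state registry: entries have an explicit, extendable
# lifetime; expired entries are forgotten without coordination.
import time

class SoftStateRegistry:
    def __init__(self):
        self._entries = {}  # name -> (value, expiry timestamp)

    def publish(self, name, value, lifetime):
        self._entries[name] = (value, time.time() + lifetime)

    def refresh(self, name, extra):
        # Explicit request from a user extends the lifetime.
        value, expiry = self._entries[name]
        self._entries[name] = (value, expiry + extra)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None or entry[1] < now:   # expired -> recovered silently
            self._entries.pop(name, None)
            return None
        return entry[0]

reg = SoftStateRegistry()
reg.publish("cpu-farm", "gsh://example.org/cpu", lifetime=10)
assert reg.lookup("cpu-farm") == "gsh://example.org/cpu"
# Without a refresh, the entry silently disappears after its lifetime:
assert reg.lookup("cpu-farm", now=time.time() + 11) is None
```

The point of the sketch is that failure recovery requires no distributed agreement: a publisher that crashes simply stops refreshing, and its entries expire.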

III.3.3.1 Grid Service Description


The Services Description model defined by OGSA is based on an extension of the WSDL concepts. Grid Service Descriptions can use portType extension to define standard operations through the use of the GWSDL extends attribute. For example, the Counter Grid Service begins:

<ogsi:portType name="counterPortType" extends="ogsi:GridService">

This causes the inclusion of operations and ServiceData Declarations (SDD) from the ogsi:GridService portType. All Grid Service descriptions include these SDD definitions, which are, in summary:

The identity (called the Grid Service Handle, GSH) of the Grid Service Instance. This identity uniquely identifies an Instance, and its associated state.

The termination time of the Instance. This can be set when the Instance is created and may sometimes be modified by users or managers of the Instance.

The names of all the portTypes that the Service implements.

The identity of the Factory that created the Instance (or xsi:nil if the instance was not created by a Factory).

The operations provided by the ogsi:GridService portType allow a client to query and modify the state reported by ServiceData, to specify the earliest and latest desired termination time and, finally, to destroy the Instance.

III.3.3.2 Grid Service Publishing/Discovery

Applications require mechanisms for publishing and discovering available services and for determining the characteristics of those services so that they can configure themselves and their requests to those services appropriately. OGSI addresses this requirement by defining a standard representation for service data, that is, information about Grid service instances, which is structured as a set of named and typed XML elements called service data elements, encapsulated in a standard container format. The standard operation FindServiceData is used for retrieving service data from individual Grid service instances. On the other hand, WSRF uses resource property elements that are almost identical to service data elements in OGSI, but introduces a set of more specific operations [WS-ResourceProperties] for getting and setting resource properties: single-element get, multi-element get/set, and XPath query. Thus, thanks to the XPath query in WS-ResourceProperties, the functionality provided by the WSRF for accessing resource property elements is a superset of that provided by OGSI. Although OGSI does not specify a Registry Service, a Registry can be described by using or extending the ServiceGroup or ServiceGroupRegistration portTypes. A Registry contains references to other Services, and a Requester can search for a Service that it needs by supplying the Registry with details of the Interface that it needs. The Registry responds with details of all the Services it knows about that match the search criteria. For example, a Requester that needs to create a Counter Service may search for a CounterFactory portType and receive back information, including a Handle, which identifies the Factory.
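The contrast between the two query styles can be sketched in Python; this is a hypothetical illustration in which a plain dict stands in for the XML service data / resource properties document, and the method names merely mirror (rather than reproduce) the OGSI FindServiceData operation and the WSRF property operations:

```python
# Hypothetical sketch: OGSI-style generic service data query versus
# WSRF-style specific get/set operations over the same state.
class ResourceProperties:
    def __init__(self, **props):
        self._doc = dict(props)  # stands in for the XML properties document

    # OGSI style: one generic operation, queried by element name.
    def find_service_data(self, name):
        return self._doc[name]

    # WSRF style: a set of more specific operations.
    def get_property(self, name):            # single-element get
        return self._doc[name]

    def get_multiple(self, names):           # multi-element get
        return {n: self._doc[n] for n in names}

    def set_property(self, name, value):     # set
        self._doc[name] = value

counter = ResourceProperties(value=0, terminationTime="2004-12-31T00:00:00Z")
counter.set_property("value", 42)
print(counter.find_service_data("value"))                 # 42
print(counter.get_multiple(["value", "terminationTime"]))
```

A real WSRF implementation additionally supports XPath queries over the properties document, which is what makes its query functionality a superset of OGSI's.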


Additionally, the main concepts for handling services concern the management of service identity, by providing concepts for Handles, References and Locators.

Grid Service Handles

Grid services are dynamic and stateful, therefore we need a way to distinguish one dynamically created service instance from another. When a Grid Service Instance is created, it is given a globally-unique identity, known as a Grid Service Handle (GSH), which can be used by consumers to refer to the state associated with that Instance and distinguishes a specific Grid service instance from all other Grid service instances that might exist or be created. A GSH is a standard Uniform Resource Identifier (URI): it indicates how to locate the Instance, but not how to communicate with it. Before the GSH can be used, it must be resolved into a Grid Service Reference. During a Grid service's lifetime, all the instance-specific information required to interact with a specific service instance, while keeping the GSH unchanged, is encapsulated into a single abstraction called a Grid Service Reference (GSR). This strategy increases flexibility from the perspective of the Grid service providers, but raises the problem of obtaining a valid GSR once the GSR returned by the service creation operation expires. To solve this, OGSI provides the HandleResolver mechanism to support client resolution of a GSH into a GSR. In contrast, the WSRF builds on the recently published WS-Addressing specification to achieve the same goals in slightly different ways [WS-Addressing]. First, it adopts the endpoint reference construct defined in the WS-Addressing specification as XML syntax for identifying Web service endpoints. It then defines a particular usage pattern for endpoint references, the implied resource pattern, in which the reference properties field of the endpoint reference contains an identifier of a specific stateful resource associated with the Web service.
These two pieces of information are the logical equivalent of the addressing content of the OGSI-defined GSR. Second, rather than distinguishing between two fixed types of names, immutable GSHs and potentially mutable GSRs, it introduces (in WS-RenewableReferences) a mechanism for associating a resolver service with any endpoint reference. One small feature of an OGSI GSH is its URI syntax, which makes it a short, human-readable name for a service. There is no equivalent feature in the WS-Resource Framework; instead, various forms of naming services can be built on top of the WSRF, which can provide whatever form of name is desired and map it to endpoint references. WSRF provides virtually all functionality present in OGSI, and has the advantages of leveraging WS-Addressing, allowing for arbitrary hierarchies of resolver services, and allowing its constituent specifications to be used independently of each other.

Grid Service References

Although a Grid Service Handle uniquely identifies an Instance, the Handle by itself does not provide all the information needed to allow the Requester to send messages to the Instance. What is needed is a description of a binding that can be used to translate application-level operations into network messages. In OGSI, these Handle-specific bindings are called Grid Service References (GSRs), and they can be described in WSDL. A Reference is created by passing a Handle to a special service called a Handle Resolver. In the case of the Counter Service example, a GSR is simply the WSDL description of the Service with the service, port and binding elements of the information fully specified, including an address to which the client should send requests. To be useful to a client, the binding description must be
recognised by the client-side infrastructure, which allows the infrastructure to construct and transmit the message containing invocation parameters provided by the client application. The OGSI Specification also allows GSRs which are not described by WSDL. This facility allows distributed programming architectures such as CORBA [8] to be used without change, yet still within the framework of OGSI services. In the case of CORBA, the GSR would be a CORBA Object Reference which is recognisable to clients using CORBA infrastructure. The means by which such a client retrieves a non-WSDL GSR from a Handle Resolver, and the way it is subsequently processed, is beyond the scope of this document.

Unlike a Handle, which is unique for all time, a Reference is a temporary identifier: it has a limited lifetime, and should be refreshed periodically if the client has a long-term need to use the associated Instance. The Reference's "good until" property indicates its expiration time, and while it may continue to work after that time, the client should refresh it to ensure that the most appropriate binding is being used. If the Reference has become invalid (which may happen if the Instance has been relocated to a different server), any attempt to use it will result in a failed operation. A single Handle may resolve to several different References, each with a different set of characteristics. If so, the client can choose which one to use. For example:

A Reference that enables a high-performance binding might be available to clients that are local to the Instance, while clients running on remote systems have to use a lower-performance network binding.

A client that is local to the Instance may use a Reference representing a binding that describes unencrypted transmission, while remote clients are required to use encryption.

If multiple copies of a single Instance exist, the Handle Resolver may return a separate Reference for each copy.
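The resolution behaviour described above can be sketched in Python; the class, method names and locality tags below are invented for illustration and are not taken from the OGSI specification:

```python
# Illustrative sketch of handle resolution: one immutable Handle maps to
# several time-limited References, and the resolver returns the most
# suitable still-valid binding for the asking client.
import time

class HandleResolver:
    def __init__(self):
        # handle URI -> list of (binding, scope, good_until)
        self._refs = {}

    def register(self, handle, binding, scope, lifetime):
        self._refs.setdefault(handle, []).append(
            (binding, scope, time.time() + lifetime))

    def resolve(self, handle, client_site):
        """Return a still-valid Reference, preferring a site-local binding."""
        now = time.time()
        live = [r for r in self._refs.get(handle, []) if r[2] > now]
        if not live:
            raise LookupError("no valid Reference for " + handle)
        preferred = [r for r in live if r[1] == client_site]
        fallback = [r for r in live if r[1] == "any"]
        return (preferred or fallback or live)[0][0]

resolver = HandleResolver()
gsh = "gsh://example.org/counter/1234"
resolver.register(gsh, "shared-memory binding", "siteA", lifetime=300)
resolver.register(gsh, "encrypted SOAP/HTTP binding", "any", lifetime=300)

print(resolver.resolve(gsh, "siteA"))  # high-performance local binding
print(resolver.resolve(gsh, "siteB"))  # remote client gets the network binding
```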
Many handle resolution schemes are possible; the OGSI Specification does not restrict or mandate any particular one, and allows multiple schemes to be used in parallel. For example, a client may prefer a Handle described by a secure resolution scheme such as SGNP [15] to safeguard the integrity of the Reference which is returned.

The HandleResolver Service

It is convenient to the Requester for Services which supply Handles, such as Registry Services, to supply a WSDL document describing GSRs along with the Handle, but the GSRs can also be discovered from a HandleResolver. A HandleResolver Service is a fundamental part of an OGSI Grid. Since Requesters need access to a HandleResolver before they can access any other service, they must be bootstrapped with information about which HandleResolver to use; this is typically done using configuration data. The HandleResolver is used to retrieve up-to-date or otherwise improved References to a Service identified by a Handle. A Requester can use the HandleResolver at any time to obtain the best available binding.

III.3.3.3 Grid Service Composition

It seems that composition of grid services will follow the evolution of standards for web service composition (see section 3.1.3). In general, grid services can be thought of as loosely coupled peers that, either on their own or as part of an interacting group of services, realize the capabilities of Grid systems through implementation, composition, or interaction with other services. For example, to realize the orchestration capability a group of services might be structured such that one set of services in the group drives the orchestration (i.e., acts as the orchestrator), while other services in the group provide the interfaces and mechanisms to be orchestrated (i.e., act as the orchestrated services). A
specific service may implement and/or participate in multiple collections and interactions to realize different capabilities. On the other hand, it is not necessary that all services participate to realize a particular capability.

III.3.3.4 Management in Grid Services

Management in Grid systems has three different dimensions:

Management of the resources themselves (e.g., rebooting a host, or setting VLANs on a network switch)

Management of the resources on the Grid (e.g., resource reservation, monitoring and control)

Management of the infrastructure, which is itself composed of resources (e.g., monitoring a registry service)

OGSA defines an optional Manageability interface that supports a set of manageability operations. Such operations allow potentially large sets of Grid service instances to be monitored and managed from management consoles, automation tools, and so on. At the resource level, resources are managed directly through their native manageability interfaces (for discrete resources, these are usually SNMP, CIM/WBEM, JMX, or proprietary interfaces). Management at this level involves monitoring (i.e., obtaining the state of the resource, which includes events), setup and control (i.e., setting the state of the resource), and discovery.

The Factory and Instance Pattern

The Factory design pattern is commonly used in Object-Oriented software systems to enable the creation of multiple, similar artifacts. An OGSI Factory Service is a Grid Service that is used by a client to create other Grid Service Instances. When a client needs to create a new instance of a particular Grid Service, it locates a corresponding Factory Service, invokes its createService operation, and receives a unique identifier that it can use to access the newly-created Instance. A Factory Service may choose to implement either the Factory portType defined by OGSI or another portType which serves some more specialized function.

Lifetimes of Instances

In many cases, the artefacts (the Grid Service Instances) that are created to accomplish a task are only needed for a limited time. They may represent physical resources such as storage allocations, processor and bandwidth reservations, or the right to use sensors and databases. In such cases, an explicit lifetime can be associated with an Instance when it is created. Requesters may sometimes need to be aware of the lifetimes of the Instances they use or create. If the initial lifetime proves insufficient, they may periodically prolong the lifetime of an Instance by extending its terminationTime.
The lifetime mechanism is particularly significant in the case of a failure in the progress of a task: when their lifetimes expire, the Instances can be safely destroyed, and the resources they represent can be recycled. This convention is essential in a large-scale, distributed system such as a Grid, where it is not possible to take into account all the possible consumer dependencies when sweeping up Service Instances.
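A minimal Python sketch of the Factory and Instance pattern with explicit, extendable lifetimes follows; the class and method names are invented for illustration and do not reproduce the actual OGSI Factory portType:

```python
# Illustrative Factory: createService returns a Handle for a fresh Instance
# with an explicit termination time, which requesters may extend; a sweep
# destroys expired Instances and recycles their resources.
import itertools
import time

class CounterInstance:
    def __init__(self, handle, termination_time):
        self.handle = handle
        self.termination_time = termination_time
        self.value = 0

class CounterFactory:
    def __init__(self):
        self.instances = {}
        self._ids = itertools.count(1)

    def create_service(self, lifetime):
        """createService: returns the Handle of a newly-created Instance."""
        handle = "gsh://example.org/counter/%d" % next(self._ids)
        self.instances[handle] = CounterInstance(handle, time.time() + lifetime)
        return handle

    def extend(self, handle, extra):
        # Requester prolongs the Instance's terminationTime.
        self.instances[handle].termination_time += extra

    def sweep(self, now=None):
        """Destroy expired Instances; no consumer coordination needed."""
        now = time.time() if now is None else now
        expired = [h for h, inst in self.instances.items()
                   if inst.termination_time < now]
        for h in expired:
            del self.instances[h]
        return expired

factory = CounterFactory()
h = factory.create_service(lifetime=60)
factory.extend(h, 60)                                # lease prolonged
assert factory.sweep(now=time.time() + 90) == []     # still alive
assert factory.sweep(now=time.time() + 200) == [h]   # expired and recycled
```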

The Factory Service


The Factory Service was described earlier, but is included here for completeness. A Requester invokes a createService operation on a Factory, and receives in response Handles and/or References for the newly-created Instance.

III.3.3.5 Security in Grid Services

Standard, secure mechanisms are required to protect Grid systems while supporting safe resource sharing across administrative domains [GWD-I]. Security requirements include:

Authentication and authorization: Authentication mechanisms are required so that the identity of individuals and services can be established. Service providers must implement authorization mechanisms to enforce policy over how each service can be used. The Grid system should follow each domain's security policies and may also have to take users' security policies into account. Authorization should accommodate various access control models and implementations.

Multiple security infrastructures: Distributed operation implies a need to integrate and interoperate with multiple security infrastructures. Grid Services need to integrate and interoperate with existing security architectures and models.

Perimeter security solutions: Applications may have to be deployed outside the client domain. There is a need for standard, secure mechanisms that can be deployed to protect institutions while also enabling cross-domain interaction without compromising local control of protection measures such as firewall policy and intrusion-detection policy.

Isolation: Various kinds of isolation must be ensured, such as isolation of users, performance isolation, and isolation between content offerings within the same Grid system.

Delegation: Mechanisms that allow for delegation of access rights from requestors to services are required. The rights transferred through delegation are scoped only to the intended job and should have a limited lifetime, to minimize the risk of misuse of delegated rights.
Security policy exchange: Service requestors and providers should be able to dynamically exchange security policy information to establish a negotiated security context between them.

Intrusion detection, protection, and secure logging: Strong monitoring is required for intrusion detection and identification of misuses, malicious or otherwise, including virus or worm attacks. It should also be possible to protect critical areas or functions by migrating attacks away from them.

OGSA security architectural components support, integrate, and unify popular security models, mechanisms, protocols, platforms, and technologies in a way that enables a variety of systems to interoperate securely. The components must integrate with existing security architectures and models across platforms and hosting environments. This means that the architecture must be implementation-agnostic, so that it can be instantiated in terms of any existing security mechanism (e.g., Kerberos, PKI); extensible, so that it can incorporate new security services as they become available; and interoperable with existing security services. Services must also traverse multiple domains and hosting environments, thus introducing the need for interoperability at multiple levels: protocol, policies and identity. In addition, certain situations can make it impossible to establish trust relationships among sites prior to application execution. Given that the participating domains may have different security infrastructures (e.g. Kerberos or PKI), it is necessary to realize the required trust relationships through some form of federation among the security mechanisms.

III.3.3.6 Billing in Grid Services

Billing services are critical for the success of Grid Services. This will include the ability for schedulers to interact with resources to establish prices, as well as for resources to interact with accounting and
billing services. Billing is not yet a service defined by OGSA, but it is a specific Grid topic which has to be solved. In the future, billing will use auditing and/or metering logs and other data to generate bills or chargeback; mechanisms under development for Web Services will be exploited.

III.3.3.7 QoS in Grid Services

Key QoS dimensions include, but are not limited to, availability, security, and performance. QoS requirements should be expressed using measurable terms that can be captured in Service Level Agreements (SLAs). QoS assurance requirements include:

Service level agreement: QoS should be represented by agreements which are established by negotiation between service requester and provider prior to service execution. Standard mechanisms should be provided to create and manage agreements.

Service level attainment: If the agreement requires attainment of a service level, the resources used by the service should be adjusted so that the required QoS is maintained. Therefore, mechanisms for monitoring service quality, estimating resource utilization, and planning for and adjusting resource usage are required.

Migration: It should be possible to migrate executing services or applications to adjust workloads for performance or availability.

Services such as job execution and data services must provide the agreed-upon QoS.

III.3.3.8 Data/Information Model in Grid Services

With the WSRF specification, the mechanism of Web Services Resource Properties and a concept for managing service state are introduced. The Resource Properties specification standardizes the means by which the definition of the properties of a WS-Resource may be declared as part of the Web service interface. The declaration of the WS-Resource's properties represents a projection of, or a view on, the WS-Resource's state. The projection is defined in terms of a resource properties document.
This resource properties document serves to define a basis for access to the resource properties through the Web service interface. This specification also defines a standard set of message exchanges that allow a requestor to query or update the property values of the implied resource. The set of properties defined in the resource properties document, and associated with the service interface, defines the constraints on the valid contents of these message exchanges.

The WSRF modelling of resources introduces a set of conventions intended to formalize service interactions with state. The motivation for these conventions lies in the realization that there are many ways of representing state in Web services, but there is no agreed-upon convention that would promote interoperability among Web services and their interactions with stateful resources. Even those Web service implementations commonly described as stateless frequently allow for the manipulation of state, i.e., data values that persist across, and evolve because of, Web service interactions. For example, an online airline reservation system must maintain state concerning flight status, reservations made by specific customers, and the system itself: its current location, load, and performance. Web service interfaces that allow requestors to query flight status, make reservations, change reservation status, and manage the reservation system must necessarily provide access to this state. In what is termed the WS-Resource approach, state is modeled as stateful resources, codifying the relationship between Web services and stateful
resources in terms of the implied resource pattern, a set of conventions on Web services technologies, particularly XML, WSDL, and WS-Addressing [WS-Addressing]. WSRF describes a WS-Resource in terms of a stateful resource and an associated Web service. It also describes an approach for making the properties of a WS-Resource accessible through its Web service interface, and for managing and reasoning about a WS-Resource's lifetime. WSRF contributes to an ongoing debate within the Web services community concerning whether and how Web services should allow for the representation of state. In this debate, one view is that Web services have no notion of state and interactions with Web services are stateless, with contextualisation proposed as a way of modeling stateful interactions; others have argued that the critical role that state plays in distributed computing requires that it be addressed within the Web services architecture. The WS-Resource construct may help reconcile these two positions, by showing how the relationship between Web services and state can be formalized in a straightforward manner that builds on other Web services specifications.
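The implied resource pattern can be illustrated with a small sketch in which the service logic itself is stateless and every request is dispatched against the stateful resource identified in the endpoint reference; all names and data below are hypothetical, and plain dicts stand in for the WS-Addressing XML structures:

```python
# Illustrative implied resource pattern: the endpoint reference carries a
# resource identifier in its reference properties; the (stateless) service
# operation uses it to locate the stateful resource it acts upon.

# Stateful resources live outside the service implementation:
reservations = {
    "res-17": {"flight": "AF1680", "status": "confirmed"},
    "res-42": {"flight": "BA904", "status": "waitlisted"},
}

def make_endpoint_reference(resource_id):
    """WS-Addressing-style endpoint reference, simplified to a dict."""
    return {"address": "http://example.org/ReservationService",
            "reference_properties": {"resource_id": resource_id}}

def get_status(endpoint_reference):
    """A 'stateless' operation: all state comes from the implied resource."""
    rid = endpoint_reference["reference_properties"]["resource_id"]
    return reservations[rid]["status"]

epr = make_endpoint_reference("res-42")
print(get_status(epr))   # waitlisted
```

The design point is that the client never names the resource in the message body; the pairing of service and resource is implied by the endpoint reference it holds.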


III.4 State of the Art in Research projects

Some of the ongoing research projects in the area of service-oriented computing are the following:

Akogrimo - Access to Knowledge through Grid in a Mobile World
Funding source: European Union (FP6)
Web page: www.akogrimo.org, www.mobilegrid.org
Brief description: Mobility has become a central aspect of life for people in business, education and leisure. Related mobile 3G network infrastructures and user communities have surpassed corresponding Internet figures. Independent of this development, Grid technology is evolving from a niche market solely addressing the HPC domain towards a framework usable within a broad business context. However, while affecting largely identical complex applications, user and provider domains, the Grid community has been basically mobility-unaware. Akogrimo develops a blueprint of a Mobile Grid Environment and the necessary tools and concepts to demonstrate Next Generation Mobile Grid Applications.

ALVIS - Superpeer Semantic Search Engine
Funding source: European Union (FP6)
Web page: http://www.alvis.info/
Brief description: ALVIS is an EU Sixth Framework Programme (FP6), Information Society Technologies (http://www.cordis.lu/ist/) project, which conducts research in the design, use and interoperability of topic-specific search engines, with the goal of developing an Open Source prototype of a distributed, semantic-based search engine.

A-MUSE: Architectural Modelling Utility for Service Enabling
Funding source: Dutch Ministry of Economic Affairs through the Freeband Programme
Web page: http://www.freeband.nl/project.cfm?id=489
Brief description: The goal of this project is to develop architectural and methodological support for the development of future mobile, attentive, context-aware and personalized services. The project combines and specialises a number of developments that can contribute to this goal: model-driven design, as a way to raise the level of abstraction at which services are specified; service-oriented computing as a new paradigm for distributed computing; and semantic services as a solution for making service provisioning more 'intelligent'.

ASG: Adaptive Services Grid
Funding source: European Union (FP6)
Web page: http://asg-platform.org/cgi-bin/twiki/view/Public
Brief description: The goal of Adaptive Services Grid (ASG) is to develop a proof-of-concept prototype of an open development platform for adaptive services discovery, creation, composition, and enactment. ASG provides the integration of its sub-projects in the context of an open platform, including tool development by small and medium-sized enterprises. Based on semantic specifications of requested services by service customers, ASG discovers appropriate services, composes complex processes and, if required, generates software to create new application services on demand. Subsequently, application services will be provided through the underlying computational grid infrastructure based on adaptive process enactment technology.

CBSEnet
Funding source: European Union
Web page: http://www.cbsenet.org
Brief description: CBSEnet is a network which aims to create a European-wide forum for the exchange of information between researchers and developers working in the area of CBSE, to suggest how CBSE technologies could improve software engineering processes in different domains, and to propose future research requirements for the development and deployment of CBSE technologies.

Community Grid for PDA
Funding source: Community Grids Lab, Indiana University
Web page: http://grids.ucs.indiana.edu/ptliupages/projects/carousel/
Brief description: The Community Grid for PDA project is developing an environment supporting ubiquitous access to Community Grid systems from various small wireless devices, such as smartphones. These devices run Microsoft Windows CE or PalmOS and work with traditional desktop computers.

GRIP: The GRid Interoperability Project
Funding source: European Union (FP5)
Web page: http://www.grid-interoperability.org/index.html
Brief description: The GRid Interoperability Project (GRIP) is a 2-year research project, funded in part by the European Union (EU), to realise the interoperability of Globus and UNICORE and to work towards standards for interoperability in the Global Grid Forum. Its objectives are: 1. To develop software to facilitate the interoperation of UNICORE and Globus, combining the unique strengths of each system. 2. To build and demonstrate biomolecular and meteorological inter-grid applications. 3. To contribute to and influence international grid standards. GRIP had its final review on the 6th February 2004.

MMAPPS: Market Management of Peer-to-Peer Services
Funding source: European Union (FP5)
Web page: http://www.mmapps.org/
Brief description: MMAPPS is an EU collaborative project developing a framework to provide incentives for cooperation in P2P applications beyond simple file-sharing.

NaradaBrokering
Funding source: Community Grids Lab, Indiana University
Web page: http://grids.ucs.indiana.edu/ptliupages/projects/narada/
Brief description: The NaradaBrokering project at the Community Grids Lab is an open source project that researches fundamental issues pertaining to distributed middleware systems. These include, among others, issues of efficient routing, support for complex interactions, robustness, resilience, ordering, security and trust. NaradaBrokering aims to provide a unified messaging environment that integrates grid services, web services, peer-to-peer interactions and traditional middleware operations.

OWL-S Matcher (OWLSM)
Funding source: Technical University of Berlin (Technische Universitaet Berlin)
Web page: http://www.ivs.tu-berlin.de/Projekte/owlsmatcher/
Brief description: The matcher demonstrates an algorithm that outputs different degrees of matching for individual elements of OWL-S descriptions of Web services. The implementation is provided as a Java tool with a Swing-based GUI, which allows the user to select a pair of OWL-S descriptions for requester and provider. Distribution is licensed under the LGPL and comes as an Apache Maven software project.

P2P_Architect
Funding source: European Union (FP5)
Web page: http://www.atc.gr/p2p_architect/
Brief description: The main objective of the P2P_Architect project is to enable software developing organisations to build dependable software systems conforming to a P2P architecture.

SeCSE: Service-Centric Systems Engineering
Funding source: European Union (FP6)
Web page: http://www-hcid.soi.city.ac.uk/rhSecse.html
Brief description: The primary goal of the SeCSE project is to create methods, tools and techniques for system integrators and service providers to support the cost-effective development and use of dependable services and service-centric applications. The project brings together IT companies, developers and research labs, and is driven by emerging needs in the automotive and telecommunications industries. Technically, SeCSE focuses on four areas for the engineering of software systems: specification, discovery, design, and management of services, for which new techniques and tools will be delivered. These tools and techniques will be integrated to provide a SeCSE development environment and, based on visionary scenarios from application partners, domain-specific adaptations.

Serviam
Funding source: Swedish Agency for Innovation Systems (Vinnova)
Web page: http://www.serviam.se
Brief description: The Serviam project contributes to the development of service-based systems by managing knowledge about fundamental decision support, architectural patterns, technical guidelines and maintenance processes for Web services.

SODIUM
Funding source: European Union (FP6)
Web page: http://www.atc.gr/sodium
Brief description: SODIUM is an EU Sixth Framework Programme (FP6) project, which aims to provide tools, standards-based languages and a methodology supporting the unified discovery, invocation and composition of heterogeneous services (web, grid and p2p services).

UniGrids
Funding source: European Union (FP6)
Web page: http://www.unigrids.org/
Brief description: The UniGrids project will develop a Grid Service infrastructure compliant with the Open Grid Service Architecture (OGSA). It is based on the UNICORE Grid software initially developed in the German UNICORE and UNICORE Plus projects. UniGrids is a follow-on project from GRIP (Grid Interoperability Project), which was funded for 2 years by the EU to realise the interoperability of Globus and UNICORE and to work towards standards for interoperability in the Global Grid Forum.


III.5 Tools/Platforms for service development

This chapter introduces current popular tools and platforms for Web Services, P2P Services and Grid Services respectively. While this is not a complete list of all available tools and platforms, the selected candidates are essential to know when developing services. From the perspective of Web Services development, we present J2EE (Sun), .NET (Microsoft), the HP Web Services Platform and IBM WebSphere. XL, a new platform for Web Services developed by XQRL, is discussed as well. Next, a number of the best-known candidates competing to become future P2P platforms are described, namely JXTA (Sun), the .NET Framework (Microsoft), NextPage's NXT 3 platform, Groove Networks and the Magi Open-Source Infrastructure. Moreover, a list of tools for supporting architecture-based P2P applications is provided. In the last section we provide an introduction to the tools and platforms for Grid Service development. This is a relatively new field where research activity has grown rapidly; among this research, the Globus project attracts the most attention. The Open Grid Service Infrastructure (OGSI) and its successor, the Web Service Resource Framework (WSRF), are proposed specifications for implementing Web Services relevant to grid computing. The corresponding implementations, such as OGSI.NET and WSRF.NET, are described.

III.5.1 Tools and Platforms for Web Services development

In this section we briefly describe the following Web Service platforms and tools:
1. Sun (J2EE, Open Net) (see also section IV.2.3)
2. Microsoft (.NET) (see also section IV.2.4)
3. HP (e-services)
4. IBM (Web Services)
5. XL

III.5.1.1 J2EE

J2EE (java.sun.com/j2ee/overview.html) has historically been the architecture for building server-side deployments in the Java programming language [VR 2004]. It has been extended to allow existing server-side enterprise Java components to become Web services, and it also specifies how a J2EE client container can invoke Web services.
The technologies for both purposes have existed for a while, and the new J2EE specifications rely on those existing APIs for Web services support. New specifications added to the existing technologies are: a set of interoperability requirements, and a programming and deployment model (deployment descriptor) for Web service integration. Web services in J2EE are considered as an implementation of one or more interfaces defined by a WSDL document. The operations described in WSDL are first mapped to Java methods following the JAX-RPC [http://java.sun.com/xml/jaxrpc/overview.html] specification's WSDL-to-Java mapping rules. Once a Java interface corresponding to a WSDL file is defined, one can implement that interface's methods in a stateless session bean running in the EJB container or as a Java class running in the J2EE servlet container. Finally, the respective container one has chosen is assigned to listen to incoming SOAP requests and map those requests to the respective implementation (EJB or


Servlet). To process incoming SOAP invocations, J2EE 1.4 mandates the JAX-RPC runtime as an additional J2EE container service. The advantage of adopting J2EE-based Web services is that one does not need to learn new programming techniques or tools. Rather, one can use existing J2EE experience and expose EJB components or servlets as Web services without making programming changes to the existing code.

III.5.1.2 .NET

Microsoft .NET is a product suite that enables organizations to build smart, enterprise-class web services [J2EE&NET]. Microsoft .NET offers language independence and language interoperability by adopting the Microsoft Intermediate Language (MSIL or IL). This is one of the most fundamental aspects of the .NET platform. Microsoft .NET is largely a rewrite of Windows DNA, which was Microsoft's previous platform for developing enterprise applications. Windows DNA includes many proven technologies that are in production today, including Microsoft Transaction Server (MTS) and COM+, Microsoft Message Queue (MSMQ), and the Microsoft SQL Server database. The new .NET Framework replaces these technologies, and includes a web services layer as well as improved language support. ASP.NET plays a critical role in the .NET Web services infrastructure. Simply put, one can create a Web service with ASP.NET by authoring a file with the extension .asmx and deploying it as part of a Web application. Consequently, methods can be invoked by sending HTTP requests to the URL of the ASMX file. ASP.NET inspects the class metadata to automatically generate a WSDL file when requested by the caller. The ASP.NET Web services model assumes a stateless service architecture because it is generally more scalable than a stateful architecture.
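Both the J2EE and ASP.NET models above follow the same basic pattern: an ordinary class method becomes a Web service operation, and the container handles SOAP dispatch. As an illustration of the J2EE side, the following sketch shows a JAX-RPC-style service endpoint interface and implementation. The service and method names are hypothetical, but the shape follows the JAX-RPC mapping rules mentioned in section III.5.1.1: the endpoint interface extends java.rmi.Remote and each operation declares java.rmi.RemoteException.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical service endpoint interface, in the shape JAX-RPC derives
// from a WSDL portType: it extends java.rmi.Remote and every operation
// declares java.rmi.RemoteException.
interface StockQuotePort extends Remote {
    double getLastTradePrice(String tickerSymbol) throws RemoteException;
}

// The developer implements the interface; in J2EE 1.4 this logic would run
// as a stateless session bean (EJB container) or in the servlet container,
// with the container dispatching incoming SOAP requests to it.
class StockQuoteImpl implements StockQuotePort {
    public double getLastTradePrice(String tickerSymbol) {
        // Placeholder logic; a real service would consult a data source.
        return "SUNW".equals(tickerSymbol) ? 5.25 : 0.0;
    }
}
```

In a real deployment the container, not application code, instantiates the implementation and routes incoming SOAP requests to it according to the deployment descriptor.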
III.5.1.3 HP Web Services Platform

The HP Web Services Platform delivers a modular, standards-based architecture that allows for plug-and-play assembly of XML components for developing, deploying, registering, discovering, and consuming Web services. Included in the HP Web Services Platform are tools, utilities, and a robust run-time environment for exposing new or existing Java objects as Web services, and for deploying these Web services. Additional tools included in the HP Web Services Platform enable customers to automatically register these Web services in public or private Web services registries, as well as to discover relevant Web services offered by other businesses [HP04]. According to the HP Web Services Platform whitepaper, the platform allows businesses to expose their assets as Web services; examples include:
- Software applications, such as an income tax preparation application or an application that computes the appropriate sales tax based on location
- Business processes, such as a purchase order fulfillment service or a service that enables suppliers to be alerted to inventory shortages at their customers' locations
- Computing resources, such as online storage or server capacity
- Content services, such as stock quotes or a service that allows mobile phone users to locate the nearest hotel or restaurant



III.5.1.4 IBM WebSphere

The IBM WebSphere Software Developer Kit for Web Services (WSDK) is an integrated kit for creating, discovering, invoking, and testing Web services. As of August 2004, WSDK V5.1 is the newest version. WSDK V5.1 is designed to address the needs of experienced Java programmers who want to quickly learn how Web services can be created using existing Java components and achieve seamless integration with disparate systems. WSDK V5.1 can be used with the Eclipse IDE. Eclipse provides a graphical interactive development environment with tools for building and testing Java applications. WSDK V5.1 adds tools relating to Web services to the standard Eclipse package, making it more straightforward to build Web services [Websphere51]. The basic characteristics of the WebSphere software platform are summarized as follows:
- Business Integration: Business Integration and Product Information Management
- Foundation & Tools: Open Services Infrastructure (WebSphere Application); Application Development (WebSphere Studio)
- Enterprise Transformation: Enterprise Transformation; WebSphere and zSeries
- Business Portals: Interactive User Experience (WebSphere); Access On Demand (WebSphere Everyplace and WebSphere Voice); Selling and Channel Management (WebSphere Commerce)

III.5.1.5 XL

XL [XL] is a new platform for Web Services. Like WSFL, WSCL, XLang, and WSCI, XL directly supports XML, the other W3C standards, and the Web services paradigm. Like Java, it provides a very powerful programming model; in fact, a great deal of the syntax for imperative constructs (e.g., loops) has been adopted from Java. Since XL supports all W3C standards and communicates with other Web services using messages, applications written in XL can communicate with applications written in other languages (e.g., Java) just as well as with other XL applications. Furthermore, XL is portable (like Java) and it provides high-level programming constructs for routine work (e.g., logging and security). The implementation of XL combines techniques from different fields of computer science, in particular database systems, compiler construction, distributed systems, and data flow machines. XL programs are translated into a core algebra which is in turn optimized and then interpreted in a dynamic and flexible way. With languages like Java and current middleware architectures with many layers, such optimization guarantees cannot be given, because the level of abstraction in the programming model is too low and because calls to library functions must be treated as black boxes and cannot be optimized.

III.5.2 Tools and Platforms for P2P Services Development

Platforms for P2P services development have support for the primary P2P components: naming, discovery, communication, security, and resource aggregation. They have an OS dependency, even though it is minimal. Most P2P systems either run on an open-source OS (Linux) or are based on Windows. There are a number of candidates competing to become the future P2P platform. .NET is a very ambitious one, going beyond P2P to encompass all service support on the client and server side. JXTA is another attempt, taking a bottom-up and strong interoperability approach. Most other


systems also have some level of platform support, such as Groove, covering the enterprise domain, and Magi, covering the handheld devices domain [MKL 2003]. In the following subsections, the best-known platforms to date for P2P applications are described. These are [P2P ARCHITECT, 2002]:
1. Sun Microsystems' JXTA
2. Microsoft's .NET Framework
3. NextPage's NXT 3 platform
4. Groove Networks
5. Magi Open-Source Infrastructure

III.5.2.1 JXTA

Sun Microsystems' Juxtapose group, anticipating users' need to develop well-designed and easy-to-implement P2P applications, has released its open source platform JXTA [JXTA]. JXTA is positioned as a peer-to-peer stack: a wafer-thin layer sitting on top of the operating system or virtual machine, and below actual P2P services and applications. The idea is to provide P2P developers enough core functionality upon which to build, no matter what the application [POC Lab]. With JXTA in place, developers can safely assume that everything one needs for peer-to-peer interaction is on board, in much the same way as the Java Virtual Machine provides a core infrastructure and functionality from which to start. We must not confuse JXTA with a network stack; JXTA explicitly assumes a networking layer beneath the P2P application. JXTA is simply a protocol for inter-peer communication. Each peer is assigned a unique identifier (Peer ID). Each peer belongs to one or more groups in which the peers cooperate and function similarly and under a unified set of capabilities and restrictions. The JXTA platform concerns itself with slightly higher-level P2P-specific needs: mechanisms for establishing, joining, and leaving peer groups, inter-peer communication pipes, monitoring (access control, metering, etc.) and security provisions. All of the above are performed by publishing and exchanging XML advertisements and messages between peers. The JXTA Shell is a prototype application that illustrates the use of JXTA technology.
The JXTA Shell permits interactive access to the JXTA platform's building blocks through a simple, text-based interface (available on the Solaris Operating Environment, Linux, or Microsoft Windows).

III.5.2.2 .NET

The Microsoft .NET Framework SDK is a common framework of classes designed to provide a rich platform for building applications. With a variety of application models available on the .NET Framework, the user is able to design any kind of P2P application satisfying his requirements. More specifically, there are four powerful application models to choose from:
- Web services: Web services are mainly used as a means to handle registration, discovery and content lookup among peers. This is achieved by writing a class that listens for incoming requests and sends back useful information, such as the files shared by a peer.
- Windows Forms: Windows Forms provide the appropriate tools to quickly and easily develop Windows-based graphical user interface applications. This class can be used mainly to enrich the P2P application with a user-friendly environment for logging in, requesting and sharing content.



- Web forms: It is often desirable to populate a P2P application with information that the user sees when he logs in to the system. This information can be some useful data about the system or advertisements about using the service, and can be downloaded to the user's platform as HTML code. Web forms are used to implement this.
- Service Process: As with Web services, a Service Process can be used to implement a long-lived discovery server, but one specialized to support non-HTTP protocols. In cases where the discovery mechanism does not use the HTTP protocol, a service process listens for some other protocol.

III.5.2.3 NextPage

The NextPage Application Services Interface [NextPage] provides a set of standard interfaces (XML, COM and Java) that allow companies to easily deliver the power of the NXT 3 platform through their own custom applications or integrations. Thus, companies are given the freedom to access and leverage core NXT 3 services (such as search, navigation, syndication, content integration, security/access control, and more) in a way that best meets their needs. For instance, they can provide access to the Content Network and the robust NextPage Distributed Search Service via any leading corporate portal.

III.5.2.4 Groove Networks

Groove software and services (www.groove.net) are more than collaborative tools. The Groove platform includes a complete set of underlying system-level services that are exposed to, and inherited by, any Groove application. These include the Groove platform's open API, multiple-language support and integration with back-end systems (such as ERP, CRM and knowledge management systems), providing a broad variety of collaborative solutions.

III.5.2.5 Magi

The network, once dominated by large, resource-rich processors, is now populated by a variety of smaller devices ranging from laptops to personal digital assistants to cell phones to embedded controllers [Bolcer2000].
The central concept of the Magi infrastructure is to provide a common networking platform for all these aforementioned devices and to exploit their capabilities, while taking into consideration the great diversity of this population and supporting the powers of peer computing: placement, security, sharing, governance, access, control and stewardship. The Magi peer-computing infrastructure is therefore based upon four architectural design principles:
- Build atop the existing Web infrastructure by relying on open, broadly adopted Internet protocols. The Web's architecture is embodied in four key standards (HTTP, WebDAV, URIs, and MIME) and has achieved remarkable utility, scalability, extensibility and performance. Therefore, its use is suggested as a foundation for peer computing.
- Exploit an asynchronous, event-based, component architecture to build the core peer computing infrastructure. Event-based architectures present benefits for wide-area coordination, timely notification and service extension. On the other hand, component-based architectures offer benefits for customization, reuse, adaptation and extension. Their combination provides flexibility to developers in adapting the powers of Magi to their own specialized needs.
- Provide a platform for others that is easily adapted for specialized applications. This is achieved by exposing interfaces, using open standards and making many components open source.



- Promote the independence of peers and let application designers dictate the placement of resources. The philosophy of Magi is that application designers, and not the infrastructure, should decide whether a resource is duplicated among some select set of other peers, or is replicated throughout the network.

The systematic application of these principles has led to a flexible and robust peer-to-peer computing infrastructure that empowers its users and provides an underpinning for building peer computing applications.

III.5.2.6 Tools for P2P Services Development

In the previous sections we went through platforms which can be used for the development of p2p systems. In this section we look at tools that assist the development of p2p applications and systems in all phases of the software development lifecycle, ranging from requirements elicitation to system development. Such tools were developed in the P2P_Architect project [P2P ARCHITECT, 2002], which aimed at supporting architecture-based development of p2p applications and at ensuring at the architecture level that dependability requirements are met:
- DISCOS tool: The DISCOS tool is used to support system analysts and designers in the requirements elicitation and P2P system architecture specification phases. This tool provides a mechanism for the gathering and refinement of system requirements. Based on the functional and non-functional requirements, the DISCOS tool suggests the most appropriate p2p architectures from a list of existing reference and logical network architectures respectively.
- Design tool: The Design tool is used to support the configuration of the application architecture of a system. This is achieved by using specialized modelling primitives and notations which support the description of dependable p2p systems and architectures. These modelling primitives and notations are provided in the form of a UML profile which can also be incorporated in other UML design tools, e.g. Rational Rose.
- Workflow tool: The P2P_ARCHITECT workflow module guides the implementation of the overall P2P_ARCHITECT process according to the DISCOS methodology, an iterative development process focused on architecture-driven development of p2p systems.

III.5.3 Tools and Platforms for Grid Services Development

The emergence of the Grid results from the demands for sharing supercomputers, using spare compute cycles and passing vast amounts of experimental data. A Grid Service is a potentially transient Web Service with specific interfaces and behaviors for such purposes as lifetime management, discovery of characteristics, notification, and so forth. Grid services provide for the controlled management of the distributed and often long-lived state that is commonly required in sophisticated distributed applications [TCFFGK 2002]. In this section we discuss the Globus Toolkit [FK 1999], a de facto standard for major protocols and services in grid computing. In addition, we briefly describe OGSI.NET and OGSI::Lite. Next we describe two Python-based tools, pyGridWare and pyGlobus. Finally, UNICORE, a tool for mastering widely differing access and authentication procedures in the Grid community, is briefly described.

III.5.3.1 Globus Toolkit

The open source Globus Toolkit is a fundamental enabling technology for the Grid, letting people share computing power, databases, and other tools securely online across corporate, institutional,


and geographic boundaries without sacrificing local autonomy (www-unix.globus.org). The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability [FKNT 2002]. It is packaged as a set of components that can be used either independently or together to develop applications. Every organization has unique modes of operation, and collaboration between multiple organizations is hindered by incompatibility of resources such as data archives, computers, and networks. The Globus Toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces and protocols allow users to access remote resources as if they were located within their own machine room, while simultaneously preserving local control over who can use resources and when. The stable release of Globus Toolkit 4.0 (GT4) is planned by the Globus Alliance for January 31st, 2005 (www-unix.globus.org).

III.5.3.2 OGSI.NET and WSRF.NET

OGSI.NET, developed at the University of Virginia, is an implementation of the Open Grid Service Infrastructure (OGSI) specification on Microsoft's .NET platform. It provides a container framework on which to do OGSI-compliant grid computing in the .NET/Windows world. However, the OGSI.NET project is committed to interoperability with other OGSI-compliant frameworks (such as the Globus Toolkit 3) which run primarily on Unix systems, and so represents a bridge between grid computing solutions on the two platforms. OGSI.NET provides tools and support for an attribute-based development model in which service logic is transformed into a grid service by annotating it with meta-data. OGSI.NET also includes class libraries that perform common functions needed by both services and clients [OGSI.net]. WSRF.NET is a platform for grid computing on .NET and a bridge to grid computing solutions on Unix machines.
It is based on both the Web Service Resource Framework (WSRF) and Microsoft .NET technologies [WSRF.NET Project]. WSRF.NET is a set of software libraries, tools and applications which implement the WSRF specifications on top of .NET. WSRF.NET allows service authors to easily build WSRF-compliant web services by adding meta-data to their ASP.NET web service logic.

III.5.3.3 OGSI::Lite and WSRF::Lite

OGSI::Lite (formerly ILCT) has been developed at the University of Manchester (www.sve.man.ac.uk/Research/AtoZ/ILCT). It provides a container for running Perl Grid Services in, and various modules that can be used to build Perl Grid Services. Grid Services are implemented either using processes (the state and functionality for the service are bound in a process) or using "sessions", where the session ID is used to identify some stored state (in a database, a file or a "frozen" Perl module) and the functionality and state are loaded on demand. The second implementation allows the container to scale to a large number of Grid Services and provides tolerance for container or host failure; the cost is service performance and ease of development for the service developers. Writing a Grid Service is simply a matter of creating a Perl module, making sure the module inherits from a base module that provides the standard OGSI Grid Service functionality, and placing the module in the correct directory. One may also be required to provide a Factory module; however, most of the default functionality can be inherited from a base Factory module.



WSRF::Lite is the follow-on work from OGSI::Lite; it implements the Web Service Resource Framework, which has effectively replaced OGSI.

III.5.3.4 pyGridWare

pyGridWare, developed by the Lawrence Berkeley National Laboratory, is a Python client being developed to be compatible with the Globus Toolkit 3 Java OGSI server. The goals of pyGridWare are to make Globus Toolkit 3 accessible through a Python interface, to develop a standalone Python OGSI server and to develop a full Python implementation of an OGSI-compliant server [pyGridWare]. pyGridWare will focus first on developing automated client-side tooling to interact with the Globus Toolkit 3 implementation of OGSI.

III.5.3.5 pyGlobus

pyGlobus [pyGlobus], developed by the Lawrence Berkeley National Laboratory, is an object-oriented Python interface to the Globus Toolkit. The project also provides a reliable file transfer web service and client, Python Reliable File Transfer (pyRFT), built with pyGlobus.

III.5.3.6 Unicore

UNICORE (UNiform Interface to COmputing REsources) (www.unicore.org) provides a science and engineering Grid combining the resources of supercomputer centers and making them available through the Internet. Strong authentication is performed in a consistent and transparent manner, and the differences between platforms are hidden from the user, thus creating a seamless HPC portal for accessing supercomputers, compiling and running applications, and transferring input/output data [http://www.opengroup.org/tech/grid/whos-who.htm].
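The attribute-based development model described for OGSI.NET and WSRF.NET, in which plain service logic becomes a grid service by annotating it with meta-data, has a close analogue in Java annotations. The sketch below illustrates the idea only; the annotation, class and method names are invented for illustration and do not correspond to the actual OGSI.NET or WSRF.NET APIs.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Hypothetical annotation standing in for OGSI.NET/WSRF.NET attributes:
// plain service logic is marked up with metadata rather than rewritten.
@Retention(RetentionPolicy.RUNTIME)
@interface GridOperation {
    String name();
}

// Ordinary service logic; only the annotation marks it as a grid operation.
class JobService {
    @GridOperation(name = "submitJob")
    public String submit(String script) {
        return "job-id-for:" + script;
    }
}

// Sketch of the container's role: scan the class for annotated methods and
// expose each one as a service operation (here we just report the name).
class ContainerSketch {
    static String firstExposedOperation(Class<?> serviceClass) {
        for (Method m : serviceClass.getDeclaredMethods()) {
            GridOperation op = m.getAnnotation(GridOperation.class);
            if (op != null) {
                return op.name();
            }
        }
        return null;
    }
}
```

A real container would generate service descriptions and wire SOAP dispatch from such metadata; here reflection merely recovers the declared operation name.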



III.6 Interoperability Issues and Challenges

Interoperability is a very important issue in the e-services area, with many aspects and challenges. Different service-oriented systems, built using different platforms, operating systems and languages, and installed on different kinds of devices, need to interoperate in order to provide more flexible and efficient applications. In this chapter we analyze interoperability issues and challenges for each kind of e-service, and we examine the possibilities and existing efforts for the integration of web services, p2p and grid services. A study regarding interoperability for networked enterprise applications and software, as well as an analysis of challenges for B2B standardization, are presented in the appendix of this document.

III.6.1 Web Services Interoperability

Web services provide a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks. The Web Services Architecture (WSA) document [WSA 2004] is intended to provide a common definition of a Web service, and define its place within a larger Web services framework to guide the community. The WSA provides a conceptual model and a context for understanding Web services and the relationships between the components of this model. The Web services architecture is an interoperability architecture: it identifies those global elements of the global Web services network that are required in order to ensure interoperability between Web services. The architecture does not attempt to specify how Web services are implemented, and imposes no restriction on how Web services might be combined. It describes both the minimal characteristics that are common to all Web services, and a number of characteristics that are needed by many, but not all, Web services.

III.6.1.1 Interoperability Requirements Concerning the WSA

The following interoperability requirements concerning the WSA are identified:

1. Conformance to the WSA
The presence of a concept in the WSA is a strong hint that, in any realization of the architecture, there should be a corresponding feature in the implementation. Furthermore, if a relationship is identified here, then there should be corresponding relationships in any realized architecture. The consequence of non-conformance is likely to be reduced interoperability: the absence of such a concrete feature may not prevent interoperability, but it is likely to make such interoperability more difficult.

2. Web Services Description Requirements
Current technologies used for describing Web services are probably not yet sufficient to meet interoperability requirements on a global scale.
We see the following areas where increased and richer meta-data would further enhance interoperability:



- It should be possible to identify the real-world entities referenced by elements of messages.
  i. Example: When using a credit card to arrange for the purchase of goods or services, the element of the message that contains the credit card information is fundamentally a reference to a real-world entity: the account of the card holder. The appropriate technology for this is standardized ontology languages, such as OWL [OWL].
- It should be possible to identify the expected effects of any actions undertaken by Web service requester and provider agents.
  i. Example: Consider two Web services: one for withdrawing money from an account and one for depositing money (more accurately, transferring from one account to another, or vice versa). The datatypes of the messages associated with two such services may be identical, but with dramatically different effects: instead of being paid for goods and services, the risk is that one's account is drained. We expect that a richer model of services, together with technologies for identifying the effects of actions, is required. Such a model is likely to incorporate concepts such as contracts (both legally binding and technical contracts) as well as ontologies of action.
- In some cases, a Web service program may "understand" what a particular message means in terms of the expected results of the message, but, unless there is also an understanding of the relationship between the requester entity and the provider entity, the provider agent may not be able to accurately determine whether the requested actions are warranted.
  i. Example: A provider may receive a request to transfer money from one account to another. The request may be valid in the sense that the datatypes of the message are correct, and that the semantic markers associated with the message lead the provider agent to correctly interpret the message as a transfer request.
However, the transaction still may not be valid, or fully comprehensible, unless the provider agent can properly identify the relationship of the requester to the requested action.

We expect that a model that formalizes concepts such as institutions, roles (in business terms), "regulations" and regulation formation will be required. With such a model we should be able to capture not only simple notions of authority, but more subtle distinctions such as the authority to delegate an action, authority by virtue of such delegation, authority to authorize and so on. Requirements concerning web services security At this time, there are no broadly-adopted specifications for Web services security. As a result developers can either build up services that do not use these capabilities or can develop ad-hoc solutions that may lead to interoperability problems.

III.6.1.2 Interoperability Requirements Concerning the Implementation of Web Services

Page 119 of 366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

According to [Eric 2004], some interoperability requirements concerning the implementation of web services are:

1. Use of implementation tools and standards that have proven interoperability

Web services interoperability is, in practical terms, the ability to decouple an application into functional parts, develop those parts independently using any platform and any language, and have it all work together seamlessly. The essence of Web services is interoperable machine-to-machine communication. Figure 27 below provides one illustration of the layered and interrelated technologies involved in the web service architecture. Interoperability needs to happen at all levels of the Web services protocol stack for a Web service to be certified as "interoperable". This means using implementations of standards that have proven interoperability, and also keeping up with the advancement of standards.

Figure 27 Web Services Architecture Stack

Web service toolkits often cause interoperability problems. For example, independent toolkit providers interpreted the WSDL 1.1 specification loosely (due to the lack of a W3C-sponsored reference implementation for the behavior of a Web service), and language-specific idiosyncrasies began to impair interoperability between the various toolkits.

2. Due diligence of the development team

Interoperability, however, does not rest solely on the interoperability of the tools and implementations used. It also depends on the due diligence of the development team while creating a Web service. Many toolkits have become available to help automate the creation and consumption of Web services, but not all the toolkits behave in the same way, which can lead to a wide range of interoperability problems. The entire development team must be aware of this new layer of technology, know its strengths and weaknesses, and ensure that the service they are developing is compatible with other interoperable toolkits.
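As a concrete illustration of the wire format that all of these toolkits must agree on, the sketch below builds a minimal SOAP 1.1 request envelope using only the Python standard library. The service namespace and the GetQuote operation are hypothetical; a real toolkit would derive them from the service's WSDL.

```python
# Sketch: constructing a minimal SOAP 1.1 request envelope with the Python
# standard library. The target namespace and operation name are hypothetical.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"  # hypothetical target namespace

def build_envelope(symbol):
    # soap:Envelope wrapping a soap:Body that carries the operation element
    envelope = ET.Element("{%s}Envelope" % SOAP_ENV)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_ENV)
    operation = ET.SubElement(body, "{%s}GetQuote" % SVC_NS)
    ET.SubElement(operation, "{%s}symbol" % SVC_NS).text = symbol
    return ET.tostring(envelope, encoding="unicode")

print(build_envelope("ACME"))
```

Two toolkits interoperate only if both serialize and accept this same envelope structure; small deviations (element qualification, encoding style) are exactly the idiosyncrasies that impaired interoperability in practice.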


3. Interoperability Testing

Whatever the due diligence of the team, and whatever tools are used, before any Web service is placed into real production use, it needs to be tested. Functional testing alone is not enough to conclude that a Web service is ready for production deployment; Web services also require interoperability testing. A fully interoperable Web service is synonymous with quality software.

Testing a Web service for interoperability is only partly analogous to HTML web page testing and browser compatibility. In those tests fairly few clients existed (Internet Explorer, Netscape), and testing would simply entail loading a few web pages in various browsers. Web services have a much greater number of clients, due in part to the large number of programming languages with Web services support. Testing all clients would be a big challenge, and even then, when test results show interoperability problems, whose fault is it: your Web service's or the client software's?

Interoperability testing will need to become standard policy as part of ensuring the quality of a Web service. The developer will need to have interoperability concerns in mind from the beginning of the development cycle, and the tester will be continually checking the interoperability of the service. Interoperability has become a major component of the quality of a Web service.

III.6.1.3 The Web Services Interoperability Organization

The Web Services Interoperability Organization [WS-I] was formed in early 2002 to promote consistent interpretation of the Web service specifications by providing specific guidelines and recommended practices for using and developing Web services. It has considerable support from within the industry and is staffed by representatives of many of the big players.

In the last few years the core work of the WS-I has been to provide a document called the Basic Profile (BP). This document focuses on the core Web services specifications, such as WSDL and SOAP, and addresses known interoperability issues. More specifically, the current BP is predominantly a restriction of the original WSDL 1.1 specification to the most interoperable practices. Another major constraint imposed by the WS-I in the BP is to disallow any Web services protocol apart from SOAP via HTTP POST. SOAP sent over HTTP is the most widely supported and understood method for connecting to a Web service. However, the original WSDL 1.1 specification did not indicate how the SOAP semantics correlated with the HTTP semantics. The WS-I has provided the missing information and specified what the proper HTTP response should be for all the possible SOAP events. The WS-I will continue to identify new interoperability problems as they arise, and will publish its recommended solutions so that everyone can adjust their service behavior appropriately.

Because it is possible to test for specific interoperability problems, the WS-I has put together a suite of test tools that check a Web service's message exchanges and WSDL against the assertions in the BP. The test tools are passive, requiring the tester to invoke the service manually, capture the SOAP messages off the wire, and log them in the WS-I's log format. Using the tools effectively is not an easy task, and the output of their analysis is a large, flat report. The report serves well as an indicator of whether or not the service failed any of the assertions; however, using the report to determine exactly what went wrong and how to begin fixing the problem can be difficult and tedious.
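The kind of assertion such test tools check can be illustrated with a small sketch: given a WSDL document, verify that every SOAP binding declares the standard SOAP-over-HTTP transport, in line with the BP restriction to SOAP via HTTP. The embedded WSDL fragment is a hypothetical example, and this check is only in the spirit of the WS-I tools, not a reimplementation of them.

```python
# Sketch of a WS-I Basic Profile style check: every soap:binding in a WSDL
# must use the standard SOAP-over-HTTP transport URI. The WSDL fragment is
# a hypothetical example service.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
SOAP_NS = "http://schemas.xmlsoap.org/wsdl/soap/"
HTTP_TRANSPORT = "http://schemas.xmlsoap.org/soap/http"

WSDL_DOC = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <binding name="QuoteBinding" type="tns:QuotePortType">
    <soap:binding style="document"
                  transport="http://schemas.xmlsoap.org/soap/http"/>
  </binding>
</definitions>"""

def check_soap_http_transport(wsdl_text):
    """Return the names of SOAP bindings whose transport is not SOAP/HTTP."""
    root = ET.fromstring(wsdl_text)
    violations = []
    for binding in root.iter("{%s}binding" % WSDL_NS):
        for soap_binding in binding.iter("{%s}binding" % SOAP_NS):
            if soap_binding.get("transport") != HTTP_TRANSPORT:
                violations.append(binding.get("name"))
    return violations

print(check_soap_http_transport(WSDL_DOC))  # an empty list: no violations
```

A real BP checker evaluates many such assertions over both the WSDL and the captured message log; the principle, however, is the same per-assertion pass/fail test shown here.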


III.6.2 Interoperability in P2P Systems

Member nodes of a P2P network have a variety of operating systems, networking technologies and other platforms in business applications. Thus, advanced interoperability techniques are necessary. Interoperability is also an important requirement for the aggregation of diverse resources, such as computing power or storage space. Resource aggregation is very important in p2p systems such as distributed computing and content sharing systems.

Most peers in current p2p systems interoperate by relying on standard communication stacks (IP) and on Web standards (SOAP, XML, UDDI, WSDL, and WebDAV). However, most P2P infrastructures implement proprietary interfaces and communication standards. Figure 28 provides an illustration of the interoperability support provided by various p2p systems [MKLN 2003]. XML-based interoperability technologies hold a lot of potential for P2P computing.

P2P System          Interoperability Support
Avaki               interoperates with Sun Grid
Groove              IP-based
Magi                JXTA, Web-DAV
Freenet             Low
.NET/My Services    SOAP, XML, UDDI, WSDL

Figure 28 Interoperability support in p2p systems

Only a few P2P systems are able to interoperate, such as Avaki with Sun's Grid, and Magi with JXTA. Some of the requirements for interoperability include:

- How do systems determine that they can interoperate?
- How do systems communicate, e.g., what protocol should be used, such as sockets, messages, or HTTP?
- How do systems exchange requests and data, and execute tasks at the higher level, e.g., do they exchange files or search for data?
- How do systems determine if they are compatible at the higher protocol levels, e.g., can one system rely on another to properly search for a piece of information?
- How do systems advertise and maintain the same level of security, QoS, and reliability?

In the past, there were different ways to approach interoperability, such as standards (e.g., IEEE standards for Ethernet, token ring, and wireless); common specifications (e.g., Object Management Group [OMG 2001]); common source code (e.g., OSF DCE [Rosenberry 1992]); open source (e.g., Linux); and de facto standards (e.g., Windows or Java). In the P2P space, some efforts have


been made towards improved interoperability, even though it is still not generally supported. The P2P Working Group [P2PWG] is an attempt to gather the community of P2P developers together and establish common ground by writing reports and white papers that would enable common understanding among P2P developers. The P2P Working Group gathers developers from both ad-hoc communication systems and grid systems. The Global Grid Forum [GGF] is a similar effort in the grid computing space. Both efforts represent an approach similar to OMG's, defining specifications and possibly reference implementations. JXTA [JXTA] approaches interoperability as an open source effort, by attempting to impose a de facto standard. A number of developers are invited to contribute to the common source tree with different pieces of functionality. Only a minimal underlying architecture is supported as a base, enabling other systems to contribute parts that may be compatible with their own implementations. A number of existing P2P systems have already been ported to the JXTA base.

III.6.3 Interoperability in Grid Systems

III.6.3.1 The Grid Interoperability Problem

According to the Grid Protocol Architecture Working Group [GPA WG], which is steered by the Global Grid Forum [GGF], the grid interoperability problem can be defined as identifying a minimal set of Grid services [SBLW 2002] via which different grid systems interoperate (although these may be described in different languages in each Grid). On the one hand, if we have identified the minimal set of Grid services, then any well-composed resource request expressed in terms of this minimal set will be honoured by other Grids, even if some translation may need to be made between different resource description mechanisms. On the other hand, if we are seeking to identify such a minimal set, then looking at practical issues of interoperability can help, since the minimal set can be defined as the intersection of all interoperability sets between objects that are considered to be Grids.

III.6.3.2 Other Interoperability Issues

The following interoperability issues are identified:

1. Interoperability of Grid Infrastructures

There has been substantial progress in developing Grid technologies in recent years. At universities and research centers world-wide, scientists work on the evolution of Grid computing. Even if the approaches differ, in many cases one of the principal goals of all these projects is the same: to give users universal access to distributed resources. Different projects focus on different aspects. Integration between different grid infrastructures could lead to interoperability between different grid systems [WR 2002]. Such work is done in the Grid Interoperability Project (GRIP) [GRIP 2002], which aims, among other things, at integrating UNICORE and Globus. Both Globus and UNICORE provide a Grid infrastructure which gives users access to distributed resources. Globus can be characterized as a toolkit that allows the development of Grid applications using the rich set of Globus services. UNICORE represents a vertically integrated solution focusing on uniform access to distributed computing resources.


2. Common description of grid resources

There is, as yet, no common standard for describing Grid resources. Different Grid middleware systems have had to create ad hoc methods of resource description, and it is not yet known how well these can interoperate.
Work in the Grid Interoperability Project (GRIP) [BGG 2003] investigates the possibility of matching the resource descriptions provided by different schemas (descriptions provided by the GLUE schema and implemented in MDS-2 and resource descriptions provided by the Abstract Job Object framework utilised by UNICORE and stored in the UNICORE Incarnation Database). From their analysis they propose methods of working towards a uniform framework for resource description across different Grid middleware systems.
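At a very simplified level, this kind of cross-schema matching amounts to translating one vocabulary of resource attributes into another and reporting what cannot be mapped. The sketch below illustrates the idea; all attribute names are hypothetical simplifications, not actual GLUE or UNICORE vocabulary.

```python
# Illustrative sketch of translating a resource description between two
# middleware vocabularies. Attribute names are hypothetical simplifications;
# real GLUE and UNICORE descriptions are far richer.

# A GLUE-style description, as might be published by an information service
glue_resource = {
    "HostName": "node01.example.org",
    "OperatingSystemName": "Linux",
    "MainMemoryRAMSize": 2048,   # MB
    "SMPSize": 4,                # processors per host
}

# Mapping from GLUE-style keys to UNICORE-style keys (hypothetical)
GLUE_TO_UNICORE = {
    "HostName": "MachineName",
    "OperatingSystemName": "OperatingSystem",
    "MainMemoryRAMSize": "MemoryMB",
    "SMPSize": "Processors",
}

def translate(resource, mapping):
    """Translate a resource description; report keys with no counterpart."""
    translated, unmapped = {}, []
    for key, value in resource.items():
        if key in mapping:
            translated[mapping[key]] = value
        else:
            unmapped.append(key)
    return translated, unmapped

unicore_view, missing = translate(glue_resource, GLUE_TO_UNICORE)
print(unicore_view, missing)
```

The hard part of the GRIP work is, of course, establishing that such mappings preserve meaning (units, cardinalities, implicit defaults), which is precisely why a uniform framework for resource description is being sought.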

III.6.4 Convergence of Web Services with P2P

There are quite a few common features of p2p and Web Services technologies, as they both focus on publishing and discovery across networks. Since a p2p network is based on a decentralised model and its primary focus is on supplying processing power, content, or applications to peers in a distributed manner, it is less focused on messaging formats and communication protocols. Web Services, on the other hand, are based on a centralised model and their primary focus is on standardising messaging formats and communication protocols. The convergence of these two systems would enable companies to use the processing-power benefits of p2p networks through the standard formats for discovery and content exchange over the standard communication protocols of the Web Services world [SamSad 2002].

III.6.4.1 Examples of P2P and Web Services Working in Conjunction

1. Use of web services in p2p applications. Web Services provide a very elegant way to handle registration, discovery, and content lookup for p2p applications. As Web Services mature, their security standards could also ensure the integrity of data and services accessed by p2p software. Both user- and application-centered Web Services could play a role in p2p systems as the clients or nodes of a p2p system, ranging from a user, desktop, laptop, PDA, or Pocket PC to a server.

2. Use of web services for the communication of different p2p systems. The communication between different peer systems could be based on open standards, like web services. This would allow companies using p2p technology to have different architectures and platforms for their p2p systems and to integrate their processes across corporate boundaries [SamSad 2002].

3. Use of UDDI for p2p applications. Many design problems associated with directory services, communication protocols, and message formats are being addressed by p2p applications. UDDI is expected to reduce the complexity and amount of effort required to create p2p applications.

4. Decentralized UDDI based on p2p for web services discovery. Using p2p, the Web Services practice of using a single central UDDI registry, which contains the service descriptions of Web Services, can be converted into a decentralised model. Since a p2p network is completely decentralised, the Web Services descriptions (the content of the UDDI registry) can be described and indexed locally to a given peer.


The current UDDI attempts to alleviate the disadvantages of the centralized approach by replicating the entire information and putting it on different sites (operators). Three major operators, namely IBM, Microsoft, and ARIBA, are providing a public UDDI service. There are also other private UDDI operators that help to locate web services for experimental or private networks on the internet. Replication, however, may only temporarily improve performance, if the number of UDDI users is limited. Furthermore, the more replication sites there are, the less consistent the replicated data will be.

There are several approaches to a decentralized UDDI implementation using p2p technology. One example of the implementation of a decentralized UDDI using a p2p protocol is a framework called PeTerPan [LLT 2002], which aids web services running on the grid. The current implementation is based on the Gnutella protocol; decentralized searching and computing paradigms exist in this system as the norm. Another approach to a decentralized UDDI implementation using a p2p architecture is found in [TSN 2003], which suggests moving from a central design to a (de facto already existing) distributed approach by connecting private registries with peer-to-peer technology and creating a virtual global registry from all connected local registries. Thus, companies as well as universities can build their own Web Service registries which are maintained by themselves; being a peer in a p2p network makes it easy to search all local registries. A third example is an ontology-based peer-to-peer topology suitable for service discovery, which is described in [SSDN 2002].
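The virtual-global-registry idea above can be sketched as a keyword query flooded across connected local registries with a bounded hop count. All class, peer and service names below are illustrative, and the TTL-limited flooding is in the spirit of Gnutella-style search rather than a description of any of the cited systems.

```python
# Sketch: each peer holds a local service registry; a query is flooded to
# neighbours with a TTL, aggregating matches into a "virtual global registry"
# view. Names are illustrative only.

class RegistryPeer:
    def __init__(self, name, services):
        self.name = name
        self.services = services      # local registry: {service: provider}
        self.neighbours = []

    def search(self, keyword, ttl=2, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen:         # avoid revisiting peers (loop guard)
            return {}
        seen.add(self.name)
        hits = {s: p for s, p in self.services.items() if keyword in s}
        if ttl > 0:                   # flood the query to neighbours
            for peer in self.neighbours:
                hits.update(peer.search(keyword, ttl - 1, seen))
        return hits

a = RegistryPeer("A", {"stock-quote": "acme.example"})
b = RegistryPeer("B", {"quote-history": "uni.example"})
c = RegistryPeer("C", {"weather": "met.example"})
a.neighbours, b.neighbours = [b], [c]
print(a.search("quote"))   # matches found on peers A and B
```

The bandwidth cost of this flooding is exactly the challenge discussed later for p2p/Web Services convergence, which is why ontology-based topologies such as [SSDN 2002] try to route queries only to relevant peers.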

5. Search engines. Search engines would probably be the most widely used applications based on p2p and Web Services technologies, such as Google [BM 2003].

III.6.4.2 Challenges of P2P and Web Services Convergence

A few challenges arise from the convergence of p2p and Web Services [SamSad 2002]:

- Network bandwidth. A critical factor in the success of p2p technology and its convergence with Web Services is the available network bandwidth. As the number of peers searching for content or communicating in the network increases, the resulting traffic can potentially block the whole network bandwidth. p2p applications eliminate central servers and create a loose, dynamic network of peers. Thus, in any content retrieval operation, such as a search for a specific item in a database, all the peers in the network are searched, using a lot of network bandwidth. As the network size increases and becomes more distributed, it may be affected by poor and slow connections.

- Security. The decentralised and distributed architecture of p2p is also one of its largest weaknesses. P2P applications are a major security threat to companies, as by definition they distribute data and computing resources over several peers.

- The architecture of p2p-based applications, with or without Web Services integrated, is extremely complex. It involves security issues and the use of distributed resources in an optimal way, making decentralised and independent systems work as one. Maintenance of such applications is much more difficult, since it is extremely difficult to identify, replicate, and fix application or network related problems.


III.6.4.3 Technologies and Vendors Enabling P2P and Web Services Convergence

Some examples of how large hardware and software vendors are planning to support p2p and Web Services platforms together include:

1. Intel, which is building p2p components and services on top of Microsoft's .NET platform. Microsoft's Visual Studio .NET and Intel's p2p services are intended to serve as infrastructure for vendors developing p2p services like instant messaging (IM), knowledge management, and collaboration.

2. The Project JXTA p2p platform, supported by Sun Microsystems, which is being changed at the core to make peers interoperate with Web Services using protocols like SOAP and WSDL [JXTAv2.0 2003].

3. The Microsoft .NET Framework, which provides a rich platform for building p2p applications [WebCast 2003].

The technologies enabling this convergence are some of the leading p2p protocols, such as Jabber [Jabber 2002] and JXTA [JXTA].

Jabber is an open-source, XML-based protocol for instant messaging and presence notification. It is platform neutral, and thus Jabber is interoperable with messaging across both wireless and browser-based messaging services. The real-time messages are exchanged as XML streams. Jabber's open XML protocol contains three top-level XML elements, which in turn contain data through attributes and namespaces [Jabber 2002].

The project JXTA is an open network platform for p2p computing. It provides a framework and a set of protocols that standardise the way in which peers discover and advertise network resources, and communicate and cooperate with each other to form secure peer groups. All the functions supported by JXTA are performed by publishing and exchanging XML messages among participating peers. JXTA is independent of programming languages. Heterogeneous digital devices such as PCs, servers, PDAs, and appliances with completely different software stacks can interoperate through the JXTA protocols. Moreover, JXTA is independent of transport protocols such as TCP/IP and HTTP and, like SOAP, can be implemented on top of them [JXTAv2.0 2003].

III.6.5 Towards a Synergy between P2P and Grids

III.6.5.1 Differences and possible integration

Both peer-to-peer (p2p) networks and grids are distributed computing models that enable decentralized collaboration by integrating computers into networks in which each can consume and offer services. In analyzing the two models, we discover that grids and p2p share many characteristics and goals and could converge towards a common model that could alleviate the complexities of each and fulfill the need for secure, scalable and decentralized collaboration. Although many aspects of today's grids are based on hierarchical services, this is an implementation detail that should be removed in the near future. As grids used for complex applications increase from tens to thousands of nodes, we should decentralize their functionalities to avoid bottlenecks. The p2p model could thus help to ensure grid scalability: designers could use the p2p philosophy and techniques to implement non-hierarchical, decentralized grid systems.


In order to find out why a synergy between p2p and grids could be helpful and how it can be achieved, we must consider several significant aspects and issues. In the following, the techniques that the p2p and grid models use to handle some of the main issues of distributed computing are discussed [TalTru 2003], in order to find a common foundation.
Issues of Distributed Computing: Grids and P2P compared [TalTru 2003]

Security
- Grids: Security is a central theme in grids. Several efforts are devoted to integrating mechanisms for authentication, authorization, integrity, and confidentiality in grid platforms. However, such mechanisms are designed mainly for closed communities.
- P2P: P2P systems originate in open communities. Security mechanisms in most systems do not address authentication and content validation, but rather offer protocols that assure anonymity and censorship resistance.
- Towards a common model: It should be interesting to analyze how to exploit the different approaches to create a security model for p2p grids.

Connectivity
- Grids: Grids generally include powerful machines that are statically connected through high-performance networks with high levels of availability. However, the number of accessible nodes is generally low because access to grid resources is bound to rigorous accounting mechanisms.
- P2P: P2P systems are composed mainly of common desktop computers that are connected intermittently to the network, remaining available for a limited time with reduced reliability. The number of nodes connected in a p2p network at a given time is much greater than in a grid.
- Towards a common model: The grid connectivity approach is still too stiff for new nodes and user access and accounting; it could benefit from the more flexible connectivity models used in p2p networks.

Access Services
- Grids: Access to remote resources is the primary goal of grids. Grid toolkits provide secure services for submitting batch jobs or executing interactive applications on remote machines. They also include mechanisms for efficiently sharing and moving data across nodes.
- P2P: P2P systems provide protocols for sharing and exchanging data among nodes.
- Towards a common model: P2P job-submission models and p2p job scheduling might be very attractive topics for research into applying the p2p approach to grid scheduling and job management.

Fault Tolerance
- Grids: Beyond simple checkpointing and restarting, reliability and fault tolerance are largely unexplored in grid models and tools. The Globus toolkit allows fault detection, for instance, but developers must implement fault tolerance at the application level.
- P2P: One of the primary design goals of a p2p system is to avoid a central point of failure. Although most (pure) p2p systems already do this, they are nevertheless faced with failures such as disconnections/unreachability, partitions, node failures, and non-availability of resources. P2p systems use decentralized p2p algorithms and techniques to address fault tolerance (e.g., replication, broadcasting).
- Towards a common model: For greater reliability, designers of fault-tolerance mechanisms and policies for grids should consider using decentralized p2p algorithms, which avoid centralized services that can represent critical failure points.

Resource Discovery and Presence Management
- Grids: Resource discovery is based mainly on centralized or hierarchical models, which do not deal with more dynamic, large-scale distributed environments. The number of queries in such environments quickly makes a client-server approach ineffective. Resource discovery includes, in part, the issue of presence management (discovery of the nodes that are currently available in a grid), because global mechanisms are not yet defined for it.
- P2P: Resource discovery in p2p systems is based on routing algorithms. On the other hand, the presence-management protocol is a key element in p2p systems: each node periodically notifies the network of its presence, discovering its neighbors at the same time.
- Towards a common model: Future grid systems could implement a p2p-style decentralized resource discovery model that can support grids as open communities.

III.6.5.2 Towards a common model

Despite the interest in p2p and grid networks, few noteworthy research efforts are currently devoted to finding commonalities and synergies between them. In a significant exception, integration efforts are made by the Communitygrids Lab [CGrids Lab], but much more remains to be done by members of both communities. A p2p approach is needed both to implement grid tools and services, and to design and develop grid applications that access and coordinate remote resources and services.

In [TalTru 2003] we find two suggestions for p2p and grid integration:


1. Redesigning Globus components using a super-peer network topology. They suggest that two core Globus Toolkit components, the monitoring and discovery service (MDS) and the replica management service, could be effectively redesigned using a p2p approach. If we view current grids as federations of smaller grids managed by diverse organizations, we can rethink the Globus MDS for a large-scale grid by adopting the super-peer network model (www-db.stanford.edu/~byang/pubs/superpeer.pdf). In this approach, each super peer operates as a server for a set of clients and as an equal among other super peers. This topology provides a useful balance between the efficiency of centralized search and the autonomy, load balancing, and robustness of distributed search. In a grid information service based on the super-peer model, each participating organization would configure one or more of its nodes to operate as super peers. Nodes within each organization would exchange monitoring and discovery messages with a reference super peer, and super peers from different organizations would exchange messages in a p2p fashion.

2. Aligning p2p, Grid and Web Services technologies based on OGSA. Their second example of how p2p and grid technologies can be integrated is based on the Open Grid Services Architecture (OGSA), which lets developers integrate services and resources across distributed, heterogeneous, dynamic environments and communities. The OGSA model adopts the Web Services Description Language (WSDL) to define the concept of a grid service using principles and technologies from both the grid and Web services communities. Web services and the OGSA both seek to enable interoperability between loosely coupled services, independent of implementation, location, or platform. OGSA provides an opportunity to integrate p2p and the Grid. The architecture defines standard mechanisms for creating, naming, and discovering persistent and transient grid-service instances. It will be an interesting challenge to determine how to use OGSA-oriented grid protocols to build p2p applications. By implementing service instances in a p2p manner within such a framework, developers can provide p2p service configuration and deployment on the grid infrastructure. A peer could thus invoke a grid service by exchanging a specified sequence of messages with a service instance, which might in turn invoke another grid service published by another peer through an associated grid service interface.

Developers and users could exploit the many contact points between p2p and grid networks by recognizing p2p's relevance to corporations and public organizations, rather than viewing it as just a home computing technology. They could also exploit p2p protocols and models to address grid-computing issues such as scalability, connectivity, and resource discovery. A synergy between p2p and grids could lead to new, highly distributed systems in which each computer contributes to solving a problem or implementing a system while also using services offered by other computers in the network. Enterprises, public institutions, and private companies could find it both useful and profitable to develop distributed applications on a world-wide Grid.

III.6.6 Integration of Web Services, P2P and Grid Computing

The confluence of Web services, peer-to-peer systems, and grid computing provides the foundation for a common model allowing applications to scale from proximity ad hoc networks to planetary-scale distributed systems. Such a common model is the vision of Intel for Internet Distributed Computing (IDC) [MRK2003], which could make the Internet an application-hosting platform.



The foundation of their proposed approach is to disaggregate and virtualize individual system resources as services that can be described, discovered, and dynamically configured at runtime to execute an application. They postulate that such a system can be built as a combination and extension of Web services, peer-to-peer computing, and grid computing standards and technologies. It thus follows the successful Internet model of adding minimal and relatively simple functional layers to meet new requirements while building atop already available technologies. Intel proposes a converged middleware, in which web services, grids, and p2p are the building blocks, together with a common functional stack [Bar 2002], as illustrated in Figure 29 and Figure 30 below:

Figure 29 The building blocks of the converged middleware

Figure 30 IDC Functional Stack

According to [Bar 2002], if we consider the intersection between grid, p2p and web services, there are some areas of divergence. Intel's vision for the future is a foundation for a truly interoperable distributed computing environment. Figure 31 and Figure 32 illustrate the current and expected intersection of grid, p2p and web services.



Figure 31 Some areas of divergence

Figure 32 Foundation for a truly interoperable distributed computing

There are some challenges regarding such an integration, which can be summarised as follows:

- Interoperability, for mobility and access to the environment
- Trust, for sharing resources in the cloud and on one's own devices
- Connectivity, for access, bandwidth, mobility, and intermittency
- Virtualization of all resources
- Device intelligence, for automated, proactive, context-based computing


- Managing the complexity of a dynamic, scaled-up environment

Another view of interoperability between grid, web and p2p services is provided by the SODIUM Project [SODIUM 2004]. SODIUM provides a platform and associated languages for the unified discovery and composition of grid, web and p2p services, as illustrated in Figure 33.

[Figure 33 shows the SODIUM composition suite (a VSCL editor, a VSCL-to-USCL translator and a USCL compiler) and the SODIUM runtime environment (a USQL engine and an execution engine), which interact with UDDI, LDAP and ebXML registries and with P2P networks to discover and invoke P2P, Web and Grid services.]
Figure 33 The SODIUM platform components and their interaction

Applications developed with the SODIUM platform thus enable interoperability between web, grid and p2p services, in the sense that these heterogeneous types of services are orchestrated, and thereby interoperate, through the application's logic and the SODIUM run-time environment.


III.7 Conclusions

Service Oriented Computing is beginning to revolutionize the way Information Technology is structured. It emerged as an evolution of component-based development, and among its goals is to support the loose coupling of system parts far better than existing technologies. Service Oriented Architecture (SOA) provides agility, flexibility and cost savings via reusability, interoperability and efficiency; these benefits enable organizations to respond to change better, faster and cheaper. SOA comprises three main roles: service provider, service broker and service requestor, which interact through the respective basic service operations: publish, find and bind.

Web services, grid services and p2p services all adhere to the SOA model and are collectively known as e-services. Various technologies supporting e-services have emerged in recent years. Web services use open Internet standards such as WSDL for service description, UDDI for service discovery and SOAP for service invocation. Grid systems adhere to a grid architecture where shared resources are described and accessed using open service-based interfaces. This architecture is known as OGSA (Open Grid Services Architecture) and was initially materialized by the OGSI (Open Grid Services Infrastructure). However, OGSI received criticism from the Web services community, which led to the so-called refactoring of the standards and the introduction of the Web Services Resource Framework (WSRF). P2P systems, on the other hand, are not based on standard protocols, but use proprietary protocols for resource location and communication.

There are many issues regarding e-services technologies that need to be explored further, such as security, standards, semantic description and routing, mobility support, and the interoperability and integration of p2p, grid and web services. This report focuses mainly on interoperability issues between web, grid and p2p services.
E-services promote interoperability by minimizing the requirements for shared understanding: a service description and a protocol of collaboration and negotiation are the only requirements for shared understanding between a service provider and a service user. Web services also enable interoperability of legacy applications: by allowing legacy applications to be exposed as services, seamless integration between heterogeneous systems is greatly facilitated. New services can be created, and dynamically published and discovered, without disrupting the existing environment.

Web services and grid services support interoperability through standardization; in the p2p computing area, standards are not yet supported. Nevertheless, several interoperability issues have been identified in the areas of p2p, grid and web services, which the respective communities are trying to solve by promoting standards, architectures and tools, especially for grid and web services, and interoperable platforms in the case of p2p systems. Each type of e-service has capabilities and specific characteristics that the other types could exploit, leading to more flexible and efficient service-oriented systems. Synergies between p2p, grid and web services will promote interoperability and need to be explored further.


Furthermore, integration of Web services, p2p, and grid computing provides the foundation for a common model. In such a model individual system resources could be disaggregated and virtualised as services that can be described, discovered, and dynamically configured at runtime to execute an application, allowing applications to scale from proximity ad hoc networks to planetary-scale distributed systems.


IV Component and Message Oriented Computing


IV.1 Introduction

Component-based and message-based computing are today the foundation of many distributed programming environments and platforms. Typical examples of component oriented computing are environments such as CORBA, MS DNA/DCOM and J2EE. Typical examples of message oriented computing are messaging platforms such as ebXML, older technologies like EDIFACT, and message oriented middleware like MQSeries or Microsoft Message Queue. These technologies were described in more detail in the previous IDEAS State of the Art report, and are thus described only briefly here; more details can be found in the IDEAS report.

IV.2 What is COC - Component Oriented Computing

Component-based architecture was the next evolution in application architecture after object oriented programming. Component-based software was, and is, a promising approach to making distributed systems and Internet applications fit the requirements of the new information-based work organization. It encompasses many disciplines and application domains, such as groupware, distributed object-oriented software development, middleware, multimedia, CSCW, and distributed simulation.

A component oriented architecture consists of software components: a software technology for encapsulating software functionality, often in the form of objects in some binary or textual form, adhering to an interface description language (IDL) so that the component may exist autonomously from other components in an application.

A common architectural pattern is the layered approach, typically with a data layer, a data access layer, a business layer and a presentation layer. The data access layer includes all code and logic that accesses the data; many systems package all SQL statements in a data access layer to isolate database schema changes. The business layer consists of a domain model and/or services which encapsulate the business rules of the application. The main evolution in component oriented computing was the provision of server-side components for the business layer; in a distributed system, the business layer was usually deployed to an application server. The first main contributions here were J2EE EJB and Microsoft COM+; OMG then followed up with CORBA Components, based on the EJB model, for multi-language environments. The presentation layer consists of all user-interface-related functionality, including web-based interfaces and rich Windows-based interfaces; other application types, such as console applications, can also be considered part of the presentation layer.
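The layered pattern described above can be sketched in Java. The class and method names (OrderRepository, OrderService) are illustrative assumptions only, not taken from any of the platforms discussed; a real system would place each layer in its own module or tier:

```java
import java.util.HashMap;
import java.util.Map;

// Data access layer: hides the storage details (e.g. SQL statements)
// from the rest of the application, isolating schema changes.
class OrderRepository {
    private final Map<Integer, Double> orders = new HashMap<>();
    void save(int orderId, double amount) { orders.put(orderId, amount); }
    Double findAmount(int orderId) { return orders.get(orderId); }
}

// Business layer: encapsulates the business rules and delegates storage
// to the data access layer. In a distributed system this layer would be
// deployed to an application server (e.g. as an EJB or COM+ component).
class OrderService {
    private final OrderRepository repository = new OrderRepository();
    void placeOrder(int orderId, double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        repository.save(orderId, amount);
    }
    double totalOf(int orderId) {
        Double amount = repository.findAmount(orderId);
        return amount == null ? 0.0 : amount;
    }
}
```

A presentation layer (web page, rich client or console) would then call only the business layer, never the repository directly.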
Component-based architectures allowed developers to build extremely scalable and distributed applications, but the underlying component technologies did not adhere to industry-wide standards and forced vendor lock-in.


IV.2.1 The OMG solution for interoperability and for components: CORBA

The Common Object Request Broker Architecture (CORBA) is an industry standard that defines a higher-level facility for distributed computing. CORBA is proposed by the Object Management Group (OMG), an industry consortium whose mission is to create a truly open object infrastructure. CORBA allows applications to communicate with one another without being aware of the hardware or software systems or the location of the application. A client can transparently invoke a method on a server object; the object can be on the same machine or on a remote machine on the network, reached through the middleware, the Object Request Broker (ORB). The ORB intercepts the call, finds an object that can implement the request, invokes its method passing the required parameters, and returns the results to the client. Thus, the ORB provides interoperability between applications on different machines in heterogeneous distributed environments and brings together multiple object systems.

The basis for interoperability comes from the Interface Definition Language (IDL), a technology-independent syntax for describing object encapsulations. IDL is declarative, i.e., it provides no implementation details. IDL-specified methods can be written in, and invoked from, any language that provides CORBA bindings, and programmers deal with CORBA objects using native language constructs. IDL provides operating-system- and programming-language-independent interfaces to all the services and components that reside on CORBA.

CORBA is thus a standards-based distributed object computing infrastructure for object-oriented applications with a multi-language nature. The main goal of CORBA is to provide a solution that addresses the problems of interoperability.
Using CORBA, two objects can interoperate even when they are on different systems (for example, one can be running on a Windows machine and the other on a UNIX server on the other side of the world). In the case of CORBA, the word architecture is not a buzzword: CORBA automates many common network programming tasks such as object registration, location, and activation; request demultiplexing; framing and error handling; parameter marshalling and demarshalling; and operation dispatching. CORBA's principal strength is that it is an object middleware.

The OMG introduced CORBA 1.1 in 1991. CORBA 2.0, adopted in December 1994, defines true interoperability by specifying how ORBs from different vendors can interoperate through the Internet Inter-ORB Protocol (IIOP). In some ways, SOAP is now rediscovering the interoperability problems which appeared in CORBA 1.1. CORBA 3.0 (the current version, released in 2002) simplifies the development of distributed object applications, supports distributed components, and includes features to facilitate CORBA's deployment in the enterprise. CORBA 3.0 capabilities fall into several categories: support for distributed components, quality of service features, new messaging support, and other technologies that enable complete Internet integration. Distributed components are a key point of this third version: CORBA 3.0 defines the CORBA Component Model (CCM), which specifies a component architecture and a container framework in which the component life cycle takes place. The CCM is a server-side component model for building and deploying CORBA applications. Components and their supporting objects (homes, interfaces, etc.) are all defined using the Interface Definition Language (IDL).


Like Sun's Enterprise JavaBeans (EJB), CORBA components are created and managed by homes; they run in containers and are hosted by application component servers. As established before, J2EE supports two CORBA-compliant technologies: JavaIDL and RMI-IIOP. Like components made with Microsoft's .Net Framework, CORBA components can be written in several programming languages and can be packaged for distribution.

Figure 34 General structure of a CORBA component (a component interface with facets, receptacles, event sources, event sinks, and attributes)

CORBA uses an Object Request Broker (ORB) to send requests from objects executing on one system to objects executing on another system. The ORB allows objects to interact in a heterogeneous, distributed environment, independent of the computer platforms on which the various objects reside and the languages used to implement them. For example, a C++ object running on one machine can communicate with an object on another machine implemented in COBOL or Java.

In general, CORBA-compliant applications comprise a client and a server. The client invokes operations on objects which are managed by the server, and the server receives invocations on the objects it manages and replies to these requests. A Java object, for example, can either use CORBA services available over the network (as a client), or it can publish services to other components in the application (as a server). The ORB manages the communications between client and server using the Internet Inter-ORB Protocol (IIOP), which is a protocol layer above TCP/IP. By default,


IIOP uses TCP, but it can use other protocols. The ORB abstracts object location, networking issues, request dispatching and delivery to target objects, and activation issues.

The objects in a distributed CORBA application, regardless of the language in which they are implemented, communicate with the ORB through an interface written in the CORBA Interface Definition Language (IDL). The CORBA IDL, included in the CORBA 2.0 specification, is designed to describe the interfaces of objects, including the operations that may be performed on the objects and the parameters of those operations. IDL is a declarative language, not an implementation language, with the same lexical rules as C++; new keywords are introduced to handle distributed computing concepts. IDL semantics coincide with the ANSI C++ standardization effort, and IDL has full support for C++ pre-processing. The implementation of a CORBA object can be done in any language for which an IDL mapping exists (Java, C, C++, COBOL, Python, Tcl, Lisp, Smalltalk, Ada95, etc.).

Using IDL, the behaviour of an object is thus captured in its interface, independently of the object's implementation. Clients need only know an object's interface in order to make requests; servers respond to requests made on those interfaces, and clients do not need to know the actual details of the server's implementation. To implement an interface, CORBA IDL is compiled into the source code language in which the client or server is implemented. On the client side, this code is called a stub; on the server side, it is called a skeleton. In order to request a service from the server, the client application calls the methods in the stub (moving down the protocol stack from the client to the stub to the ORB). Requests are then handled by the ORB, and enter the server via the skeleton (moving up the protocol stack, i.e. an upcall).
The server object then services the request and returns the result to the client.

    interface ChatServer {
        enum EventType { LIST_UPDATE, MESSAGE, LOGOUT, NOEVENT };
        union ChatEvent switch(EventType) {
            case LIST_UPDATE: sequence<string> clients;
            case MESSAGE:     string message;
            case LOGOUT:      boolean logout;
            case NOEVENT:     boolean noEvent;
        };
        void login(in string name);
        void logout(in string name);
        void tell(in string name, in string message);
        ChatEvent poll(in string name);
    };

Figure 35 An example of an IDL interface
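The division of labour between stub and skeleton can be illustrated with a plain-Java sketch. No real ORB, IDL compiler or network is involved here; the Chat interface and the class names are hypothetical, and the IIOP marshalling step is omitted entirely:

```java
// The contract, as would be generated from an IDL interface.
interface Chat {
    String tell(String name, String message);
}

// Server-side implementation of the CORBA object.
class ChatImpl implements Chat {
    public String tell(String name, String message) {
        return name + " says: " + message;
    }
}

// Skeleton: receives a request and dispatches it to the implementation
// (the upcall). A real skeleton would first unmarshal an IIOP request.
class ChatSkeleton {
    private final Chat target;
    ChatSkeleton(Chat target) { this.target = target; }
    String dispatch(String name, String message) {
        return target.tell(name, message);
    }
}

// Stub: implements the same interface on the client side and forwards
// each call to the skeleton. A real stub would marshal the parameters
// and hand the request to the ORB for transmission over IIOP.
class ChatStub implements Chat {
    private final ChatSkeleton skeleton;
    ChatStub(ChatSkeleton skeleton) { this.skeleton = skeleton; }
    public String tell(String name, String message) {
        return skeleton.dispatch(name, message);
    }
}
```

The client programs only against the Chat interface, so it cannot tell whether the object it is calling is local or remote; that is exactly the location transparency the ORB provides.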


The primary components in the OMG Reference Model architecture are the following:

1. Object Services. The OMG specifies a set of CORBA Object Services that define domain-independent interfaces used by many distributed object programs; they are therefore horizontally oriented services. For example, a service providing for the discovery of other available services (naming and trading) is almost always necessary regardless of the application domain. Some examples of Object Services are:
   1.1. The Naming Service, which allows clients to find objects based on names.
   1.2. The Trading Service, which allows clients to find objects based on their properties.
   1.3. The Event Service, which supports asynchronous message-based communication among objects. It also supports chaining of event channels, and a variety of producer/consumer roles.
   1.4. The Lifecycle Service, which defines conventions for creating, deleting, copying and moving objects.
   1.5. The Persistence Service, which provides a means for retaining and managing the persistent state of objects.
   1.6. The Transaction Service, which supports multiple transaction models, including mandatory "flat" and optional "nested" transactions.
   1.7. The Concurrency Service, which supports concurrent, coordinated access to objects from multiple clients.
   1.8. The Relationship Service, which supports the specification, creation and maintenance of relationships among objects.
   1.9. The Externalization Service, which defines protocols and conventions for externalizing and internalizing objects across processes and across ORBs.
2. Common Facilities. Like Object Service interfaces, these interfaces are horizontally oriented, but unlike Object Services they are oriented towards end-user applications. An example of such a facility is the Distributed Document Component Facility (DDCF), a compound document Common Facility based on OpenDoc.
DDCF allows for the presentation and interchange of objects based on a document model, for example facilitating the linking of a spreadsheet object into a report document.
3. Domain Interfaces. These interfaces fill roles similar to Object Services and Common Facilities, but are oriented towards specific application domains; they are vertically oriented interfaces, used in vertical industries such as financial markets, healthcare, and telecommunications. For example, one of the first OMG RFPs issued for Domain Interfaces is for Product Data Management (PDM) Enablers for the manufacturing domain. Currently, there are other OMG RFPs related to the telecommunications, medical, and financial domains.
4. Application Interfaces. These are interfaces developed specifically for a given application. Because they are application-specific, and because the OMG does not develop applications (only specifications), these interfaces are not standardized. However, if certain broadly useful services emerge out of a particular application domain, they might become candidates for future OMG standardization.
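As an illustration of what the Naming Service offers, the following sketch implements a toy flat registry in Java. The real CORBA Naming Service (CosNaming) uses hierarchical naming contexts and a standardized IDL interface; the class and method names below are illustrative assumptions only:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// A toy naming service: binds names to object references and resolves
// them on request, as a CORBA client would do to locate a server object.
// The real CosNaming service supports nested naming contexts; this flat
// map only illustrates the bind/resolve idea.
class NamingService {
    private final Map<String, Object> bindings = new HashMap<>();

    void bind(String name, Object reference) {
        bindings.put(name, reference);
    }

    Object resolve(String name) {
        Object ref = bindings.get(name);
        if (ref == null) throw new NoSuchElementException("no binding for " + name);
        return ref;
    }
}
```

A client would first resolve a well-known name and only then invoke operations on the returned reference, so servers can be relocated without changing client code.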


Figure 36 Common Object Request Broker Architecture

OMG defines CORBA as consisting of four groups of entities: the Object Request Broker (ORB), the CORBAservices, the CORBAfacilities and the CORBAdomains. The ORB is a specification for distributed object communication, and the remainder are functionalities. Together, these four groups provide the mechanisms required for objects to communicate with each other on distributed heterogeneous platforms, together with ready-to-use functionalities, providing different levels of abstraction and covering different domain areas (e.g., accounting, user interface). CORBA's Basic Object Adapter (BOA) is a standard specifying how an object is activated in a CORBA environment, and acts as the communication manager between the objects and the ORB. With version 3.0 of CORBA, the BOA is being replaced by the Portable Object Adapter (POA), which solves some problems present in the BOA, such as portability across ORB implementations. CORBA 2.0 introduced the Internet Inter-ORB Protocol (IIOP), allowing objects developed for one ORB to communicate with objects developed for another ORB over TCP/IP [31]. This was an important step in bringing interoperability to CORBA environments and towards open platforms. Figure 36 illustrates the primary components in the CORBA ORB architecture.

IV.2.2 Microsoft DNA / COM+ platform

Windows DNA was Microsoft's solution for the development, deployment, and management of multi-tier enterprise solutions before the introduction of Microsoft .Net. The cornerstone of Windows DNA was COM+, a language-independent technology used to build re-usable components. Windows DNA evolved from the middleware services provided in Windows NT, including clustering services, Web component services, and development and management tools. COM+ evolved from several Microsoft products: the Component Object Model (COM), Distributed COM (DCOM), Microsoft Transaction Server (MTS), as well as parts of Microsoft Message Queue (MSMQ). Today, COM+ includes several middleware services, including transaction management, resource management, and security management. In 2003 Microsoft made a major move of its platform over to the Microsoft .Net architecture. Figure 37 presents the Windows DNA object model.


[Figure 37 shows the Windows DNA object model in three tiers: a presentation tier, where HTML clients, ActiveX controls in a Web browser, standalone applications and CORBA clients (via a COM-CORBA bridge) reach a Web server (ISAPI, ASP) over HTTP through a firewall; a business tier, where an application server hosts COM+ components and existing MTS components, reached via DCOM, with data access through ADO, OLE DB and ODBC and a Shared Property Manager; and a data tier, where databases and existing, legacy and ERP systems are reached over proprietary protocols via the "Babylon" integration server.]
Figure 37 The Windows DNA object model

IV.2.3 J2EE Framework

Introduction

The J2EE Framework represents a single standard for implementing and deploying enterprise applications. It has been designed through an open process, engaging a range of enterprise computing vendors, to ensure that it meets the widest possible range of enterprise application requirements. J2EE is a standard with industry momentum: while Sun Microsystems invented the Java programming language and pioneered its use for enterprise services, the J2EE standard represents a collaboration between many partners. Sun's partners include OS and database management system providers, middleware and tool vendors, and vertical-market application and component developers. Working with these partners, Sun has defined a robust, flexible platform that can be implemented on the wide variety of existing enterprise systems currently available, and that supports the range of applications IT organizations need to keep their enterprises competitive.


An overview

The J2EE Framework is designed to provide server-side and client-side support for developing enterprise, multi-tier applications. Such applications are typically configured as a client tier providing the user interface, one or more middle-tier modules providing client services and business logic, and backend enterprise information systems providing data management. Figure 38 provides an overview of the J2EE framework.

The J2EE architecture defines a client tier, a middle tier (consisting of one or more sub-tiers), and a backend tier providing the services of existing information systems. The client tier supports a variety of client types, both outside and inside corporate firewalls. The middle tier supports client services through Web containers in the Web tier, and business logic component services through Enterprise JavaBeans (EJB) containers in the EJB tier. The enterprise information system (EIS) tier supports access to existing information systems by means of standard APIs.

Figure 38 The J2EE Framework

In addition to providing support for Enterprise JavaBeans, Java Servlets and JavaServer Pages components, the Java 2 Platform, Enterprise Edition specification defines a number of standard services for use by J2EE components:

Java Naming and Directory Interface API: Designed to standardize access to a variety of naming and directory services, the Java Naming and Directory Interface (JNDI) API provides a simple mechanism for J2EE components to look up other objects they require.

JDBC API: The JDBC API enables applications to manipulate existing data from relational databases and other data repositories.

JavaMail API: J2EE includes JavaMail to support applications such as e-commerce websites. The JavaMail API provides the ability to send order confirmations and other user feedback.

CORBA Compliance: J2EE supports two CORBA-compliant technologies: JavaIDL and RMI-IIOP. JavaIDL enables Java applications to interact with any CORBA-compliant enterprise


system. RMI-IIOP technology combines the programming ease of the Java Remote Method Invocation API (RMI) with CORBA's Internet Inter-ORB Protocol (IIOP) for easier integration of J2EE applications with legacy applications.

Java Transaction API: While J2EE provides transaction support automatically, the Java Transaction API (JTA) provides a way for J2EE components and clients to manage their own transactions, and for multiple components to participate in a single transaction.

XML Deployment Descriptors: J2EE defines a set of descriptors in the universal data language, XML. With its ability to support both standard and custom data types, XML makes it easier to implement customizable components and to develop custom tools.

Java Message Service: The Java Message Service (JMS) API defines a standard mechanism for components to send and receive messages asynchronously, for fault-tolerant interaction. JMS is optional for J2EE release 1.0.

More information can be found at: http://java.sun.com/j2ee/

IV.2.4 .Net Framework

Introduction

The .NET Framework is the programming model of Microsoft .NET-connected software and technologies for building, deploying, and running Web applications, smart client applications, and Extensible Markup Language (XML) Web services applications that expose their functionality programmatically over a network using standard protocols such as SOAP, XML, and HTTP. Microsoft announced the .NET initiative in mid-2000, and the .NET Framework was materialised in 2003. Microsoft has already implemented the first parts, and it is expected that its partners will adopt it as parts become practically available. Since its launch, Microsoft claims huge momentum among industry partners taking advantage of new opportunities by complementing the .NET Platform.
The vast industry movement around Visual Studio .NET and the .NET Framework is a sign that the industry is energized for a new platform that unleashes the powerful capabilities made possible through XML Web services.

An Overview

The .NET Framework consists of two main parts:

The Common Language Runtime (CLR). The common language runtime is the foundation; it manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. The runtime can be hosted by high-performance, server-side applications, such as Microsoft SQL Server and Internet Information Services (IIS).

The .Net Framework Class Library. This is a unified set of class libraries including Microsoft ASP.NET for Web applications and XML Web services, Microsoft Windows Forms for smart client applications, and Microsoft ADO.NET for loosely coupled data access.

The Common Language Runtime

The Common Language Runtime (CLR) is built on top of operating system services. It is a high-performance execution engine. Code that targets the runtime and whose execution is managed by


the runtime is referred to as managed code. Responsibility for tasks such as creating objects, making method calls, and so on is delegated to the common language runtime, which enables the runtime to provide additional services to the executing code. Despite its name, the CLR also has a role in a component's development-time experience. While the component is running, the runtime provides services, such as memory management (including garbage collection), process management, thread management, and security enforcement, and satisfies any dependencies that the component may have on other components. At development time, the runtime's role changes slightly: because it automates so much (for example, memory management), the runtime makes the developer's experience very simple. In particular, features such as lifetime management, strong type-naming, cross-language exception handling, delegate-based event management, dynamic binding, and reflection dramatically reduce the amount of code a developer must write in order to turn business logic into reusable components. Runtimes are nothing new for languages; virtually every programming language has one. The key features of this runtime include a common type system (enabling cross-language integration), self-describing components, simplified deployment and versioning, and integrated security services.

The .Net Framework Class Library

The classes of the .NET Framework provide a unified, object-oriented, hierarchical, and extensible set of class libraries, or APIs, that developers can use from the languages they are already familiar with. Today, Visual C++ developers use the Microsoft Foundation Classes, Visual J++ developers use the Windows Foundation Classes, and Visual Basic developers use the Visual Basic framework. The classes of the .NET Framework unify these different class libraries, creating a superset of their features. The result is that developers no longer have to learn multiple object models or class libraries.
By creating a common set of APIs across all programming languages, the .NET Framework enables cross-language inheritance, error handling, and debugging. In effect, all programming languages, from JScript to C++, become equal, and developers are free to choose the right language for the job.
The .NET Framework provides classes that can be called from any programming language. These classes comply with a set of naming and design guidelines that further reduce training time for developers. Some of the key class libraries are listed below:

- Web Classes - ASP.NET (controls, caching, security, session, configuration, etc.)
- Data - ADO.NET (ADO, SQL, types, etc.)
- Windows Forms
- XML Classes (XSLT, XPath, serialization, etc.)
- Enterprise Services (transactions, messaging, partitions, events, etc.)
- System Classes (collections, diagnostics, globalisation, IO, security, threading, serialization, reflection, messaging, etc.)

ASP.NET

A set of classes within the unified class library, ASP.NET provides a Web application model in the form of a set of controls and infrastructure that make it simple to build Web applications. ASP.NET comes with a set of server-side controls (sometimes called Web Forms) that mirror the typical HTML user-interface widgets (including list boxes, text boxes, and buttons), and an additional set of Web controls that are more complex (such as calendars and ad rotators). These controls actually run on the Web server and project their user interface as HTML to a browser. On


the server, the controls expose an object-oriented programming model that brings the richness of object-oriented programming to the Web developer. One important feature of these controls is that they can be written to adapt to client-side capabilities; the same pages can be used to target a wide range of client platforms and form factors. In other words, Web Forms controls can sniff the client that is requesting a page and return an appropriate user experience: WML for phones, HTML 3.2 for a down-level browser, or Dynamic HTML for Internet Explorer 5.5. ASP.NET also provides cluster session-state management and process recycling, which further reduce the amount of code a developer must write and increase application reliability. It uses these same concepts to enable developers to deliver software as a service: using the ASP.NET XML Web services features, developers can simply write their business logic, and the ASP.NET infrastructure will be responsible for delivering that service via SOAP and other public protocols. ASP.NET works with all development languages and tools (including Visual Basic, C++, C#, and JScript).

More information can be found at: http://msdn.microsoft.com/netframework/

IV.3 What is MOC - Message Oriented Computing

The MOC (Message Oriented Computing) approach allows for a more loosely coupled connection between various system parts and between independent systems. The focus here is on asynchronous message passing, versus the synchronous request-oriented style of component oriented computing. This is supported by message queuing with guaranteed delivery and the possibility of setting priorities.

IV.3.1 Microsoft BizTalk Framework

The BizTalk Framework provides a specification for the design and development of XML-based messaging solutions for communication between applications and organisations.
This specification builds upon standard and emerging Internet technologies such as Hypertext Transfer Protocol (HTTP), Multipurpose Internet Mail Extensions (MIME), Extensible Markup Language (XML), and Simple Object Access Protocol (SOAP). The logical implementation model for the BizTalk Framework is composed of three layers, as shown in Figure 39. The layering described here is for illustrative and explanatory purposes. As the BizTalk Framework specification definitively specifies only the wire format for BizTalk Messages and the protocol for reliable messaging, alternative logical layering may be used, provided it supports equivalent functionality, without affecting compliance with this specification. These logical layers include the application (and appropriate adapters), the BFC Server, and transport. The application is the ultimate source and destination of the content of BizTalk Messages, and communicates with other applications by sending Business Documents back and forth through BFC Servers. Multiple BFC Servers communicate with one another over a variety of protocols, such as HTTP, SMTP, and Microsoft Message Queue (MSMQ). The BizTalk Framework does not prescribe what these transport protocols are, and is independent of the implementation details of each.



The application is responsible for generating the Business Documents and any attachments to be transmitted to its peer(s) and submitting them to the BFC Server. The responsibility for wrapping the Business Documents in a BizTalk Document may rest with either the application or the BFC server, depending on the implementation of the BFC server. The server processes the document and any attachments and constructs a BizTalk Message as appropriate for the transport protocol. The BFC Server uses information contained in the BizTags to determine the correct transport-specific destination address. The server then hands the message to the transport layer for transmission to the destination BFC Server. The interfaces between the business application, the BFC Server, and the transport layer are implementation specific.
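As a rough illustration of the wrap-and-route flow just described, consider the following sketch. This is illustrative only: the BizTalk Framework defines the wire format rather than an API, so wrap(), BFCServer and the "biztags" field below are invented names.

```python
import queue

def wrap(business_document, destination):
    """Wrap a Business Document in a message carrying routing BizTags."""
    return {"biztags": {"to": destination}, "payload": business_document}

class BFCServer:
    """Hands wrapped messages to a per-destination transport queue."""
    def __init__(self):
        # destination address -> queue (a stand-in for HTTP/SMTP/MSMQ transports)
        self.transports = {}

    def submit(self, message):
        # Use the BizTags to determine the transport-specific destination.
        dest = message["biztags"]["to"]
        self.transports.setdefault(dest, queue.Queue()).put(message)

    def receive(self, destination):
        return self.transports[destination].get_nowait()

server = BFCServer()
server.submit(wrap("<PurchaseOrder/>", "mailbox://companyB"))
msg = server.receive("mailbox://companyB")
print(msg["payload"])  # <PurchaseOrder/>
```

The point of the sketch is the division of labour: the application produces the Business Document, while the (hypothetical) BFC server adds routing tags and selects a transport.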

[Figure: two nodes, each composed of an Application, a BFC Server and a Transport layer]

Figure 39 BizTalk Layer Architecture

BizTalk is being specified and implemented by Microsoft and its partners. The main product that acts as the basis for implementations of the BizTalk Framework 2.0 is the Microsoft BizTalk Server. The BizTalk Web site provides information about BizTalk including links to the Framework specification, to XML schemas that conform, and to discussion mailing lists. URL: http://www.biztalk.org

IV.3.2 ebXML Technical Architecture - BCM and BCF

IV.3.2.1 Introduction

ebXML (http://www.ebxml.org/) was established as a joint initiative of the United Nations (UN/CEFACT) and OASIS, developed with global participation for global use. This partnership brought a great deal of credibility to ebXML, representing major vendors and users in the IT industry and support from leading vertical and horizontal industry groups including RosettaNet, OTA (Open Travel Alliance) and many others. The initiative enjoyed broad industry support with hundreds of member companies, and more than 1,400 participants drawn from over 30 countries. The continuation of the standardisation process is now shared between UN/CEFACT and OASIS.


UN/CEFACT has more focus on content standardisation and Core Components, while OASIS is focusing on standards for the technical infrastructure, such as messaging and RegRep. There is, however, some competition between these two organisations, in particular related to the different approaches to modelling and frameworks being proposed. UN/CEFACT promotes BCF, the Business Collaboration Framework, based on extensions of the UMM methodology also used early in the ebXML work, while OASIS has a new initiative in BCM, the Business-Centric Methodology, which has its basis in integration architecture work from the US Army. The technical approach of ebXML can be classified as a message-oriented approach, aiming to replace earlier EDIFACT/EDI standards with similar XML messages and documents. The fundamental goal of ebXML was to create an infrastructure for a single global electronic market. ebXML is complementary to, not competitive with, existing standards, such as UN/EDIFACT, X12, etc., thus preserving much of the existing investment in these applications. Much of the focus was on the needs of smaller organizations. A plug-and-play architecture allows modular and incremental investment and development. Off-the-shelf applications, built on established open standards, enable affordable, rapidly developed implementations, even for small organisations. Several working groups were formed to address specific areas of development of the ebXML standard, such as the development of requirements, common business models, trading partner profiles, registry and repository requirements, transport and routing packages, a technical architecture for implementation, and quality review procedures. Requirements were gathered from a broad range of user community representatives, and the entire development effort was co-ordinated with a steering committee to ensure that the various parts would work well together.
The unique thing about ebXML is that, unlike most standards development efforts, it developed the entire infrastructure necessary to do electronic business. Most related standards only focus on one part, such as transport or registry. The initial project closed in May 2001. Work continues within two groups: UN/CEFACT works on issues related to business process modelling, and OASIS continues with work related to the technical infrastructure. By the end of 2004, OASIS had standardised version 2.0 of the technical infrastructure. The development of Core Components and the related information models is still evolving in UN/CEFACT. A dispute with Microsoft about technology IPR has delayed some of the progress and made the overall standards situation a bit unclear.

IV.3.2.2 The ebXML components

The technical infrastructure is composed of the following major elements:

Messaging Service: This provides a standard way to exchange business messages between organisations. It provides means to exchange a payload (which may or may not be an XML business document) reliably and securely. It also provides means to route a payload to the appropriate internal application once an organisation has received it. The messaging service specification does not dictate any particular file transport mechanism (such as SMTP, HTTP, or FTP) or network for actually exchanging the data, but is instead protocol-neutral. SOAP was finally selected as the standard for the message service.

Registry: The registry is a database of items that support doing business electronically. Technically speaking, a registry stores information about items that actually reside in a repository; the two together can be thought of as a database. Items in the repository are created, updated, or deleted


through requests made to the registry. The particular implementation of the registry/repository database is not specified; only how other applications interact with the registry (registry services interfaces) and the minimum information model (the types of information that are stored about registry items) that the registry must support. Examples of items in the registry might be XML schemas of business documents, definitions of library components for business process modelling, and trading partner agreements. An original goal of the ebXML registry was to support a fully distributed, networked set of interacting registries that would provide transparent interaction with any ebXML-compliant registry by interfacing with only one of them. Time ran out for this, and instead only a single registry is specified. A supplemental report offers a way to locate ebXML registries using UDDI.

Trading Partner Information: The Collaboration Protocol Profile (CPP) provides the definition (DTD and W3C XML schema) of an XML document that specifies the details of how an organisation is able to conduct business electronically. It specifies such items as how to locate contact and other information about the organisation, the types of network and file transport protocols it uses, network addresses, security implementations, and how it does business (a reference to a Business Process Specification). The Collaboration Protocol Agreement (CPA) specifies the details of how two organisations have agreed to conduct business electronically; it is formed by combining the CPPs of the two organisations. A CPA can be used by a software application to configure the technical details of conducting business electronically with another organisation. The CPA/CPP specification discusses the general tasks and issues in creating a CPA from two CPPs. However, for various reasons it does not specify an actual algorithm for doing this.
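Since the specification leaves the CPP-to-CPA matching algorithm open, one naive way to think about it is as an intersection of what both profiles support. The sketch below is an illustrative simplification with invented field names, not the actual CPP/CPA document format:

```python
def form_cpa(cpp_a, cpp_b):
    """Naively form a CPA by intersecting the transports two CPPs declare."""
    shared = sorted(set(cpp_a["transports"]) & set(cpp_b["transports"]))
    if not shared:
        raise ValueError("profiles share no transport protocol")
    return {"parties": (cpp_a["org"], cpp_b["org"]), "transports": shared}

cpp_a = {"org": "CompanyA", "transports": ["HTTP", "SMTP"]}
cpp_b = {"org": "CompanyB", "transports": ["HTTP", "FTP"]}
print(form_cpa(cpp_a, cpp_b))
# {'parties': ('CompanyA', 'CompanyB'), 'transports': ['HTTP']}
```

A real CPA formation step would additionally have to reconcile roles, security requirements and business process references, which is precisely why the specification stops short of prescribing an algorithm.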
Business Process Specification Schema (BPSS): The Specification Schema provides the definition (in the form of an XML DTD) of an XML document that describes how an organisation conducts its business.

In this section we focus on the architectural aspects of ebXML and its message-based architecture. The part on business process management discusses the ebXML approach to business process modelling. ebXML also developed several registries and patterns for the use of ebXML. These include the Catalog of Common Business Processes and E-Commerce Patterns. Another central deliverable related to this area is Core Components, which are intended to be the basic information elements used in building business messages.

IV.3.2.3 The Collaboration Architecture

The illustration based on the ebXML Technical Architecture Specification goes a long way toward sorting out what ebXML means for business. Company A in Figure 40 below will first review the contents of an ebXML Registry, especially the Core Library that may be downloaded or viewed there. The Core Library (and maybe other registered Business Processes) will allow Company A to determine the requirements for its own implementation of ebXML (and whether ebXML is appropriate for its business needs). There is only one strong link between the process architecture and the product architecture, and that is the Business Process Specification Schema, which is a machine-interpretable encoding of a business process. It is compliant with the UMM meta-model, but does not necessarily need to be developed using the full UMM process.



Figure 40 High-level overview of ebXML interaction between two companies

Based on a review of the information available from an ebXML Registry, Company A can build or buy an ebXML implementation suitable for its anticipated ebXML transactions. The hope of the ebXML initiative is that vendors will support all of the elements of ebXML. At that point, an "ebXML system" might be little more than a pre-packaged desktop application. Or maybe, more realistically, the ebXML system will at least be as manageable as a commercial database system (which still needs a DBA). Figure 40 suggests that the hypothetical Company B uses something like this pre-packaged application. Either way, the next step is for Company A to create and register a CPP with the Registry. Company A might wish to contribute new Business Processes to the Registry, or simply reference available ones. The CPP will contain the information necessary for a potential partner to determine the business roles in which Company A is interested, and the type of protocols it is willing to engage in for these roles. Once Company A is registered, Company B can look at Company A's CPP to determine that it is compatible with Company B's CPP and requirements. At that point, Company B should be able to negotiate a CPA automatically with Company A, based on the conformance of the CPPs, plus agreement protocols given as ebXML standards or recommendations. Finally, the two companies begin actual transactions. These transactions are likely to involve Business Messages conforming to further ebXML standards and recommendations. At some point in all of this, however, "real-world" activities will probably occur (for example, the shipment of goods from one place to another, or the rendering of services). ebXML will have helped in agreeing to, monitoring, and verifying these real-world activities. Of course, in our "information economy," a


lot of what goes on might stay within the realm of ebXML -- maybe everything within a particular business relationship.

Figure 41 ebXML Business Interactions and use of Repository

The user will extract and transform the necessary information from an existing Business Process and Information Model. Associated production rules could aid in creating an XML version of a Business Process Specification. Alternatively, a user would use an XML-based tool to produce the XML version directly. Production rules could then aid in converting it into XMI, so that it could be loaded into a UML tool, if required. In either case, the XML version of the Business Process Specification gets stored in the ebXML repository and registered in the ebXML registry for future retrieval. The Business Process Specification would be registered using classifiers derived during its design.

IV.4 Interoperability of Component-oriented and Message-oriented systems

Communication interoperability can be discussed by comparing how various communication approaches can interoperate with each other. The evolution of SOAP, with its request-reply pair, clearly shows that a synchronous request call can be implemented by a pair of request and reply messages, and that event/notification interactions can also be based on messages. When trying to achieve interoperability between different communication approaches, this might be used as part of bridge connections from one system to another.
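As a hedged sketch of this bridging idea, a synchronous call can be emulated by two one-way messages matched through a correlation identifier. All names below are illustrative; in a real bridge the "remote side" would sit behind a transport rather than a local function call:

```python
import queue
import uuid

requests, replies = queue.Queue(), queue.Queue()

def serve_one():
    """Stand-in for the remote side: consume a request, emit a reply."""
    msg = requests.get()
    replies.put({"corr_id": msg["corr_id"], "body": msg["body"].upper()})

def synchronous_call(payload):
    corr_id = str(uuid.uuid4())
    requests.put({"corr_id": corr_id, "body": payload})  # request message
    serve_one()                                          # remote processing
    reply = replies.get()                                # wait for the reply message
    assert reply["corr_id"] == corr_id                   # correlate the pair
    return reply["body"]

print(synchronous_call("ping"))  # PING
```

The correlation identifier is what lets an asynchronous messaging layer present a request-reply face to a component-oriented caller.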



The challenges of interoperability are at least twofold: one aspect is component integration and configuration to design and implement component-oriented systems; the other is the integration of existing (legacy) systems in the context of business-to-business and enterprise integration. For component interoperability the following topics have to be addressed:
- Component orchestration
- Infrastructure
- Design methodologies
- Component life-cycle
Component orchestration has to achieve interoperability at the data (messages), functionality and process levels. Interface, usage and process models are important tools to gain component interoperability. The available infrastructures (run-time environments) are dominated by the J2EE and .NET platforms. Components implemented for one of these platforms can hardly be migrated to the other. The driving companies behind J2EE and .NET, like Sun, IBM, Oracle, Microsoft etc., support Web Services as the main bridge between components instantiated in these environments. There are several design methodologies applied for component-based software systems. Most methodologies rely on UML for modelling. That helps to understand the design but still allows different interpretations of interface semantics. The component life cycle is influenced by the methodology and the business case.



IV.4.1 Communication Model/Services - Description / Publication

As a principal overview, there are two ways of publishing and selecting services: proactive and reactive. A reactive approach is one where the call to the service is transmitted to a potentially large number of components that may or may not decide to react and satisfy it. A proactive approach is one where the service is declared by the component that offers it. The publish/subscribe approaches are typically reactive; they consist in letting programs/components declare their interest in a certain type of events (subscribe) and having those events distributed to all interested components each time they are emitted (published) [EFGK02]. This paradigm benefits from having different commercial products and research directions. According to Eugster et al. [EFGK02], a first interesting classification may be made among types of publish/subscribe systems: topic-based systems, content-based systems and type-based systems. Topic-based systems [GBCvR93] only allow events to be selected by a keyword (possibly including wildcards). Content-based systems [RW97, SBC+98, FHA99] support more complex selection mechanisms (e.g. including variable comparison and template matching). Type-based systems [EGD01] support typed messages in the events they propose. A second classification may be made according to the type of treatment applied when receiving an event. Events may be treated as messages or invocations. A message-like treatment [HBS+02, EGS00] of events as records of different message types and properties may typically be done after receiving the messages, as a filtering action. An invocation-like treatment of events is provided by some publish/subscribe systems [Ses97, HL02]. It generally consists in providing a set of objects that correspond to an interface for treating events. When an event is redirected to the system, a method corresponding to the event type is called on the object.
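The topic-based and content-based styles described above can be sketched in a few lines: a subscription names a topic, and may additionally carry a predicate over the event's content. The broker and its interface below are invented for illustration, not any particular product:

```python
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)  # topic -> [(predicate, callback)]

    def subscribe(self, topic, callback, predicate=lambda event: True):
        self.subs[topic].append((predicate, callback))

    def publish(self, topic, event):
        for predicate, callback in self.subs[topic]:
            if predicate(event):
                callback(event)  # invocation-like delivery of the event

received = []
broker = Broker()
# Topic-based selection ("orders") refined by a content-based predicate.
broker.subscribe("orders", received.append, predicate=lambda e: e["amount"] > 100)
broker.publish("orders", {"amount": 50})   # filtered out by the predicate
broker.publish("orders", {"amount": 500})  # delivered
print(received)  # [{'amount': 500}]
```

Note how publisher and subscriber never hold references to one another, which is the decoupling-in-knowledge property the text emphasises.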
In the publish/subscribe paradigm, decoupling is a fundamental point, as it lets programmers build potentially large distributed systems; however, this type of system does not allow decoupling in time between the different components. Another type of reactive approach is coordination models/tools. A first collection of examples are coordination media like Linda, Jada [CR97], Klaim [DFP97, DFP98], X-Klaim [BDFP98], JavaSpaces [W+98], T-Spaces [WMLF98], SecOS [BOV99] or Lana spaces [BRP02]. A coordination medium is conceptually a centralized repository that contains tuples of values. Schematically, a tuple may be retrieved using a pattern tuple that matches the retrieved tuple. First developed as a tool for parallel computing, this type of model has the advantage of providing decoupling in both time and knowledge. As a general comment, reactive approaches fail to provide the complex description languages needed for the interoperability of components belonging to very different paradigms. On the contrary, proactive approaches, being widely supported by industries willing to reuse already existing software at large, provide such infrastructures. In proactive approaches like Enterprise JavaBeans [Sun97] and CORBA [OMG96], services are referenced through a naming service (namely JNDI [LS00] for JavaBeans or the naming and trading services [Obj01] for CORBA) that has mainly the same functionality: a component willing to use another, unknown one must search through these services and, once found, it has to


decide if it is the right one, given some specific information, and then use it in a potentially long-term communication process. However, automating the choice of the code that is effectively called is a possibility still being explored. HydroJ [LLC03] is one of those systems that is a little more evolved. HydroJ contains two levels: (1) inter-component calls are resolved through the use of a standard service architecture (a yellow-page service returns a list of services), and (2) a dynamically chosen method. HydroJ allows the caller to choose which component is called through the use of a yellow-page service. When a call is made on a chosen component, the most adapted method is chosen and called. This choice is made through the use of a pattern-matching language based on semi-structured data. In the same vein, other completely automated redirection approaches have been tried, like LuckyJ [ODMS03]. Another possibility is to redirect the choice of components. Locators [HEWB03], Keris [Zen02], GRUMPS [EAB+03] and other works on component evolution platforms [IJVB02, SM02] have in common that the component infrastructure is responsible for redirecting, if necessary, calls to the correct component. This is what can be called the composition level. Schematically, the first level is used to define the functionalities, while the second level allows the reference resolutions to be modified. In recent systems, the composition level is more and more automated in order to provide automatic component (functionality) redirection and discovery, thus freeing the programmer from writing the related code. This means that the inter-component semantics and types of interfaces are generally preserved. Nevertheless, new types of reference resolutions arise [IJVB02] that allow more flexibility through the use of properties to guide the reference resolution mechanism. Few service description languages have been described in the research community.
As an example, WSDL [CCMW01] is an XML specification for describing web services. The description is a document that binds some parts to other documents (like protocol descriptions). The goal of WSDL is mainly to allow programmers to describe services. Once a service is found, active components have to decide if the described service corresponds to a valid possibility or not. UDDI [udd00] fills the gap between describing and finding a service. It constitutes a phone book used to find services. Services described using WSDL can be published and retrieved through UDDI [CER01]. Adding WSCL to the whole system [BKL01] allows future interactions to be described. Services either match or do not match: there is no way to quantify the quality of matching. The mechanism is also not anonymous: components have references to each other once the server and the client have been defined. Web services are further discussed in part III on service-oriented computing. This also includes a description of the use of DAML-S for service descriptions.
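The proactive describe/publish/find cycle can be caricatured in a few lines. This is a toy in-memory registry with invented names, not the UDDI API; its only purpose is to show publication by the provider and keyword lookup plus inspection by the client before binding:

```python
registry = []

def publish(name, description, endpoint):
    """Provider proactively declares the service it offers."""
    registry.append({"name": name, "description": description, "endpoint": endpoint})

def find(keyword):
    """Client searches the registry; it must still inspect each match."""
    return [s for s in registry if keyword.lower() in s["description"].lower()]

publish("QuoteService", "Returns stock quotes over SOAP", "http://example.org/quotes")
matches = find("stock")
print(matches[0]["endpoint"])  # http://example.org/quotes
```

The binary match/no-match outcome of the lookup mirrors the limitation noted above: nothing in this scheme quantifies how well a found service fits the client's needs.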

IV.5 Methodologies for Components

Component-oriented architectures can be understood as an answer to the problem of software complexity. By composing larger entities than single objects, an application becomes an assembly of building blocks. There are a number of methodologies developed by industry and academia, mostly derived from methodologies for object-oriented design [BSJ04]. The different methodologies differ in their background and application domain, their way of working, their usage of modelling technologies, tool support and target infrastructure.


IV.5.1 Catalysis

Catalysis was developed at the University of Brighton [DW99]. Catalysis addresses the modeling and construction of open distributed systems. It covers the temporal evolution of distributed systems by describing component and service instantiation and de-instantiation over time. Catalysis is based on the construction of a business model and applies packaging, refinement and decomposition. Catalysis therefore covers the mapping between business processes and their implementation. The type model used by Catalysis plays an important role.

IV.5.2 CADA

CADA is an amalgamation of a number of different design methodologies to combine best-practice methods [HV00]. CADA is based on:
- Catalysis [DW99]
- UML
- DSDM: Dynamic Systems Development Method (www.dsdm.org)
- CRC (Class Responsibility Collaborations) [BS97]
It incorporates Catalysis through extended usage of UML implementation diagrams and adds business modeling concepts. CADA offers a number of methodological building blocks, which can be used by the designer on demand for various development tasks:
- Business Models
- Context Model
- System Model
- Deployment Model
Within CADA the deliverables for these design activities are defined. The component relationships are specified using use case models and activity diagrams.

IV.5.3 Select Perspective for CBSE

Select Perspective is the methodology used by the Select toolset of Princeton Softech [AP03], but it is also supported by other tools like Rational. Select Perspective scopes on application development by focusing on development results and change management. It capitalizes on previous methodologies such as Structured Development, RAD and object-oriented development. Select Perspective has an underlying hierarchical development method. It is based on phases with assigned core activities. Each activity has its own standard models. The phases are align, architect and assemble. Each phase refines the previous ones.
The system architecture and implementation details are elaborated in parallel. Select Perspective supports parallel development through its own method, called LUCID. The core of all development steps is a tight linkage of the application to the business processes it is to support.



IV.5.4 Rational Unified Process

The Rational Unified Process (RUP) [Kr99] is the most popular design process; it is the software engineering method of the Rational toolset from IBM. It is an iterative, object-oriented and tool-supported method which can be used for a number of different software development projects. While UML addresses the applied models, RUP concentrates on the underlying development process. The process is divided into four phases:
- inception
- elaboration
- construction
- transition
These phases are supported by a number of workflows that support and structure the cooperation of the engineers. These workflows group different kinds of activities, such as:
- Business modeling
- Requirements modeling
- Analysis and design
- Implementation
- Test
- Deployment
plus supporting workflows like project management and configuration and change management. Being based on UML, RUP does not introduce new modelling concepts but addresses their application in the design process. RUP therefore gives guidance on how to apply models in the design process to derive component specifications from business models.

IV.6 State of the Art Research projects CBSE

A number of research projects related to CBSE (Component Based Software Engineering) have been undertaken in the 5th and 6th Framework programmes. A network project, CBSENet, at www.cbsenet.org, provides an overview of activities in this area.

COMPETE

The main goal of the ESPRIT project COMPETE is to provide IT-based methodologies and tools to help companies structured as Extended Enterprises to cope with the challenges of globalization, deregulation and contracting life cycles, combining fast decision making and flexibility to change. Such methodologies embrace product function, market and life-cycle analysis, together with the identification and evaluation of organisational and individual competencies.
COMPETE has developed a software platform which integrates process modeling methods, competence management, project management and workflow through a common data model shared by all company departments.

COSMOS

The aim of the COSMOS project was to improve the competitiveness of mould makers involved in the plastic injection moulding business by re-engineering the mould development process, developing specially tuned SW tools and integrating these two elements into a suitable, dedicated environment. The COSMOS project developed these structured methodologies and the related


SW tools in order to introduce into the market an integrated environment called MoldIntegra, able to operate in a Concurrent Engineering way, using brand new tailored tools developed by mould makers for mould makers in every phase of mould development. At the following web address, http://www.cordis.lu/ist/ka4/tesss/projects.htm, descriptions of a number of projects can be found; a few examples are given below:

PROJECT NAME: COMBINE
FUNDING SOURCE: European Union (FP6)
Web page: http://www.opengroup.org/combine/overview.htm
Brief Description: Component Based Interoperable Enterprise System Development

PROJECT NAME: AGEDIS
FUNDING SOURCE: IST-1999-20218
Web page: http://www.agedis.de/
Brief Description: Automated Generation and Execution of Test Suites for Distributed Component based Software

PROJECT NAME: BANKSEC
FUNDING SOURCE: IST-1999-20711
Web page: http://www.atc.gr/banksec/
Brief Description: Secure Banking Application Assembly using a component based approach

IV.7 Conclusions

The areas of component- and message-oriented computing will continue to serve as an implementation basis for interoperability architectures. In particular, service-oriented architectures as described in part III will be realized using such technology as an implementation platform. Technical interoperability for communication between these can be achieved using bridge technology; this will, however, typically not include interoperability of non-functional aspects as described in part VII.



V Agent-Oriented Computing
V.1 Introduction

The primary goal of multi-agent systems is to obtain coherent, adaptive and robust behavior through the interaction of a large number of software components, possibly heterogeneous and individually unreliable [8].
V.1.1 Historical context

In the beginning, software systems were considered as entities that receive data as input and transform these data in order to deliver results as output. But as problems became larger and larger, it was necessary to decompose such systems into several less complex sub-systems. Their capacity to solve local problems and to interact provided a mechanism for solving more complex global problems. This is the approach taken by the DAI (Distributed Artificial Intelligence) community. The multi-agent community, stemming from the DAI community, considers agent interaction as the key for systems that need to be capable of dealing with complex tasks. The activities of a system are then considered as the result of interactions between various concurrent and autonomous agents. These agents work within groups, or societies, using cooperation, concurrency and conflict-solving mechanisms, in order to fulfill the tasks of the system [18]. Nevertheless, DAI is not the only origin of multi-agent systems; Artificial Life also contributed to their emergence. The combination of these two fields provided both a cognitive and a reactive dimension. Indeed, DAI is attached to the concept of intelligence as symbolic reasoning, while Artificial Life is attached to the concepts of autonomy, behavior and viability.
V.1.2 What is an agent?

V.1.2.1 Definition

According to M. Wooldridge, an agent is a software system able to act autonomously and flexibly in a changing environment [72]. According to G. Weiss [62], an agent is a computational entity, like a program or a robot, that perceives and acts autonomously on its environment.

V.1.2.2 Characteristics

According to M. Wooldridge [72], an agent's main characteristics are autonomy, reactivity, pro-activity and social ability.

Autonomy: An agent is able to take decisions without the intervention of a human or another agent.

Reactivity: An agent is able to perceive its environment and keep a constant link with it, in order to deal with changes within this environment.

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

Pro-activity An agent should not only be directed by events generated by its environment; it must also take initiatives, following its own objectives.
Social ability An agent must be seen as a social being that is integrated in a society, in which tasks, resources and roles are distributed among the agents.

V.1.3 Types of agents


There are two main "schools" inside the multi-agent community. The first one, called the "cognitive school", considers agents as entities that are basically intelligent. The second one, called the "reactive school", considers agents as very simple entities that react directly to changes in their environment [70]. Cognitive agents, or intelligent agents, are able to work independently on certain tasks, thanks to their reasoning capabilities. They also sometimes need to coordinate their tasks and negotiate on certain goals. This approach is directly influenced by DAI. Reactive agents have no individual intelligence, and are based on Artificial Life models. They only react to the stimuli created by the environment and by the other agents. In reactive systems, it is the whole system that is considered to have an intelligent behavior, not the agent itself (as in an ant colony, for example). The different schools of thought, as well as the different needs of applications, have led to different agent architectures. The architecture of a single agent can be one of the following:
Reactive where the agent responds to stimuli from its environment. The behavior of the agent is stimulus-response; the agent does not contain any internal model of its environment and does not deliberate on its actions.
Deliberative where the agent contains an internal model of its environment and reasons about its world to decide upon its actions.
Hybrid a combination of reactive and deliberative agent architectures, where the agent contains a reactive component as well as a deliberative component. Hybrid agents are capable of behaving reactively as well as reasoning about their environment.
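The distinction between the three architectures can be illustrated with a small sketch. All class and rule names below are invented for illustration; the hybrid agent tries its reactive stimulus-response rules first and falls back to deliberation over an internal world model:

```python
class HybridAgent:
    def __init__(self):
        # Reactive layer: direct stimulus -> response mappings, no world model.
        self.reactive_rules = {"obstacle": "turn", "threat": "flee"}
        # Deliberative layer: an internal model of the environment.
        self.world_model = {}

    def act(self, stimulus):
        # The reactive component responds immediately if a rule matches.
        if stimulus in self.reactive_rules:
            return self.reactive_rules[stimulus]
        # Otherwise the deliberative component updates its model and reasons.
        self.world_model[stimulus] = True
        return self.deliberate(stimulus)

    def deliberate(self, stimulus):
        # Placeholder reasoning step over the internal model.
        return "plan-around:" + stimulus


agent = HybridAgent()
print(agent.act("obstacle"))   # reactive response: turn
print(agent.act("new-area"))   # deliberative response: plan-around:new-area
```

A purely reactive agent would consist of the rule table alone; a purely deliberative agent would consist of the world model and reasoning step alone.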

V.1.4 Agents vs Objects


Are agents merely objects endowed with more (smarter) functionality? Many researchers question the properties an object must possess in order to be defined as an agent. Like agents, objects are characterized by their behavior, by the state they are in, and by the fact that they communicate through simple message passing. The first difference concerns the control of their behavior. Objects do not have any control over their behavior, whereas agents control theirs in order to fulfill their goals. The second, and most significant, difference resides at the level of autonomy and flexibility. The use of private and public methods does not permit objects to control the application of these methods, whereas agents are able to decide whether they want to honor a request, according to their own goals.


The third difference concerns the activity of agents and objects. Agents are active, in the sense that their execution is an infinite loop in which they observe their environment, update their state, and select actions to perform. Objects only become active when another object invokes a method on them. The last difference is that agents are "able" to make incorrect decisions, and to learn from these errors, whereas objects cannot make such erroneous decisions: the errors they commit are only programming and design errors, from which they cannot learn anything.
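The active observe/update/act loop described above, and the agent's freedom to refuse a request, can be sketched as follows. The names are invented for this illustration, and a finite list of percepts stands in for the agent's infinite loop:

```python
class ActiveAgent:
    def __init__(self, goal):
        self.goal = goal
        self.state = {}

    def observe(self, environment):
        # Perceive the environment and update internal state.
        self.state.update(environment)

    def select_action(self):
        # Unlike an object, the agent decides whether to honor a request,
        # according to its own goal.
        if self.state.get("request") == self.goal:
            return "accept"
        return "reject"

    def run(self, percepts):
        # Stands in for the agent's infinite observe / update / act loop.
        actions = []
        for environment in percepts:
            self.observe(environment)
            actions.append(self.select_action())
        return actions


agent = ActiveAgent(goal="deliver")
print(agent.run([{"request": "deliver"}, {"request": "self-destruct"}]))
# ['accept', 'reject']
```

An object, by contrast, would simply execute whatever method was invoked on it.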
V.1.5 What is a multi-agent system?

V.1.5.1 Definition
A multi-agent system is a set of intelligent agents interacting in a common environment, in order to fulfill a set of goals or to accomplish a set of actions [70].
V.1.5.2 Characteristics
A multi-agent system is not dedicated to the resolution of one particular problem, and it has no predefined organization: agents reason about their organization depending on the problem to solve. Agents inside a multi-agent system do not need to know the global goal in order to act; they are autonomous. Data in a multi-agent system is decentralized, i.e. distributed among the agents. Multi-agent systems are characterized by the coordination, communication and relations that exist between their agents. Coordination may be either cooperation, in order to achieve a common goal, or negotiation, in order to satisfy each agent's interest optimally. Communication is achieved through several protocols, such as FIPA ACL and KQML.
V.1.5.3 Application examples
Here are some examples of real-life applications of multi-agent systems:
Robocup Rescue In the field of multi-agent simulation, the Robocup Rescue competition stems from the inefficiency of rescue teams in the case of important natural disasters, such as the earthquake that happened in Kobe, Japan. The goal of this competition is thus to improve, in the long run, the efficiency of rescue teams during major civil disasters. Multi-agent systems seem to be the best solution to simulate the management of a large number of heterogeneous agents in a hostile environment. The simulation, based on aerial photographs, consists in managing a set of intelligent agents (such as firemen, victims, policemen, headquarters, etc.), each of them having their own capabilities, in order to deal optimally with the disaster. http://www.rescuesystem.org/robocuprescue/
GUARDIAN The GUARDIAN system's goal is to manage the post-operative care of patients in a surgical intensive care unit. Multi-agent systems serve here to model the cooperation


between several surgeons and nurses. http://ksl-web.stanford.edu/projects/guardian/
ADEPT ADEPT, or Advanced Decision Environment for Process Tasks, is a project whose aim is to develop a software system to assist enterprises in their business processes. The system considers a business process as a set of agents that negotiate and supply services, and it enables enterprise managers to have better knowledge of the information coming from their departments, and to better adapt their decision making. http://citeseer.ist.psu.edu/jennings96adept.html
A good overview of agent applications is available from [32] and [81].
V.2 Agent Design - Considering agents as a new design metaphor

This section will present methods and tools whose goal is to permit the development of software systems using the agent abstraction.
V.2.1 Agent-oriented design languages and methodologies

Most of the agent-oriented design languages and methodologies are based on already existing methods, mostly object-oriented ones. This is the case for the first three methods presented here: Agent-UML [6], AAII [34] and Gaia [71]. The last one, MAS-CommonKADS [30,28], is an adaptation of the knowledge engineering method CommonKADS [47]. By presenting these four methods briefly, we try to cover a large scope of the different approaches used to model multi-agent systems. Other surveys of agent-oriented design methodologies can be found in [29,10]. Other design languages and methodologies include Message [7], MaSE [68], Prometheus [43] and Tropos [20,60]. Two workshops dedicated to this area of work are:
- the International Workshop on Agent-oriented Software Engineering (AOSE), held at AAMAS, the International Joint Conference on Autonomous Agents and Multi-Agent Systems. Links to recent workshops are available at http://www.csc.liv.ac.uk/~mjw/aose/
- the International Bi-conference Workshop on Agent-oriented Information Systems (AOIS), held at AAMAS, the International Joint Conference on Autonomous Agents and Multi-Agent Systems, and at CAiSE, the International Conference on Advanced Information Systems Engineering.
V.2.1.1 Agent-UML
Agent-UML [6] is an extension of the Unified Modeling Language (UML), resulting from the cooperation between the Object Management Group (OMG) and the Foundation for Intelligent Physical Agents (FIPA), aiming to increase the acceptance of agent technology in industry.


The main parts of the Agent-UML modeling language are the agent class diagram and the protocol diagram, which is a new type of UML diagram, extending the state and sequence diagrams in several ways. The agent class diagram is an extension of the UML class diagram. Agents are specified using classes with the <<agent>> stereotype. They have three characteristics:
1. identifier: this characteristic uniquely identifies each agent in the multiagent system.
2. role: a role defines the behavior of an agent within the society, for instance Seller or Buyer. Agents can have multiple roles in the multiagent system, or they can change from role to role during the execution of the multiagent system.
3. organization: agents evolving in multiagent systems belong to one or several organizations. These organizations define the agent roles and the relationships between these roles. Organizations generally correspond to human or animal organizations such as hierarchies, markets, groups of interest or herds.
There are also classes that model capabilities, services and protocols (using respectively the <<capability>>, <<service>> and <<protocol>> stereotypes), each of them also having their own characteristics (such as input and output for services, type and ontology for services, etc.). The protocol diagram combines the notation of the sequence diagram and the state diagram for the specification of interaction protocols. The figure below shows an example of such a diagram, representing an English-auction protocol for surplus flight tickets. This diagram uses the concepts of agent roles, agent lifelines, nested and interleaved protocols, extended semantics of messages, and protocol templates. The reader may find more details in [6].
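The three characteristics of an <<agent>>-stereotyped class can be pictured with a small data sketch. The class and field names below are illustrative only, not part of the Agent-UML specification:

```python
from dataclasses import dataclass, field


@dataclass
class AgentClass:
    identifier: str                           # unique within the multiagent system
    roles: list = field(default_factory=list)  # e.g. "Seller", "Buyer"
    organization: str = ""                    # e.g. a market or a hierarchy

    def change_role(self, old, new):
        # Agents may change from role to role during execution.
        self.roles.remove(old)
        self.roles.append(new)


a = AgentClass("agent-1", roles=["Buyer"], organization="english-auction")
a.change_role("Buyer", "Seller")
print(a.roles)   # ['Seller']
```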



Figure 42 Agent UML Protocol diagram
V.2.1.2 AAII
The AAII methodology [34] is also based on object-oriented modeling, but its goal is to model BDI agents, i.e. agents that have Beliefs, Desires and Intentions (see section 4). More precisely, the AAII methodology distinguishes two levels of modeling, or viewpoints:
the external viewpoint, which focuses on decomposing the system into agents, and describing their responsibilities, provided services, interactions, etc.
the internal viewpoint, which focuses on defining the architecture of each agent, in terms of beliefs, goals and plans.
1. External viewpoint
The external viewpoint is described by two models:


- the agent model, which describes the hierarchical relations between agent classes, and also the instances that may exist in the system. The formalism is similar to UML class diagrams.
- the interaction model, which describes the services, responsibilities and interactions of an agent class, and the syntax and semantics of the messages exchanged during interactions. Nevertheless, no particular language is proposed by the methodology for this model.
2. Internal viewpoint
The internal viewpoint is described using three different models:
- the belief model, which expresses the beliefs that an agent may have by means of a class diagram, from which first-order logic predicates may be derived.
- the goal model, which describes the set of goals an agent may have. Goals are expressed with a modal operator (describing whether it is an achievement, a test, or a verify goal) and a predicate from the belief model.
- the plan model, which describes how to achieve goals, using a formalism similar to the UML statechart.
These models are progressively defined in the AAII methodology, and can then be easily translated into an agent-oriented programming language based on the BDI formalism, like 3APL or AgentSpeak(L). The reader may find further explanations on the AAII methodology in [34].
V.2.1.3 Gaia
The Gaia methodology [71] is another agent-oriented development methodology that is based on object-oriented modeling. Nevertheless, Gaia is more generic than the AAII methodology, because it is independent from the BDI formalism. It is also much more focused on agent interactions, and is thus better suited to the development of reactive systems with a large number of agents. The reader may find more details about this methodology in [71].
V.2.1.4 MAS-CommonKADS
The MAS-CommonKADS [30,28] methodology is based on Knowledge Engineering, and is an adaptation of the CommonKADS [47] methodology to multi-agent systems.
The MAS-CommonKADS methodology defines seven models (the agent model, the task model, the expertise model, the organization model, the coordination model, the communication model and the design model) through three steps: conceptualization, analysis and design. Compared to CommonKADS, the only addition is the coordination model. The conceptualization step describes a first sketch of the system via use cases and scenarios described in Message Sequence Charts. The analysis step describes six of these models:
the agent model: describes the different characteristics of the agents of the system, such as their


names, types, roles, services, goals, reasoning capabilities and constraints.
the coordination model: describes the different interactions of the system (using MSC diagrams), and also the high-level interactions (in HMSC diagrams).
the organization model: describes the static relations between the agents of the system; this is achieved using the OMT object model.
the expertise model: describes the knowledge of the system, i.e. the inferences over the domain, the reasoning of one agent, and the inferences over the environment.
the task model: in this model, the tasks of the system are decomposed using a tree structure.
the communication model: describes the human/computer interactions of the system.
Finally, the design step has the goal of transforming the previous models in order to obtain an implemented system. More details about this methodology can be found in [30,28].
V.2.1.5 Agent Methodologies for Enterprise Integration
Kendall et al. propose a methodology for developing agent-based systems for enterprise integration. They derive an agent-oriented system corresponding to an IDEF (ICAM definition) model by mapping IDEF concepts to an agent-oriented system using use cases [86]. An actor (or a resource) in the IDEF model is mapped onto an agent, a function with a control output maps onto an agent's goals and plan, the input from an actor maps onto beliefs, and multiple actors per function are mapped onto collaborations.
V.2.1.6 Agent-based Modeling
Agent-orientation, or agent-oriented concepts, allows seamless integration of business rules modeling and information modeling, and attempts to capture the dynamic aspects of business situations [87]. The author of [88] distinguishes between passive and active entities: passive entities are objects and active entities are agents. It is argued that object orientation does not capture communication and interaction in the high-level sense of business processes.
An agent-based approach for the design of organizational information systems, called Agent-Object-Relationship (AOR), was proposed in [88]. In this approach, an organization is modeled as a complex "institutional agent" defining the rights and duties of its subagents, which act on its behalf. The information items in the system are viewed as beliefs or knowledge, together with mental notions such as commitments and claims. The author also proposed a graphical notation for modeling. The notion of an agent is assumed to be an entity that senses or perceives something, reasons about it, and then acts. Kendall argues that role modeling is appropriate for intelligent agent systems, as roles emphasize social or interactive behavior and work towards accomplishing a goal, and role models can be patterns that can be documented and shared [89]. Based on characteristics of agents such as goal-oriented behavior and social ability, a role is defined to have a context, responsibilities (services, tasks and goals), collaborators (other roles it interacts with), external interfaces (access to services),


relationships to other roles (aggregation, specialization, etc.), expertise (domain ontology, task models), coordination and negotiation capabilities, and learning capabilities. This approach is important to the area of Virtual Enterprises (VE), as such patterns of roles and role models can be maintained in a reference library that can be used by VE creators to enable fast and efficient formation of VEs.
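Several of the methodologies above (AAII in particular, and Kendall's IDEF mapping of functions onto goals, plans and beliefs) rest on the BDI decomposition. A minimal, purely illustrative sketch of that decomposition, with invented names, is:

```python
class BDIAgent:
    def __init__(self):
        self.beliefs = set()    # belief model: facts the agent holds true
        self.goals = []         # goal model: achievement goals
        self.plans = {}         # plan model: goal -> list of actions

    def achieved(self, goal):
        # An achievement goal is satisfied when it appears in the beliefs.
        return goal in self.beliefs

    def step(self):
        # Pick the first unachieved goal that has a plan, and run the plan.
        for goal in self.goals:
            if not self.achieved(goal) and goal in self.plans:
                for action in self.plans[goal]:
                    action(self)    # actions may add new beliefs
                return goal
        return None


agent = BDIAgent()
agent.goals = ["door_open"]
agent.plans["door_open"] = [lambda a: a.beliefs.add("door_open")]
print(agent.step())                  # runs the plan for "door_open"
print(agent.achieved("door_open"))   # True
```

Languages such as 3APL or AgentSpeak(L) provide far richer plan-selection and belief-revision machinery; this sketch shows only the belief / goal / plan separation itself.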

V.2.2 Mobile Agents
V.2.2.1 Overview
During the last decade, research and development in the field of agents has led to applying the agent paradigm in the context of the Internet. This research resulted in the sub-field of mobile agent systems [VT:97:ECOOPWS]. The goal of this domain is to exploit the property of mobility in order to improve the quality of service that agents offer. The original reason for agent mobility is that agents are meant to be autonomous: every agent is a program that encapsulates its data and its behavior. Mobile agents are agents whose site of execution can change during the course of their lifetime. A mobile agent is a program whose execution can be halted, with its execution state being packaged into an envelope. Execution of the agent only continues when the envelope is opened. The envelope can be seen as a snapshot of the agent's execution at the point in time at which the envelope was created. The envelope itself is simply a blob of data, like a file, that can be exchanged between programs, perhaps over the network. When the envelope is opened on a different site from that on which it was created, the agent continues its execution on this new site. This creates the effect of a program (agent) having moved (under its own or external control) from one site to another. The old paradigm is Remote Procedure Calls, which enables one computer to call procedures on another computer. Here, a number of messages are transmitted across the network, which requires the two computers to be connected during the entire process. The new paradigm is Remote Programming, where one computer not only calls procedures on another computer, but also provides the procedures. Here, the data as well as the program is transferred to the other computer, execution takes place at the remote computer, and the results are transferred back. Thus, the number of messages transmitted is reduced tremendously.
It also does not require the two computers to be connected all the time. Mobile agent systems have several advantages motivating their use for Internet applications; these advantages are discussed in the next section.
V.2.2.2 Motivations
The motivation for mobile agents stems from the increase in the size of the network, the increased traffic across the network, and the increase in communication and computational power. Solutions to specific situations where mobile agents can be used could be replaced by a combination of more traditional techniques, but the principal difference with such solutions is that mobile agents provide an integrated framework built by design to efficiently implement robust


applications having to deal with mobility issues. The principal reasons why agent mobility is a useful feature for applications are:
Latency reduction and bandwidth conservation A traditional dialogue between a client and a server can involve the exchange of a number of messages over the network that connects them. Network connections can be expensive and unreliable, and constitute a performance bottleneck for the application. In the mobile agent approach, a client encapsulates its logic within a mobile agent and sends this agent to execute on the server machine, or on a machine close to the server. In this way, the interactions between the client and server happen independently of the network, and do not suffer from problems with the network. The network is only necessary for sending the agent and for communicating the result back to the client.
Disconnected operation The approach just outlined is particularly useful when the client machine often disconnects from the network. When it reconnects, the machine may find a radically different network connection, or may not find needed data that could only be found in its past neighbourhood. By moving to the place where the data is located, such losses are avoided.
Deploying new services The fundamental feature of mobile agents is that programs can be flexibly and dynamically deployed onto one machine from another using the network. This is useful for deploying services on machines that are not easily accessible -- such as machines in a different location to where the owner is -- or for deploying or replacing application functionalities on a huge scale. This feature has been exploited for active networks, where network protocols and services are deployed on routers. The mobile agent approach allows deployment to occur automatically, which is a significant improvement on manual remote shell tools.
Load balancing and exploiting external computing power A more recently discovered potential of the mobile agent approach is to exploit computing power that exists elsewhere in the network. The client simply deploys an agent containing its computation on a machine that is currently available for this. This approach of exploiting extra computing power on the Internet is becoming increasingly common. The bottom line is that there are enough computers connected to the Internet that do nothing, enough of the time, for their power to be harnessed for other tasks. This has been seen in the peer-to-peer community and the Grid. SETI@home, for instance, is a peer-to-peer system that makes use of available PCs to compute functions that analyse radio signals from outer space. Entropia is another peer-to-peer system, used for calculating very large prime numbers. Though these platforms do not yet use particular agent systems, they illustrate the benefits of deploying programs on foreign machines to get tasks done.
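The envelope mechanism described in the overview above can be sketched with plain object serialization. This is only an illustration: real mobile agent platforms also capture the execution stack and move the envelope over the network, whereas this sketch merely snapshots the agent object within one process, using Python's standard `pickle` module:

```python
import pickle


class MobileAgent:
    def __init__(self):
        self.visited = []

    def work(self, site):
        self.visited.append(site)


# --- on the first site ---
agent = MobileAgent()
agent.work("site-A")
envelope = pickle.dumps(agent)      # halt the agent and package its state

# --- the envelope is a plain blob of bytes that can travel over the network ---

# --- on the second site ---
resumed = pickle.loads(envelope)    # open the envelope; execution continues
resumed.work("site-B")
print(resumed.visited)              # ['site-A', 'site-B']
```

The original agent on the first site is unaffected by what the resumed copy does, which is exactly the snapshot semantics of the envelope.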



V.2.2.3 Mobile Agent Solutions
The next subsections focus on three particular mobile agent systems: Lana, an academic agent platform; SMAP, a solution showing the suitability of mobile agents for P2P networks; and Tryllian ADK, an industrial-strength agent platform. A detailed list of existing mobile agent systems is maintained by the Mobile Agent List [MAL2004]. Furthermore, [gray00mobile] gives a good overview of other mobile agent systems, such as multiple-language systems (Ara, D'Agents or Tacoma), Java-based systems (Aglets, Concordia, Jumping Beans, Voyager) or other systems (like Obliq, Messengers, Telescript), and presents their similarities and differences. One of the early examples of mobile agents was Telescript [82]. It is a language-based environment for constructing multi-agent systems and implements the concepts of places, agents, travel, go, meeting, connection, authorities and permits. The network is a collection of places, and agents occupy places. Agents can travel from one place to another and have a meeting with another agent. A connection lets two agents on different computers communicate.
Lana
Mobile agent computing does have the drawback that a new security risk is opened up. This comes from the fact that a user is able to deploy code on another user's machine. This code could contain a Trojan horse or a virus. Currently, many agent platforms have serious problems in curtailing the effects of a malicious mobile agent. This security concern was one of the reasons for designing and building the Lana platform [RBP:02:ECOOP]. Lana offers several security properties. One of these is that a Lana agent that executes on a machine cannot gain access to resources on that machine, or to other agents' data, unless this has been explicitly permitted by the security policy of that machine. This security property has important implications.
For instance, a client agent that executes on a server machine cannot learn anything about the data in local files or inside other client agents unless the local administrator and the other clients agree to allow that agent to gain access. Further, even the server software running on the machine cannot gain any access to information inside the client agent unless that client agent explicitly agrees to release the information. This information remains inside the agent; it is only released if the agent is programmed to release it. Likewise, an agent holding a business object that is part of a distributed business process involving different corporate entities should only reveal the information that concerns its current place of execution. The essence of the mobile agent paradigm is that agents are sent to execute on another site. This opens the possibility for viruses, though Lana has measures to eliminate these risks. There are other sources of problems, though: if too many agents are moved to a machine at the same time, the machine will run very slowly. This is the source of a potential denial-of-service attack.
JXTA Simple Mobile Agent Protocol (SMAP)



SMAP [YCY:03:HICSS] discusses the application of mobile agents to e-Commerce on Peer-to-Peer (P2P) networks, as well as an implementation on Project JXTA, a P2P platform, which demonstrates the suitability of mobile agents for P2P networks. Rather than the traditional client/server networks where previous mobile agent protocols have been applied, the targeted networks are P2P networks. SMAP takes advantage of the simplicity of the JXTA protocols, using the mobility features these protocols provide, which are explicitly directed at P2P networks. Thus, the JXTA platform gives the agent the ability to adapt to the unpredictable behavior of the peers, to their content, and to this content's potential mobility, i.e., if either the peer or the content moves, both can still be found. Agents on the JXTA platform adapt their itinerary to the variable network behavior they might encounter. One interesting feature of the JXTA protocols is that application programmers using such an agent system need not be concerned with underlying real network communication issues such as Network Address Translation (NAT) and firewalls. SMAP is not meant to be a better mobile agent technology, but it shows that the combination of mobile agent technology and P2P networks is an ideal match that will benefit both of these technologies.
Tryllian ADK
The Tryllian Agent Development Kit (ADK) [TryllianADK] is a Java-based mobile agent development platform. It allows the easy building, deployment and management of secure large-scale distributed applications, operating regardless of environment, location and protocols. It is built over advanced standards like JXTA, JMS, FIPA, JNDI, SOAP, WSDL, UDDI and TLS/SSL and takes advantage of their specificities. Its P2P architecture provides dynamic discovery, addition and removal of nodes and services. Tryllian ADK allows developers to build applications in a workflow manner, avoiding the need to deal with the intricacies of multi-threaded and asynchronous programming.
It is clearly aimed at the development of complex, robust and highly scalable industrial-strength applications. As such, Tryllian ADK seems particularly well suited as a starting architecture for highly distributed systems, like the deployment of services on P2P networks.
V.2.3 Agent Middlewares
Since multi-agent systems are distributed in nature, there is a strong need for interoperability. There are two approaches:
1. using interfaces designed to meet specific interoperability needs,
2. using a standardized approach.
The first attempt at standardization was the development of the Agent Communication Language KQML [54], followed later by the work of FIPA. An example of middleware is JADE (Java Agent Development Framework), which is a FIPA2000-compliant framework for the development of multi-agent systems (http://jade.tilab.com/). JADE is the leading open source agent development framework.
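The FIPA standardization effort centres on a common message format, FIPA ACL, whose fields (a performative plus parameters such as :sender, :receiver, :content, :language and :ontology) resemble KQML's. As a purely textual illustration, such a message can be assembled as follows; the parameter values are invented:

```python
def acl_message(performative, sender, receiver, content,
                language="fipa-sl", ontology="auction"):
    # Build a FIPA-ACL-style message as an s-expression string.
    return ("({p} :sender {s} :receiver {r} "
            ":content \"{c}\" :language {l} :ontology {o})").format(
        p=performative, s=sender, r=receiver,
        c=content, l=language, o=ontology)


msg = acl_message("inform", "seller-1", "buyer-7", "price(ticket, 150)")
print(msg)
# (inform :sender seller-1 :receiver buyer-7 :content "price(ticket, 150)"
#  :language fipa-sl :ontology auction)
```

Middlewares such as JADE construct, transport and parse messages of this shape on the agent's behalf.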



The RETSINA (Re-usable Task Structure-based Intelligent Network Agents) architecture [33] is a reusable, distributed multi-agent infrastructure to coordinate intelligent agents in gathering, filtering and integrating information from the Internet and for decision support. It uses the notion of a middle-agent, which performs functions that can be analogous to those of a broker.
V.3 Multi Agent Systems

Multi-agent systems (MAS) extend single-agent architectures with an infrastructure for interaction and communication. Ideally, MAS exhibit the following characteristics: they are typically open and have no centralized designer; they contain autonomous, heterogeneous and distributed agents; and they provide an infrastructure to specify communication and interaction protocols. Agents in MAS can be viewed as an implementation of modular design: the complex structure and functionality of the MAS as a whole are realized by an arrangement of components (agents), each of which has a certain specific functionality and autonomy. As such, MAS are very interesting as an integration paradigm. There are two main differences with traditional paradigms, such as object-orientation: (a) MAS radicalize the notion of component autonomy, and (b) they provide a coordination and communication infrastructure that has traditionally been present in a rudimentary form only (cf. CORBA), so that these aspects had to be part of the component code. Several architectures and models for MAS have been proposed that handle coordination in different ways. One of the first mechanisms in Distributed AI (which can be regarded as the forerunner of MAS) was the Contract Net Protocol (CNP), which has a market-like coordination structure (for more details, see below). Another early architecture is based on mediators. The concept of a mediator, first introduced by Gio Wiederhold, was a way to deal with the integration of knowledge from heterogeneous sources. An example of a MAS architecture based on the concept of mediators, and typically not employing any centralized control, is RETSINA [33].
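The market-like coordination of the Contract Net Protocol can be sketched in a few lines: a manager announces a task, contractors bid on it, and the contract is awarded to the best bid. All names and the cost model below are invented for this illustration; the real protocol also covers refusals, result reporting and cancellation:

```python
class Contractor:
    def __init__(self, name, cost):
        self.name, self.cost = name, cost

    def bid(self, task):
        # Each contractor prices the announced task independently.
        return (self.cost, self.name)


def contract_net(task, contractors):
    # 1. The manager announces the task and collects bids.
    bids = [c.bid(task) for c in contractors]
    # 2. The contract is awarded to the lowest-cost bidder.
    cost, winner = min(bids)
    return winner


contractors = [Contractor("agent-a", 8), Contractor("agent-b", 3)]
print(contract_net("deliver-part", contractors))   # agent-b
```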
V.3.1 Agent Societies

The term "society" is used in a similar way in agent society research as in human or ecological societies. The role of the society is to allow its members to coexist in a shared environment and pursue their respective roles in the presence of, and perhaps in cooperation with, others. The main aspects in the definition of a society are purpose, structure, rules and norms. Structure is determined by roles, interaction rules and communication language. Rules and norms describe the desirable behavior of members, and are established and enforced by institutions that often have a legal standing and thus lend legitimacy and security to society members. When multi-agent systems are considered from an organizational point of view, the concept of desirable social behavior becomes of utmost importance. That is, from the organizational point of view, the behavior of individual agents in a society should be understood and described in relation to the social structure and overall objectives of the society. Not any kind of social, or asocial, behavior is acceptable. On the other hand, as agents are autonomous by definition, the individual behavior of the agents cannot be controlled directly. The solution to this dilemma is to be found in


the society, which should provide social mechanisms by means of which the behavior of the agents can be streamlined in a desirable direction without compromising at any time the autonomy of the agent.

Davidson [11] has proposed a classification for artificial societies based on the following characteristics: openness, describing the possibilities for any agent to join the society; flexibility, indicating the degree to which agent behavior is restricted by society rules and norms; stability, defining the predictability of the consequences of actions; and trustfulness, specifying the extent to which agent owners may trust the society.

Open societies impose no restrictions on agents joining the society. They assume that participating agents are designed and developed outside the scope and design of the society itself; therefore the society cannot rely on the embedding of organizational and normative elements in the intentions, desires and beliefs of participating agents, but must represent these elements explicitly. In closed societies, at the other extreme, it is not possible for external agents to join the society. Agents in closed societies are explicitly designed to cooperate towards a common goal and are often implemented together with the society [16]. Closed societies provide strong support for the stability and trustfulness properties, but allow for very little flexibility and openness. The large majority of existing MAS are closed. Besides open and closed societies, Davidson distinguishes semi-open and semi-closed societies.
V.3.2 Coordination in MAS
Multi-agent systems that are developed to model and support organizations need coordination frameworks that mimic the coordination structure of the particular organization. The organizational structure determines important autonomous activities that must be explicitly organized into autonomous entities and relationships in the conceptual model of the agent society [13]. Furthermore, the multi-agent system must be able to adapt to changes in organization structure, aims and interactions.

Coordination can be defined as the process of managing dependencies between activities [55]. Coordination is an important problem inherent to the design and implementation of MAS [2]. Examples of coordination theories are joint intentions [44], shared plans [5] and domain-independent teamwork models [59]. Behavioral approaches are gaining ground in agent research. Concepts such as organizational rules [73], norms and institutions [39] and social structures [22] all start from the idea that the effective engineering of MAS needs high-level, agent-independent concepts and abstractions that explicitly define the organization in which agents live [16].

An important contribution in agent coordination is Jennings' joint responsibility framework [83], based on human teamwork models. This model is based on the idea that being part of a team implies some sort of responsibility towards the other members of the team: joint responsibility. Jennings built on the work of Cohen et al. by distinguishing between the commitment that underpins an intention and the associated convention, where a commitment is a pledge or a promise to do something and conventions are means of monitoring commitments, e.g. specifying when a commitment can be abandoned.


INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

From a programming point of view, coordination models can be divided into two classes: control-driven and data-driven [19]. Control-driven models are systems made up of a well-defined number of entities and functions, in which the flow of control and the dependencies between entities need to be regulated. The data-driven model is more suited for open societies where the number of entities and functions is not known a priori and cooperation is an important issue.

In DAI, coordination approaches were often based on contracting. The most famous example of these is the Contract Net Protocol (CNP) [53] for decentralized task allocation. In short, CNP acts as follows: all agents must register with the matchmaker; when an agent needs to locate other agents, it sends a request message to the matchmaker describing the requested service; other agents can then make bids; once bids have been received, the requester selects one (according to some criteria) and allocates the task to that bidder; the bidder can then accept the task. The CNP protocol assumes that all agents are eager to contribute, and the most appropriate bid is the bid of the agent with the best capability and availability. A more sophisticated version is the TRACONET model [57]. In this model, agents are supposed to be self-interested. This means that contractors have to pay a price for the service performed. Contractors try to minimize the costs by selecting the bidder with the lowest price (all other things being equal). Potential subcontractors try to maximize their benefit, and may sometimes discard offers or respond with a counter-offer.

Contractual Agent Societies (CAS) apply the concept of contracting to the coordination of MAS, and are inspired by work in the areas of organizational theory, economics and interaction sociology, which model organizations and social systems after contracts [12].
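The CNP bidding round described earlier can be sketched in a few lines of Python. This is an illustrative model, not any particular CNP implementation; the names are invented, and the scoring of bids by capability times availability reflects the assumption stated in the text.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    capability: float    # how well the bidder can perform the task (0..1)
    availability: float  # how free the bidder currently is (0..1)

class Worker:
    def __init__(self, name, skills):
        self.name, self.skills = name, skills   # skills: task -> (capability, availability)
    def bid(self, task):
        if task not in self.skills:
            return None                          # decline to bid
        cap, avail = self.skills[task]
        return Bid(self.name, cap, avail)

def announce_and_award(task, agents):
    """One Contract Net round: broadcast the task announcement, collect
    bids, and award the task to the best capability*availability bid."""
    bids = [a.bid(task) for a in agents]
    bids = [b for b in bids if b is not None]    # agents may decline
    best = max(bids, key=lambda b: b.capability * b.availability)
    return best.bidder

workers = [Worker("a1", {"weld": (0.9, 0.5)}),
           Worker("a2", {"weld": (0.7, 0.9), "paint": (0.8, 0.8)})]
print(announce_and_award("weld", workers))  # a2 (0.63 beats 0.45)
```

A TRACONET-style variant would replace the scoring rule by price comparison and allow bidders to decline or counter-offer.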
A market place is a set of mutually trusted agents; when an untrusted agent wants to join the market place, it applies to a socialization service that not only plugs in the agent technically, but also makes it agree on a social contract. Social contracts govern the interaction of a member with the society. A social contract is a commitment of an agent to participate in a society and includes beliefs, values, objectives, protocols and policies that agents agree to obey in the context of the social relationship. A mechanism of social control may be negotiated as part of the social contract, defining deviations from agreed "normal" behavior and corresponding sanctions (e.g. banning). The notion of contract is not necessarily related to market places; it was also used in a collaborative information systems context in the thesis of Verharen [15].

Economics and organizational theory consider that relationships between and within organizations are developed for the exchange of goods, resources, information and so on. Williamson argues that transaction costs determine the choice of organizational model [67]. Transaction costs are not just the costs of delivering a message. They rise when the unpredictability and uncertainty of events increase, when transactions require very specific investments, or when the risk of opportunistic behavior of partners is high. Roughly speaking, when transaction costs are high, economic agents tend to choose a hierarchical model in order to control


the transaction process. If transaction costs are low, then the market is usually a better choice [56]. Although this economic theory cannot be applied directly to agent societies, it strongly suggests that agent societies should support both the hierarchical and the market model, and not just one of them. Powell introduced networks as another possible coordination model [45]. Networks stress the interdependencies between different organizational actors and pay a lot of attention to the development and maintenance of communicative relationships, including the definition of rules and norms of conduct within the network.

Coordination in markets is typically achieved through a price mechanism in which independent actors search for the best bargain. Hierarchies are typically coordinated by supervision, that is, actors involved in authorized relationships acting according to certain routines. Networks achieve coordination by mutual interest and interdependency [61]. A good overview of agent coordination is available from [84].

V.3.3 Negotiation

Another important area of work in multi-agent systems is agent negotiation. Agents within a multi-agent system resolve conflicts through negotiation. Several negotiation techniques and models have been studied and applied in agent systems. An overview is available in [85].
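As a minimal illustration of one negotiation style surveyed in this literature (a toy alternating-offers model with monotonic concession, not drawn from [85]; the opening margins and step size are arbitrary assumptions): a buyer and a seller make offers in turn, each conceding a fixed step per round, until the offers cross or both hit their reservation prices.

```python
def negotiate(buyer_max: float, seller_min: float, step: float = 5.0):
    """Alternating offers with monotonic concession: the seller starts
    high and concedes downward, the buyer starts low and concedes upward.
    Returns the agreed price, or None if there is no zone of agreement."""
    seller_offer = seller_min + 4 * step   # opening offers (arbitrary margins)
    buyer_offer = buyer_max - 4 * step
    while seller_offer > buyer_offer:
        seller_offer = max(seller_min, seller_offer - step)  # seller concedes
        buyer_offer = min(buyer_max, buyer_offer + step)     # buyer concedes
        if seller_offer <= buyer_offer:                      # offers crossed: deal
            return (seller_offer + buyer_offer) / 2
        if seller_offer == seller_min and buyer_offer == buyer_max:
            return None                                      # both exhausted: conflict
    return (seller_offer + buyer_offer) / 2                  # opening offers already crossed

print(negotiate(buyer_max=100.0, seller_min=80.0))  # 90.0
print(negotiate(buyer_max=80.0, seller_min=100.0))  # None
```

Real negotiation models add multi-attribute utilities, deadlines and strategic concession tactics; this sketch only shows the basic offer/counter-offer loop.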
V.3.4 Communication
The main challenge of coordination and collaboration among heterogeneous and autonomous intelligent systems in an open, information-rich environment is that of mutual understanding. A mechanism for communication must include both a knowledge representation language (based on ontologies) and a communication protocol.

V.3.4.1 Communication protocols

An Agent Communication Language (ACL) provides language primitives that implement the agent communication model. ACLs are commonly thought of as wrapper languages in that they implement a knowledge-level communication protocol that is unaware of the choice of the content language and ontology specification mechanism. Most work done in the area of agent communication languages is based on Speech Act Theory [48] and the Language Action Perspective [58].

Speech Act Theory [48] sees human natural language as actions, such as requests, suggestions, commitments and acceptances. The theory provides the means to analyze communication in detail at three levels: content (locution), intention (illocution) and effect (perlocution). The Language Action Perspective (LAP) is an application of Speech Act Theory to the area of information systems [58]. The basic assumptions underlying LAP are [15]: the primary dimension of human cooperative activity is language, and action is performed through language in a world constituted by language; the meaning of sentences for the actors in a social setting is revealed by the kinds of acts


performed; cooperative work is coordinated through language acts; the speech act is the basic unit of communication; speech acts obey socially determined rules; and information systems are means of communication and coordination among people.

There are two main agent communication languages (ACLs) based on Speech Act Theory. The first is KQML [54]. KQML consists of a set of communication primitives (performatives) which aim to support cooperation among agents. The second language is FIPA-ACL [1]. FIPA-ACL is associated with FIPA's open agent architecture. As with KQML, FIPA-ACL is independent from the content language and is designed to work with any content language and any ontology specification approach.

In [25] a formal language called [EQUATION] was described with which an integrated semantics for information and communication systems can be expressed. It is an extension of dynamic deontic logic, and the semantics of speech acts is described using preconditions and postconditions. For example, the postcondition of an authorized request is that the hearer is obliged to perform the requested action. Pre- and postconditions have also been used in agent communication languages such as KQML and FIPA-ACL. For example, the precondition of KQML's tell message states that the sender believes what he tells and that he knows that the receiver wants to know that the sender believes it. The postcondition of sending the tell message is that the receiver can conclude that the sender believes the content of the message. In a similar vein, FIPA-ACL uses feasibility preconditions and rational effects.

There have been many discussions about this approach. One problem in many of these proposals is that the semantics refer to mental states such as beliefs, and it is not very clear what it means that an agent, being a kind of software, holds a certain belief.
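Concretely, a KQML message such as the tell performative discussed above is conventionally written as a Lisp-like expression of a performative with keyword parameters. The following is a simplified sketch: the parameter keywords (:sender, :receiver, :ontology, :content) follow common KQML usage, but the agent names, ontology and content are invented, and real KQML defines many more reserved parameters.

```python
def kqml(performative: str, **params) -> str:
    """Render a KQML-style message as a Lisp-like expression.
    This rendering is a simplification of the real wire format."""
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

# A 'tell' performative: the sender asserts that it believes the content.
msg = kqml("tell",
           sender="agent1",
           receiver="agent2",
           ontology="logistics",
           content='"(status order42 shipped)"')
print(msg)
# (tell :sender agent1 :receiver agent2 :ontology logistics :content "(status order42 shipped)")
```

Note how the performative (the illocution) is kept separate from the content expression and the ontology reference, which is exactly the wrapper-language property described above.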
Another problem is that FIPA-ACL does specify the effects on the mental state of the sender, but offers no clue on how to infer the mental states of the receiver. For the semantics of communication, this is rather disappointing. With regard to the first problem, some have argued that the semantics should not be based on mental states, but on social commitments. Others have tried to ground the semantics in the notion of sign conventions. The latter approach takes its starting point in Searle's dictum that if the performance by agent j of a given linguistic act counts as an assertion of the truth of A, then j's performance counts as an undertaking to the effect that A is true. In other words, given the right functioning of the sign conventions within a community, A "ought to be" true when j asserts A. This leads to the introduction of a modality "ought to be (according to the conventions)" as a first approximation of the meaning of a speech act, from which beliefs, obligations etc. are derived in a second step, according to additional norms. Another recent development is the attempt to ground the communication language in argumentation theory [35,31,23].

V.3.4.2 Content interchange languages and ontologies

Content languages, in ACL terminology, are languages used by agents to exchange their information content while conversing. An example of a content language is the Knowledge Interchange Format (KIF). KIF defines a common language for expressing the content of a knowledge base to exchange. KIF proposed to use first-order predicate logic so that it can be used


as an "interlingua". The syntax of KIF is a prefix version of FOL, and KIF provides support for non-monotonic reasoning and definitions. KIF can be used to encode knowledge about knowledge. With the advent of the Web, other languages appeared, typically based on HTML or later XML, that can be seen as content languages, such as RDF.

A possible basis for a content language is the concept of ontology (the reader may find in part 5 an overview of the role of ontology in agent-based systems). An ontology is a description of concepts and relationships that can exist for a community of agents [21]. Ontologies aim at capturing domain knowledge in a generic way and provide a commonly agreed understanding of a domain, which may be reused and shared across applications and groups. An ontology provides a common vocabulary for an area and defines, with different levels of formality, the meaning of the terms and the relations between them [3].

In the Semantic Web community, the DAML Services effort attempts to provide a more expressive way of describing Web services using ontology. However, this approach does not separate the domain-neutral communicative intent of a message (considered in terms of speech acts) from its domain-specific content, unlike similar developments from the multi-agent systems community [42].

OWL is a recent Web Ontology Language [4]. Where earlier languages have been used to develop tools and ontologies for specific user communities (particularly in the sciences and in company-specific e-commerce applications), they were not defined to be compatible with the architecture of the World Wide Web in general, and the Semantic Web in particular. OWL uses both URIs for naming and the description framework for the Web provided by RDF to add the following capabilities to ontologies: the ability to be distributed across many systems; scalability to Web needs; compatibility with Web standards for accessibility and internationalization; and openness and extensibility.
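As a small illustration of what such Web-based ontology content looks like, the following OWL fragment (Turtle syntax, held here as a string) declares two disjoint classes, a symmetric property and a cardinality restriction. The owl: and rdfs: terms are standard W3C vocabulary; the ex: namespace and the class and property names are invented for this sketch.

```python
# Minimal OWL fragment in Turtle syntax; the ex: names are made up.
ONTOLOGY_TTL = """
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/enterprise#> .

ex:Supplier a owl:Class ;
    owl:disjointWith ex:Customer .        # no individual is both

ex:partnerOf a owl:SymmetricProperty .    # partnerOf(x,y) implies partnerOf(y,x)

ex:Order a owl:Class ;
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty ex:placedBy ;
        owl:cardinality 1                 # an order is placed by exactly one party
    ] .
"""
```

Because the names are URIs, any agent receiving a message can dereference the ontology and inspect exactly these axioms, which is the distribution capability listed above.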
OWL builds on RDF and RDF Schema and adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.

Communication and social interaction are always embedded in a social context. In AI, McCarthy was the first to argue that formalizing context was a necessary step toward the design of more general computer programs [38]. The fact that a representation depends upon certain local assumptions and parameter settings is called context dependence. The problem is that we cannot be sure that locally produced knowledge is understood in the same (right) way by different agents in other contexts. To integrate knowledge from different sources, a process of meaning negotiation is needed [37]. [24] argues that context can be viewed at three levels: the location level - the physical or virtual location in which the message is represented; the informational level - the total of background knowledge relevant to the message that the communicating agents share;


the social level - the social organizations and conventions in which the message is embedded.

For all three levels it holds that the context can be taken in a narrower or wider sense; it typically takes the structure of widening concentric circles (cf. [9]).

V.4 Multi-agent Architectures

There are numerous multi-agent architectures. The architectures reviewed in this section are those considered relevant for the areas of work in networks of enterprises, enterprise integration and interoperability.

V.4.1 Market Architectures

The AVE (Agents in Virtual Enterprises) project, described in [90], provides a description of how agents can be used in the formation of a VE. One of the main components of the proposed system is an electronic VE market, where different enterprises can announce and obtain various information. The partners of a VE are enterprises. Fischer et al. describe the selection of a partner as a process of matching VE goals (or sub-goals) to the partial processes within the different enterprises that represent the VE. This approach was further developed as a multi-agent architecture in [91] and [92], with a focus on the formation of the VE, where agents that represent the partners of a VE negotiate to become part of the VE. The agents conduct a multi-attribute negotiation and have the ability to learn from past experiences. This approach distinguishes between a "market agent" and an "enterprise agent". The market agent plays the role of a coordinator in the electronic market, where its main goal is the formation of the VE. An enterprise agent represents an enterprise that is interested in becoming a member of a VE.

A language called LARKS, for agent advertisements and requests, was defined in [93]. The authors also present a flexible and efficient matchmaking process, where both syntactic and semantic matching can be conducted. The matching process uses five different filters to narrow down the set of candidates.
LARKS and the matching process are currently being incorporated into the RETSINA (Reusable Task Structure-based Intelligent Network Agents) architecture, which is a reusable, distributed, multi-agent infrastructure to coordinate intelligent agents in gathering, filtering, and integrating information for the Internet and for decision support. It uses the notion of a middle agent, which performs functions analogous to those of a broker.

V.4.2 Broker Architectures

The Oxford English Dictionary defines a broker as an "agent buying and selling for others, middleman". The concept of a broker has been used in the design of agent-based architectures, in particular to support the formation of VEs. In this context, brokers have been referred to as "cybermediaries", where a broker performs mediation tasks in electronic commerce [95]. Avila et al. propose a taxonomy for the functions that can be performed by a broker: explicit functions, which the broker makes available to the clients, such as selection of resources and integration of resources; and implicit functions, which the broker uses to perform the explicit functions, such as the selection of algorithms for resource selection and interaction with other brokers.
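The explicit "selection of resources" function of such a broker can be sketched as a simple capability registry. This is illustrative Python with invented provider names; real matchmakers such as LARKS apply chains of syntactic and semantic filters rather than exact keyword lookup.

```python
class Broker:
    """Toy broker: providers advertise capabilities; clients ask the
    broker to select a resource instead of contacting providers directly."""
    def __init__(self):
        self.registry = {}            # capability -> list of provider names

    def advertise(self, provider: str, capabilities: list):
        for cap in capabilities:
            self.registry.setdefault(cap, []).append(provider)

    def select(self, capability: str):
        """Explicit broker function: return a matching provider, or None.
        The 'first registered wins' rule stands in for a real selection
        algorithm, which would be an implicit broker function."""
        providers = self.registry.get(capability, [])
        return providers[0] if providers else None

broker = Broker()
broker.advertise("MouldCo", ["mould-design", "milling"])
broker.advertise("SteelCo", ["milling"])
print(broker.select("mould-design"))  # MouldCo
print(broker.select("painting"))      # None
```

The separation between the public `select` interface and the internal selection rule mirrors the explicit/implicit function taxonomy described above.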


An agent-based brokerage architecture was proposed in [96] for the moulds industry. Here, the client's order is received directly or via a broker. The client order consists of a description of the deliverable, such as the size of the mould, the due date and the mould material. The main types of agents in this approach are brokers; facilitators, which represent a set of enterprises that possess a given competence; enterprise agents, each representing a single enterprise; and consortium agents, temporary agents created to manage the process of creating alternative VEs for any client order. A VE is selected after creating a set of alternative VEs based on the competencies and the scheduling requirements.

Federated approaches have been suggested as a means of supporting broker architectures [97], [94]. This is an approach to agent interoperation where the agents are organized into what is called a federated system. It uses the notion of a facilitator, a special class of agents that facilitate communication among the agents. In this approach, instead of the agents communicating directly with each other, they communicate via the facilitator [98].

V.4.3 Information Agent Architectures

The notion of an information agent was introduced as a component in an infrastructure for supporting collaborative work in [99]. The information agent consists of a problem solver, based on description logic, and an agent program, which turns the problem solver into an autonomous agent. It uses the KQML agent communication language to communicate with the outside world. The information agent makes other agents (possibly from other organizations) selectively aware of relevant information by providing communication and information services. This idea was applied in the domain of supply chain management, where each function, such as order acquisition and scheduling, was represented by an agent.
What is perhaps most important about this architecture is the fact that it is based on TOVE (Toronto Virtual Enterprise), thus using ideas from the Enterprise Integration community [100].
V.5 Agent theories - Semantics of agent systems
The aim of agent theories is to give semantics to intentional systems. Intentional systems are systems to which mental states are attributed, such as beliefs, desires, intentions, abilities and wants. A good overview of intentional systems and their semantics is given in [70].

The most widespread semantics given to an agent architecture is the BDI semantics [46], which is an extension of the possible worlds semantics of Hintikka [26]. Intuitively, the possible worlds semantics considers that an agent's beliefs can be seen as a set of possible worlds. An agent's belief is then something that is true in all the agent's possible worlds. The BDI architecture extends this notion by adding goal worlds and intention worlds, and by also adding a notion of time. Possible worlds are then modeled as time trees, on which CTL (Computational Tree Logic) [14] expressions are evaluated.

Nevertheless, there are other architectures giving semantics to agent systems that are not necessarily based on possible worlds semantics. Some of these architectures, overviewed in [70], are: Moore on knowledge and action [40,41,36]; Cohen and Levesque on intention [44];


Singh [51,50,49,52]; Werner [63,64,65,66]; and Wooldridge [69].

V.6 The role of ontologies in agent-based systems

According to Nowostawski, Bush, Purvis and Cranefield [74], an ontology is a separate, publicly available information model that has been constructed for the given problem domain and serves as a common dictionary for the agents. Sycara and Paolucci [75], in turn, claim that ontologies not only provide agents with the basic representation for reasoning about interaction but also provide agents with shared knowledge that they can use to communicate and work together (p. 343). However, further research is required to develop ways in which the meaning of what one agent says is precisely conveyed to another agent. In practice such problems are often ignored by assuming that all agents are using the same terms to mean the same things, and usually such assumptions have been built into the application. Uschold [77] recognizes that this approach only works when one has full control over what agents exist, and what they might communicate. In reality, agents need to interact in a much wider world, where it cannot be assumed that other agents will use the same terms, or if they do, it cannot be assumed that the terms will mean the same thing. In addition to these problems we must also consider that agents can communicate with other agents as well as with their environment.

A more realistic approach is to encode the terms and their semantics in ontologies [77]. So when agent 1 sends a message to agent 2, the message is accompanied by an indicator of, or a pointer to, the ontology agent 1 is using (e.g. [79]). This implies that agent 2 should be able to access agent 1's ontology to see what the terms mean, so that the message is successfully communicated and the service is performed to specification [77]. However, it is very difficult to achieve such success even when a common ontology has been defined.
We can classify different communication problems with respect to the ontology language used and the ontology model created by different ontology engineers¹. For example, different ontology languages are often based on different underlying paradigms (for example, description logic, first-order logic, frame-based, taxonomy, semantic net, thesaurus). There is also variation in the expressiveness of ontology languages: some ontology languages are very expressive, some are not. In addition, not all ontology languages have formally defined semantics, and not all ontology languages provide inference support.

Uschold points out that even if the exact same language is used, two different people may build two different ontologies in the same domain. When this occurs, ontologies can be incompatible with respect to the terms used to describe concepts; that is, people can use different terms to describe the same concept, or use the same term for representing different concepts. Moreover, a given notion or concept may be modeled at different levels of granularity [77]. Recently there has been considerable effort toward increasing the degree of standardization, both in ontology languages and in the content of the actual ontologies. Nowostawski et al. [74] also comment that the content of an agent message can be understood by the participants in an agent
¹ These problems are similar to those described in the Ontology Mapping section of work package 8.



conversation if they also share a common ontology. In addition, where standardization is not possible, technologies have been developed for mapping and translating between and among ontologies. However, the challenge remains to find ways to make simplifying assumptions which enable agents to do useful things in practical situations. One approach is to define a negotiation ontology [76], where agents commit to a shared negotiation ontology only when they want to interact. Wooldridge [80] claims that if two agents are to communicate about some domain then they must understand and agree on the terminology they use to describe this domain.

In addition, research has been undertaken using supervised machine learning techniques for ontology mapping in a MAS. Williams ([79], [78]) describes the DOGGIE MAS for ontology mapping. In particular, this work describes how agents learn and translate similar semantic concepts between diverse ontologies using machine learning techniques, and presents both the methodology and algorithms for multi-agent knowledge sharing and learning in a peer-to-peer setting. The agents in the DOGGIE system teach each other what their concepts mean using their own conceptualization of the world. While we recognise the usefulness of this research, we need to consider the types of machine learning algorithms used, and what cues or inferences the agents really are able to make. It appears that teaching an agent to learn what a concept is through the instances of that concept is where the state of the art lies in current MAS.
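The idea of learning what a concept means through its instances can be sketched very crudely as instance-set overlap. This is only an illustration with invented data: it uses Jaccard similarity over shared instance identifiers, whereas DOGGIE itself applies supervised learners over concept instances.

```python
def concept_similarity(instances_a: set, instances_b: set) -> float:
    """Jaccard overlap of the instance sets of two concepts: a crude
    stand-in for learned concept similarity."""
    if not instances_a and not instances_b:
        return 0.0
    return len(instances_a & instances_b) / len(instances_a | instances_b)

def map_concept(concept_instances: set, other_ontology: dict, threshold: float = 0.5):
    """Map a concept onto another agent's ontology by best instance overlap;
    return None when no candidate reaches the threshold."""
    best, score = None, 0.0
    for name, instances in other_ontology.items():
        s = concept_similarity(concept_instances, instances)
        if s > score:
            best, score = name, s
    return best if score >= threshold else None

ours = {"lorry1", "lorry2", "van3"}                 # instances of our concept
theirs = {"Truck": {"lorry1", "lorry2", "pickup9"}, # the other agent's concepts
          "Car": {"sedan1", "van3"}}
print(map_concept(ours, theirs))  # Truck (overlap 2/4 = 0.5 vs 1/4 for Car)
```

In a real peer-to-peer setting the agents would of course not share instance identifiers directly; they would exchange labeled examples and train classifiers on them, which is the harder problem the DOGGIE work addresses.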


VI Management of business processes and workflows


VI.1 Introduction

This section discusses requirements, solutions and underlying architectures that facilitate process-aware collaboration between enterprises. Process-aware approaches include business process management, workflow management, and service orchestration and choreography. Current trends in research, industrial consortia work, standardization and vendor effort bring together concepts of workflow and business processes, and often try to combine these with service-oriented technologies. This development is very natural under the requirement of moving into an inter-enterprise setting.

The difference between workflow and business process management has been under debate. As the work is merging, differences in emphasis and details are vanishing. Traditionally, workflows have emphasized organizational structure, roles and responsibilities, while business process models also start from the organizational view but focus on accountability and work activities. A convergence of three main technologies can be seen, namely the convergence of workflow, EAI and web services. Following this trend, this section of the state-of-the-art analysis treats workflows and business processes as a unified concept whenever appropriate. Since orchestration and choreography of web services follow the same motivation and solution patterns, these are addressed at places as well.

In the inter-enterprise collaboration context, business processes are divided into two categories: external (public) processes, i.e., processes performed in collaboration with customers, suppliers and other partners; and internal (private) processes, i.e., processes performed in the enterprise's own ICT system, possibly using workflows to execute the tasks.

VI.1.1 BPMS Paradigm

In today's business operations, "speed" is becoming more important than "size".
For successfully guiding a corporation in fast-changing environments, the monitoring and control of running business processes and their interrelationship with the corporation's strategy and objectives is one of the crucial success factors. In this context, Business Process Management Systems (BPMS) have become a core concept in designing, developing, deploying, and maintaining business applications. Additionally, workflow technology has proven to be an indispensable aid to speed up process-oriented application development.

According to the BPMS paradigm, which was first introduced in 1995 [Karagiannis, 1995], continuous Business Process Management consists of five core processes [Karagiannis et al., 1996] (cf. Figure 43): the Strategic Decision Process, the Re-Engineering Process, the Resource Allocation Process, the Workflow Process, and the Performance Evaluation Process.


A corporation defines itself by its products and services. Business processes describe the way they are produced, delivered, maintained etc. For the execution of its business processes a company requires two main resources: employees (in the following designated more generally as organizational structure) and information technology (IT). This leads us to four core elements of corporations: products, business processes, organizational structure, and IT.
[Figure 43 shows the BPMS paradigm as four levels, each addressing the four core elements (products, processes, organizational structure, IT): the Strategic Level with the Strategic Decision Process (commitment to strategic goals and general conditions; supported by balanced scorecard tools, management information systems, executive information systems, etc.); the Business Level with the Re-Engineering Process (model-based design of the core elements) and the Performance Evaluation Process (analysis and evaluation of the core elements; supported by business process management tools, metamodeling tools, simulation tools, analysis tools, etc.); the Implementation Level with the Resource Allocation Process (implementation of the core elements; supported by metamodeling tools, customizing tools, CASE tools, workflow tools, code generators, integrated development environments, etc.); and the Execution Level with the Workflow Process (execution of business processes over operational data; supported by existing applications, standard software, workflow technology, groupware technology, object technology, etc.).]
Figure 43 The Business Process Management Systems Paradigm

The Strategic Decision Process takes place after a strategic decision has been made for the (re)engineering of an enterprise's organisational environment. Based on global objectives, constraints for the processes to be selected are stated and success factors are recommended. The business processes are selected and the reengineering objectives are defined. Furthermore, the activities of initial information gathering and analysis concerning the selected business processes take place. The primary objective of the Re-Engineering Process is to design the new business process. Modelling constitutes a significant part of this BPMS subprocess, since the business process model to be generated has to be unambiguously defined, to further facilitate the execution of the following BPMS subprocesses. The designed business processes will have to conform to the evaluation criteria set in the Strategic Decision Process. Design takes place in an iterative way in order to obtain the best feasible results for the business process, keeping in mind all constraints relative to the business process, which might be imposed and affected by invariable factors. The Re-Engineering Process can be supported by a number of techniques, for instance modelling, simulation, animation, characteristic index calculation etc. It can be further enriched by the actual use of new information technology as an enabler and facilitator. In any case, the Re-Engineering Process has to cover human resource management issues which might appear. The primary objective of the Resource Allocation Process is to enable identification and coordination of resources, and realisation of the business processes (designed during the Re-Engineering Process). These resources are mainly related to Information Technologies, for


instance, existing legacy applications might be modified, or new applications might be implemented. All resource requirements should be readily derived from the results of the Re-Engineering Process. The primary objective of the Workflow Process is the execution of the reengineered business process, using resources made available during the Resource Allocation Process [Junginger et al., 2000]. After test runs and additional corrective actions the process is executed in real time and location. This execution generates information necessary for the Performance Evaluation Process which follows. The primary objective of the Performance Evaluation Process is the qualitative and quantitative evaluation of all information obtained by the realisation and execution of the business process. These results constitute invaluable feedback for both the Strategic Decision Process and the Re-Engineering Process.

VI.1.2 Purpose of Enterprise interoperability architectures

The globalisation of business and commerce makes enterprises increasingly dependent on their cooperation partners. At present, competition takes place between networks of enterprises rather than between individual enterprises. In this competition, the capabilities for interoperability of the enterprises' ICT systems become critical. Interoperability, or the capability to collaborate, means effective capability of mutual communication of information, proposals and commitments, requests and results. Interoperability covers technical, semantic and pragmatic interoperability. Technical interoperability means that messages can be transported from one application to another. Semantic interoperability means that the message content becomes understood in the same way by the senders and the receivers. This may require transformations of information representation or messaging sequences. Finally, pragmatic interoperability captures the willingness of partners to perform the actions necessary for the collaboration.
The willingness to participate involves both the capability of performing a requested action, and policies dictating whether the potential action is preferable for the enterprise to be involved in. Various approaches have been developed to support inter-enterprise computing needs. Traditional solutions, like EDI, rely on standardized shared models of communication and computing, and on software developed in accordance with those standards. The drawback of such systems is the expense of maintaining and evolving systems and services. Various approaches have been introduced to support integration of already existing enterprise IT systems. Integration aspects include processing platform integration, data integration and portal solutions. We can distinguish between the integration of full enterprise systems, covering workflows between enterprises (integrated ERPs, distributed workflow systems), and application integration (A2A). Yet another approach emphasises the dynamic nature of collaboration, namely systems that join into virtual enterprises (B2V). The inter-enterprise collaboration is formed in an environment that is able to support discovery of new partners, as well as the verification of interoperability between them. Essentially, enterprise interoperability architectures address, or can address, an increasing set of the following services: Facilitating communication. The basic aim of communication is to share information. Any practical communication will be based on some concrete representation of information, and


some concrete mechanism of signalling the information. In practice, networking system designers provide models of communication channels that determine the assumed contract of form for exchanging information. This level deals with technical and semantic interoperability. Facilitating abstract processing. The basic need in collaboration is to distribute (partition) the load of information processing. In practice, the proposals and commitments on processing need to be expressed in a computing-platform-independent way. This brings us to service oriented architectures (SOA). This level deals with technical and semantic interoperability. Facilitating process-aware collaboration. The collaboration itself needs to be manageable, which means that the systems should have facilities for expressing collaborative processes and assigning processing and communication steps to them. This level deals with semantic and pragmatic interoperability. Various exceptional situations in collaborations may give rise to so-called emerging behaviour in the collaborating system; these aspects can only be managed by process-aware approaches and considerations of pragmatics. Facilitating evolution at the collaboration partners independently. In practice, isolation from technologies by service oriented approaches and late binding over abstract communication channels provide a fairly good basis. In addition, repositories and ontologies support matching and retrieving current information.
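The transformations of information representation mentioned under semantic interoperability can be sketched in a few lines. This is an illustrative example only; the field names, the exchange rate, and the mapping rules are all invented for the purpose, and a real system would derive such mappings from a shared model or ontology.

```python
# Hypothetical sketch: semantic mediation between two partners' order
# representations via a shared common model. All names and values are
# invented for illustration.

def to_common_model(msg: dict) -> dict:
    """Map partner A's local representation to the shared model."""
    return {
        "order_id": msg["OrderNo"],
        "amount_eur": round(msg["AmountUSD"] * 0.92, 2),  # currency conversion
        "delivery_date": msg["DelivDate"],                # already ISO 8601
    }

def from_common_model(msg: dict) -> dict:
    """Map the shared model to partner B's local representation."""
    return {
        "id": msg["order_id"],
        "total": msg["amount_eur"],
        "due": msg["delivery_date"],
    }

a_message = {"OrderNo": "A-17", "AmountUSD": 100.0, "DelivDate": "2004-11-19"}
b_message = from_common_model(to_common_model(a_message))
print(b_message)  # {'id': 'A-17', 'total': 92.0, 'due': '2004-11-19'}
```

Note that only the representation changes; the shared meaning of the order is preserved, which is exactly what semantic interoperability requires.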

An essential requirement for enterprise interoperability architectures is to preserve enterprise autonomy on issues like: selection of the computing platform, and the schedule of technical changes in it; selection of the service components made externally available; the evolution lifecycle of each offered enterprise application, including withdrawal of services already part of some collaboration; decisions on the kind of collaborations that are entered; decisions on the kind of partners that are accepted; and decisions on leaving existing collaborations.

VI.1.3 Interoperability architecture styles

The purpose of interoperability in an inter-enterprise setting can be reached using different architectural approaches, depending on at what layer the interoperability issues are addressed. This section is mainly concerned with the business, implementation, and execution levels of the above described BPMS paradigm. On those levels, we can draw an interoperability framework for business processes, as shown in Figure 44. The contractual layer addresses the agreement reached for the actual external business process to take place between enterprise services. The agreement is reached and maintained by business process and contract management, interleaved with data contents management. The local processes associated with the external process are enacted by business process execution. Transactions are often supported by a separate layer on top of standard computing and communication platforms.


[Figure 44 depicts a layer stack: a contractual layer (the cooperation agreement between enterprises) comprising data contents management, business process and contract management, and business process execution with assignment to backend systems; a transaction layer beneath it; and, at the bottom, the computing and communication platform.]

Figure 44 Levels of cooperation between enterprises.

Within this framework different architectural approaches can be taken: integrated, unified or federated. Each approach focuses on a different method of ensuring interoperability between local business processes or, in other terms, the consistency of collaboration within an external business process. Integrated approaches build collaboration on a technical integration foundation. These approaches ensure interoperability by using shared execution environments and shared communication conventions. Unified approaches build collaboration on independent interpretations of a shared model of business. These solutions ensure interoperability by using shared metamodels and concepts, and shared specification environments. Federated approaches establish and maintain collaboration between autonomous local services, each of which runs a local business process. The interoperability between these services needs to be addressed from information exchange and processing aspects; the semantics of the external, joint processing is also relevant. As we go from integrated to federated approaches, the scale of dynamicity of the collaboration and the use of metalevel infrastructure services for maintaining the interoperability increase. The focus of methods used for integrated and unified architectures is on the modelling, design and deployment phases of the system, while the focus for federated approaches by necessity is moved towards an operational-time management environment. Figure 45 illustrates the differences between integrated, unified and federated approaches.


[Figure 45 contrasts the three approaches: the integrated approach uses a single common model; the unified approach maps individual models to a common model; the federated approach couples individual models directly.]
Figure 45 The Integrated, Unified and Federated approach

Architectural approaches differ in the method of reaching agreement on the external business process model. The interoperability approach at the external business process level is further reflected in the requirements for the enactment and communication infrastructure. Integrated systems require integration solutions at all layers; the infrastructures use the same technical, semantic, and pragmatic solutions in all enterprises. The unified approaches are currently the topical ones: the shared process model is implemented over a heterogeneous platform using differing transformations, as in MDA. The federated approach minimizes the need for shared solutions at the implementation and execution infrastructure level. However, these approaches require additional high-level services to ensure process-aware interoperability at all. The benefits of contract-based solutions lie in the loose coupling of services, which in turn is necessary for autonomy preservation. The contracts involved need to include agreement on information flows, abstract processing, and the business rules agreed on, and make room for exceptional or emerging behaviour. Both agent-based solutions and open service market systems are suitable examples of federated approaches (for example projects, see [Fischer et al., 2001][Camarinha-Matos, 2003]). Infrastructure support for both kinds of solutions is introduced in Part 3 of this document.
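To make the contract-based federated style more concrete, the following sketch shows how a collaboration contract, recording agreed actions and business rules, could be checked at runtime before a partner commits to an action (the pragmatic interoperability aspect). The Contract class, the action names and the business rule are hypothetical and not taken from any of the cited projects.

```python
# Illustrative sketch (not from the source): a contract object in a
# federated setting records the agreed action set and business rules;
# each partner checks an incoming request against it before committing.

from dataclasses import dataclass, field

@dataclass
class Contract:
    allowed_actions: set
    rules: list = field(default_factory=list)  # callables: request -> bool

    def permits(self, request: dict) -> bool:
        """A request is acceptable only if the action was agreed and
        every business rule in the contract holds for it."""
        if request["action"] not in self.allowed_actions:
            return False
        return all(rule(request) for rule in self.rules)

# Hypothetical business rule: orders above a value limit are refused.
contract = Contract(
    allowed_actions={"place_order", "query_status"},
    rules=[lambda r: r.get("value", 0) <= 10_000],
)

print(contract.permits({"action": "place_order", "value": 5_000}))   # True
print(contract.permits({"action": "place_order", "value": 50_000}))  # False
print(contract.permits({"action": "cancel_order", "value": 10}))     # False
```

The point of the sketch is the loose coupling: neither partner needs to share an execution environment, only the contract against which requests are checked.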

The execution and implementation infrastructures should provide for interoperability at four different levels [Steen and ter Hofte, 2002]. At the bottom, connectivity at the network level is required for technical interoperability. The second layer captures communication between applications, preserving semantic interoperability, i.e. the contents of messages and information within exchanged documents. The third layer involves collaboration between people or applications, and coordination of business processes. At this level, some coordination standards start appearing, like ebXML and WSDL. At the topmost level, enterprises need ways of finding new partners, and of negotiating and closing contracts. The processing needs of the collaboration should be captured by enterprise modeling. However, each enterprise may view the collaboration model slightly differently; furthermore, the shared models evolve over time and changes are needed. For semantic interoperability, various ontologies can be used for matching together similar services, and similar information contents. The relationship between enterprise modeling, ontology-based infrastructure services, and business process management is still under further study, although a number of interesting projects are under way or emerging.


VI.1.4 Phases of Business Process management

In this document, the term Business Process Management (BPM) denotes activities and facilities for the design, enactment, static and dynamic management, and static and dynamic analysis of business processes' adequacy. The design phase involves business process modeling, verification and analysis. We trust modeling aspects to be covered in Part 2, but address aspects of CASE tools with verification and analysis features in this section. We classify modeling environments into categories based on their end-product to be used for enactment. The result reflects the requirements and architecture of the enactment environment, for example, whether it is centralized or distributed, and whether the focus is on generating application code or controlling the overall external process control flow. Enactment involves making the process model executable and the actual execution of applications according to the business process model. Here, different approaches are used: a) abstract execution of the model, b) generation and execution of code, c) invoking autonomous services to perform specified tasks, either running the sequence from the process model or monitoring the activity externally and reacting to exceptions only. In the enactment environment we find two aspects especially important: the types of abstract processing steps (ACID transactions, long-lived business transactions with compensation routines, service invocations without transactional semantics), and the type of communication support required from the potentially heterogeneous working environment. In addition, the enactment phase involves detection of and recovery from exceptions and needs for emerging behaviour.
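Enactment option (c), combined with long-lived business transactions and compensation routines, can be sketched as follows. The engine, the step names and the deliberately failing billing service are all hypothetical; the sketch only illustrates the control flow, not any of the systems surveyed here.

```python
# Minimal sketch (not from the source) of enactment option (c): an engine
# walks a process model and invokes autonomous services step by step; on a
# failure it runs the compensation routines of the already completed steps
# in reverse order (the long-lived business transaction style mentioned
# above). All step and service names are invented.

def enact(process, services):
    """process: list of (task, compensation) pairs; compensation may be None."""
    completed = []
    for task, compensation in process:
        try:
            services[task]()
            completed.append(compensation)
        except Exception:
            # Compensate completed steps in reverse order.
            for comp in reversed(completed):
                if comp is not None:
                    services[comp]()
            return "compensated"
    return "completed"

log = []

def bill():
    raise RuntimeError("billing failed")  # hypothetical failing service

services = {
    "reserve": lambda: log.append("reserve"),
    "bill": bill,
    "release": lambda: log.append("release"),  # compensation for "reserve"
}
process = [("reserve", "release"), ("bill", None)]

result = enact(process, services)
print(result, log)  # compensated ['reserve', 'release']
```

A production engine would additionally persist its state, handle timeouts and support parallel paths; the sketch only shows the control-flow skeleton that distinguishes compensation from ACID-style rollback.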
The management phase partially overlaps the enactment phase: it deals with partnership, resource allocation, management of NFA aspects, and communication channel management including transparencies like mobility transparency. Business process management will also require management of dynamic business change. This imposes requirements for more adaptive technology for process execution, such as late binding, rule engines, and adaptive processes for dynamic change during execution. The analysis phase involves collection of data about performance, usability, and exception situations, so that the business process models can be further improved. Beyond these life-cycle steps we need to consider the mechanisms that use the facilities for design, enactment, management and analysis. These can be centralized solutions or, in the case of virtual enterprise environments, distributed systems in themselves. These environments address aspects like partner selection, collaborative business process definition or negotiation, mapping to platform services, and collaboration contract management. The split into phases in this document is probably slightly different from other groupings, in that it gives a wide scope to the enactment phase. This favors capturing most of the architectural variation in the enactment section of the document.

VI.1.5 Interoperability issues


We believe that a clear distinction should be made between two important dimensions of process interoperability. The horizontal dimension addresses the interoperability between different processes, where interactions, services and service orientation, and interoperability between different languages and tools play an important role. Secondly, the vertical dimension refers to interoperability between different architectural domains and encompasses the relations between process models and other types of models (such as application or infrastructure models). The discussion in this section focuses on the horizontal dimension. However, in the solutions discussed, various vertical dimension aspects become visible. In particular, there will be vertically oriented facility chains. These chains are composed of process modeling tools, the three aspects of the enactment environment, and the feedback loop from the analysis phase. In each step of the chain, a consistent view of the general architecture is required. Achieving full interoperability between two business partners means that they are able to collaborate at all levels of their enterprise architecture. Interoperability does not only address the ability of software components to collaborate regardless of different languages, data formats, interfaces, execution platforms, communication protocols or message formats. A systematic approach to interoperability will also take into account interoperation issues at more abstract levels, such as business process interoperability. We characterise business process interoperability as the ability of business activities of one party to interact with those of another party, whether or not these business activities belong to different units of the same business or to different businesses. An integrated architecture approach considers that the first step when pursuing business process interoperability among collaborating partners is to ensure their process integration.
Process integration involves modeling and visualizing enterprise processes such that exchanges of "events" can "work" together in order to accomplish a number of business services or transactions. Besides the integrated approach, a unified or federated architecture approach can be taken. While the integrated approach requires a shared process model across business partners, the unified approach requires only that there is a shared metamodel for the shared processes. The actual processing, i.e. the vertical dimension, is excluded. The federated approach requires tools to exist for the partners to coordinate the selection and refinement of a shared process. Different approaches present different kinds of interoperability supporting services for applications: integration can be founded at design time by shared technology solutions, or can be supported by MDA tools which ensure that all correct transformations of the shared collaboration model are interoperable. Furthermore, the runtime environment can provide federated systems with services that allow management of the external process in all its interoperability aspects.

VI.1.6 Part structure

The management of business processes and workflows part is structured as follows. Section 2 discusses interoperability architectures that address business process management, introduces some management scenarios, and continues with practices of organising business process management. The final part of Section 2 lists research projects, industrial solutions and consortia recommendations in the field. Section 3 discusses the further work required on business process management aspects in the fields of research, markets, and standards.


VI.2 Relevant scenarios

This section demonstrates the needs of business process management with two case studies. The health-care case focuses on the use of distributed process manager technology. The insurance partner platform case brings up requirements on the strategic, business, and implementation levels of interoperability. Beside these cases, traditional supply-chain management problems and virtual enterprise approaches are of interest, as is aggregation of services in service oriented architecture (SOA) style, as can be found in other parts of this report.

VI.2.1 Health-care processes

This section presents a case of introducing process manager technology in the healthcare sector [Wangler et al., 2003b][Wangler et al., 2003a]. The interoperability scenario is essentially about making the different healthcare providers, such as primary care units, hospitals, and home healthcare units, work together throughout the patient process. One of the most important insights in business management during recent years is the awareness that organisations need to focus on the processes that create value for their customers. This is in order to ensure that value is created as efficiently as possible and that unnecessary or redundant activity is avoided [Scheer, 1998][Johannesson et al., 2000b]. This in turn means that the organisation's IT support needs to interact with business processes in a better way than is currently the case. Healthcare is by no means an exception; also here there is a great need for process orientation and for transparent communication between various actors and between IT systems [Bij et al., 1999]. The most important of the healthcare processes may be referred to as the patient process, i.e. the process where various healthcare providers interact with and for the patient in order to increase his or her quality of life [Vissers, 1998]. Like most businesses of today, healthcare is functionally organised in e.g.
primary care units, hospitals, and home healthcare units, each with their own more or less isolated information systems. More precisely, these systems are characterised by the fact that they: support single organisational functions very well, but with little adaptation to a process-oriented way of viewing things, i.e. where the intra- and inter-organisational processes can be efficiently coordinated; and have been created at widely differing points in time and hence by using different development paradigms, and by using different software and hardware platforms. As a consequence, they are difficult to integrate and to make collaborate over the patient process. In spite of these difficulties, it is not practically possible or even desirable to throw out existing IT systems and replace them with new ones developed from scratch. Instead one needs to develop and introduce methods and tools that can transparently integrate existing IT support, in such a way that new IT systems can be easily accommodated [Larsson, 2001].


VI.2.1.1 The VITA Nova Project

The VITA Nova project aims to contribute to a change-over to a process oriented way of viewing IT systems integration and of working in healthcare in order, in the long run, to increase quality, efficiency, and security for everyone involved: patients, relatives, healthcare personnel and other healthcare stakeholders. VITA Nova is a collaboration between the University of Skövde and Stockholm University/KTH; the healthcare providers Skaraborg Hospital, Hentorp Primary Care, the cities of Skövde and Falköping, and Capio Diagnostik AB; and the technology providers Visuera Integration AB and Unicom Care AB [VITA Nova, 2002]. The project will focus on the patient and the patient's family, and the individual healthcare provider. By studying the processes of healthcare from these perspectives it will be possible to identify the problems that the patient, the patient's family and professional roles experience with existing ways of working and with existing IT support. Patient and professional role are also comparatively stable over time, while the organisation, which otherwise is often taken as the basis for business studies, may be changed more frequently. More precisely, the mentioned perspectives involve: The patient and family oriented perspective stresses that processes are studied with a focus on which information needs to accompany the patient throughout the healthcare process to get optimal treatment. The healthcare provider perspective entails that processes are analysed to identify which information healthcare personnel need as well as how information will be made available. In order to investigate advantages and difficulties when introducing process manager technology in healthcare, a prototype in the form of a patient process for leg ulcer has been developed by means of the Visuera PM.
The reason for choosing this particular illness is that the regional hospital has leg ulcer as one of its specialities and hence has set up a regional Leg Ulcer Centre to deal with this. There are many roles and organisations involved in the patient process for leg ulcer: at the municipal home healthcare it is the district nurse, at the primary care centre a physician or a nurse, at the Leg Ulcer Centre a physician or a nurse, and at Capio Diagnostik a physician, a nurse or a biomedical assistant. Capio deals with various types of tests and samples. Capio staff receive information concerning the tests electronically and may enter the results into the PM in order to have them sent back to the organisation that requested the test.

VI.2.1.2 Results from the VITA Nova Study

The preliminary results from the VITA Nova case study show that the leg ulcer process instance for a particular patient may be quite complex, involving several roles and autonomous organisations, and where the communication today is done manually by regular mail, fax or phone calls. The case study shows that each organisation has its own routines, IT systems and even terminology, which makes integration of the complete process considerably more difficult. When the patient is transferred from one organisation to another, information about the patient is sent via regular mail or fax, or questions are answered by phone. Often the communication is triggered by some person at one of the organisations contacting a person at one of the other organisations by phone and asking for medical record information about the patient. This information is often filled in manually in particular forms that are sent by mail or fax. After that, the information is entered manually into the receiving organisation's information system.


The communication often has the character of a point-to-point solution. This means that if a patient has first been taken care of by the nurse at the municipal home healthcare unit and later at the primary healthcare centre and then at the Leg Ulcer Centre, the latter organisation first has to contact the home healthcare unit and then the primary healthcare centre in order to acquire the complete information about the patient's medical history. The municipality and primary healthcare centre may use a different terminology than the Leg Ulcer Centre, which complicates the communication further. By introducing the process manager to deal with the communication between organisations and professional roles, the communication may be automated. As soon as a patient is transferred between organisations, the relevant information may be transferred electronically, informing the receiving unit about earlier treatments at other organisational units. Furthermore, by introducing the process manager, the processes at the various organisations will also be visualised. Thereby, means for analysing the processes are created, i.e. the process manager will contribute to the development of value-creating processes by identifying potential for improvement. Terminology used when exchanging patients needs to be standardised, or at least differences have to be clearly identified, in order to increase comprehensibility and minimize the risk of misunderstandings. Another important advantage of the process manager is that involved personnel may be notified about visiting patients or other routine matters. This is important in connection with the communication between organisations: the risk of misunderstanding or forgetting planned treatments is minimized due to improved communication between healthcare providers. Using a process manager to integrate existing and often dissimilar systems also enables long-term process monitoring and quality assessment.
This is made possible by logging, in time-stamped form, the information produced during the process, such that it can be used for e.g. measuring the productivity, cost etc. of the process. However, there are several problems with introducing a process manager. First of all, patients are often dealt with in an ad hoc manner and not according to some standardised process. This entails that there may be hostility to structuring the process among individual healthcare providers. Secondly, personnel who have been working for a long time in the same organisation and who are safe in their professional role usually do not want to be forced to work according to routines set up by someone else. Therefore, processes must be designed such that they are flexible enough to allow as much individual adaptation as possible. However, support for the integration of applications within and between healthcare providing organisations will change the way of working for healthcare staff and provide support for new forms of healthcare, new ways of working and collaborating over organisational borders. In order to protect people's privacy, Sweden has several laws that limit the availability and the transfer of patient information between different actors and care providers. The Secrecy Act [SFS, 1992] is an example of such a law, according to which the primary purpose of secrecy is to protect people's privacy. In healthcare, secrecy applies to information about the state of health and other private circumstances of individuals if it is not clear that the information can be revealed without any disadvantages for the private person or someone close to him. Earlier work has shown that there are clear shortcomings in how municipalities manage patient information in home healthcare concerning privacy and secrecy [Åhlfeldt, 2002].
Above all, it is the users' insufficient knowledge about information security and the absence of security strategies and policies in the organisation that is the main cause of these shortcomings.
Page 191 of 366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

VI.2.2 Insurance Partner Platform

VI.2.2.1 Introduction and Context

Using Internet technologies, many companies develop new business models to realize a tighter integration of their business partners and customers. Insurance companies, for example, develop such business models either to reduce their costs of administration or to establish new channels of sales and distribution. Insurance companies have to establish a well-defined strategic position in the network of their competitors - especially when they join together to establish a common Internet platform for their sales partners, e.g. agents and brokers. The following problem scenario describes a B2B sales and distribution platform for insurance partners based on Internet technology ("Insurance Portal"). This problem scenario is a special instance of the buyer-seller problem. The main objective of the Internet platform is to support (independent) insurance agents in reducing the cycle time and costs of administration that arise from the interaction with different insurance companies. Additionally, agents should have more time to offer the best advice to the consumer. Note: an agent works for several (competing!) insurance companies on a commission basis. Developing and operating such a platform raises various interoperability problems/questions. These problems/questions are structured according to the following levels: (a) strategic level, (b) business level, (c) implementation level and (d) execution level. In the following, the general business model of the scenario is provided and an overview of the software architecture of the insurance platform is given. Afterwards, some typical interoperability problems/questions are described considering the aforementioned levels, the presented software architecture, and the interconnection points of the software architecture.

VI.2.2.2 Business Model

Figure 46 describes the business model of this scenario, i.e.
how the different business participants interact with each other and how they create business value. The customer interacts with his sales responsible, e.g. agent, broker, agencies etc. (step 1). The sales responsible uses an Internet-based platform (B2B sales and distribution platform) to execute his business processes such as offer management, order management, policy management etc. For example, he requests certain product offers (step 2), which are calculated and returned to him (step 5), and which he can then send to his customers (step 6). The Internet-based platform, or more precisely the corresponding company, interacts with different sub providers such as application hosting companies, security companies, customer information suppliers etc. to fulfil its job (steps 3 and 4). Additionally, it interacts with the insurance companies to exchange product data, customer data etc. (steps 3 and 4). Finally, the customer makes a contract with the insurance company which provided the best offer, and pays his insurance fee to the insurance company (step 7). The insurance company delivers the appropriate contracts and pays the corresponding sum if the insured events occur, such as end of contract, accident, death etc. (step 8).


[Figure: the Customer interacts with the Sales Partner (e.g. agent, broker, agencies etc.), who uses the Insurance Platform (operated by the platform company); the platform interacts with the Sub Service Providers and the Insurance Companies; the numbered arrows correspond to the interaction steps 1-8 described above]
Figure 46 Business Model Insurance Partner Platform

VI.2.2.3 Software Architecture

Figure 47 describes the general software architecture of the insurance partner platform. The arrows depict the different places of interoperability.
[Figure: a Web Browser accesses the Insurance Partner Platform via a Web Server / Servlet Server, backed by an Application Server / Business Services with a platform database (internal data) and a (temporary) database of external data (e.g. products etc.); the platform integrates Analysis and Retrieval Services, Security Services, and Customer Information Services from the Sub Service Providers, and Insurance Components, Insurance Services, and Insurance Data from the Insurance Companies]
Figure 47 General Software Architecture of Insurance Partner Platform

Insurance Partner Platform: The user interface of the insurance partner platform is based on web browser technology. Access to the business functionality and generation of the user interface is handled via the web server / servlet server. The business functionality runs on an application server. The application server stores platform-internal data in the platform database. External (and temporary) data are stored in the database for external data. Via the application server, sub service providers and insurance companies are integrated into the insurance partner platform.

Insurance Companies: The insurance companies provide components (e.g. product calculators, risk check modules etc.), services (e.g. printing, mailing etc.), and data (e.g. customer data, contract data, product data etc.), which have to be integrated into the insurance partner platform.


Sub Service Providers: The sub service providers provide services such as analysis and retrieval services (e.g. data analysis, management reports, statistical evaluations etc.), security services (e.g. trust centres, certificate management etc.), and customer information services (e.g. credit agency services, market evaluation etc.), which have to be integrated into the insurance partner platform.

VI.2.2.4 Interoperability Problems: Strategic Level

At the beginning, the business strategy has to be established and the following (interoperability) questions have to be answered at the strategic level:

Which are the processes and services (products) to be realized on the platform? Processes and services (products) are identified. On this level, intra-organizational business processes (e.g. user management on the platform) and inter-organizational business processes (e.g. application and claims processes) can be distinguished.

Which are the appropriate business partners to develop and run the platform? According to the required processes and services (e.g. insurance core services, consulting services, implementation and provider services), partners are involved with different contractual relationships (e.g. associate, supplier, customer, etc.).

Does the business plan of the project correspond with the business plans of each partner? Each partner has to agree upon the platform strategy. For example, the standardization of strategies of competitors participating in the platform may imply the need for an investigation under antitrust law. On the other hand, advantages which can be realized by one partner may damage the business of another (e.g. insurance company A delivers a particular insurance policy within one day, insurance company B in seven days).

VI.2.2.5 Interoperability Problems: Business Level

On the business level, the types of processes have to be determined. Each process has a realization state. Based on the realization state, a change request specification can be generated.
The next step is to model the business processes in detail with a special focus on the involved products, the interfaces between the business actors, and the involved roles of each business actor. Business processes can be divided into the following types:

insurance core service processes, e.g. application processes, claims management,
value adding processes, e.g. cash management processes, event management,
development processes, e.g. business and software development based on the core elements "products", "processes", "organizational units" and "information technology" as described in [Bayer et al., 1999],
business operations processes, e.g. process integration of business partners, and
additional services, e.g. legal advisor services, training and learning.

The following list shows some areas of interoperability problems on the business level:

Product Management: In every realization state, a set of products is integrated into the platform, which imposes new requirements on the business processes. Implications for the software development and integration of the insurance partners can thus be evaluated as early as possible.

Certification and process integration of business partners: Each actor participating in the platform realization can be certified based on his business processes. Some criteria are complexity of interfaces (business operations as well as data flow), process benchmarks, availability and integrity.

Training and Learning: Business processes can be documented online for learning the sequence of operations of core processes as well as administrative processes.


Pricing Model: If process cost analysis is done using the business processes, well-calculated reductions of the regular license fee can be granted to the customer. If insurance companies, for example, want to consolidate their customer database, the platform company can reduce the cost of the business process "Customer Data Modification" to encourage the agents to reach the insurance partners' objectives.

Test Management: In combination with the product model, a set of test cases can be developed as a specification for testing the platform application.

VI.2.2.6 Interoperability Problems: Implementation Level

After finishing the requirement definition and the business description, the next level is the implementation level. The platform consists of a core service application, a dynamic HTML-based user interface, complex application modules etc. On this level, typical interoperability problems are:

How can the different viewpoints of the requirement definition be integrated, e.g. how can the metamodels of the specification models be integrated?
Which implementation technologies and target platforms will be used and how will they be integrated?
What are the different modules of the implementation environment and how can they be integrated?
Which runtime libraries can be used and how can they be bound to the development environment?

VI.2.2.7 Interoperability Problems: Execution Level

The execution level of the platform project is influenced by short release cycles - especially driven by short-term content such as news and events, and by a high fluctuation of platform users. Business operation processes, such as content management processes, user management, and first and second level support, are documented by exporting all required information into a process-based operating instructions manual. This manual is available online for the responsible operators and support agents.
Some interoperability problems on the execution level are:

Data conversions: customer data, contract data, product data etc.
Component integrations: how can different components (of functionality) be operated within a single business service (even if they are realized with different technologies)?
How can long-lasting transactions be synchronized and consistently integrated?
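As a hedged illustration of the data conversion problem, the sketch below maps a customer record from one insurer's schema to a platform schema; all field names and the mapping itself are invented for this example, not taken from any real system.

```python
# Hypothetical sketch of an execution-level data conversion: mapping a
# customer record from insurer A's (invented) schema to the platform's
# (invented) schema. A real conversion would also handle date formats,
# encodings, and missing or inconsistent fields.

def convert_customer(record: dict) -> dict:
    """Convert an insurer-A customer record into the platform format."""
    return {
        "customer_id": record["kundennummer"],
        "full_name": record["vorname"] + " " + record["nachname"],
        "birth_date": record["geburtsdatum"],  # assumed to be ISO 8601 already
    }

a_record = {
    "kundennummer": "A-4711",
    "vorname": "Eva",
    "nachname": "Muster",
    "geburtsdatum": "1970-05-01",
}
print(convert_customer(a_record)["full_name"])  # -> Eva Muster
```

Even such a trivial mapping already embodies schema knowledge (which source fields exist, how names are composed), which is why data conversions recur as an interoperability problem whenever partners join the platform.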

VI.3 Business Process Modelling Methodologies, Tools, and Languages

Remark: The page http://is.twi.tudelft.nl/~hommes/toolsub.html (by Bart-Jan Hommes, Delft University of Technology) provides an extensive overview of Business Process Modeling tools (sorted by techniques).


VI.3.1 Methodologies and resulting application architectures

In the literature, numerous publications about workflow development methodologies can be found, ranging from project experiences to research papers. Most of them are task-oriented, i.e. they focus on the tasks during the development process and their order. Here, the artifacts (results) produced during the development process are used as the starting point. These artifacts are described in the following subsection. Afterwards, three general development methodologies are presented. They are characterized by the order in which the artifacts are produced [Junginger et al., 2000].

VI.3.1.1 Artifacts

The most important artifact when developing a workflow application is of course the (final) application that goes into production. For developing the application, different types of models can be distinguished; we denote them as "graphs" (see Figure 48), cf. [Karagiannis et al., 1996]:

Business Graph: A business process model that describes the overall business process (independent of IT aspects such as the application architecture). This model should be easily understandable to "all" employees of the organization. It is usually modeled in a business modeling tool using a business process modeling language and may contain, for example, manual activities which are not supported by IT at all. Often the business graph is used for calculating cycle time, personnel need, etc. using mechanisms such as simulation.

Execution Graph: A model-based representation of the elements of the final application. Note that the execution graph usually consists of more than just the process definition which is finally executed by the workflow engine. Additional elements can be (depending on the application architecture): UML models, if some of the invoked applications are implemented by using an object-oriented CASE tool, specific process models, e.g. reference models of an ERP system, data models, etc.
Note that the granularity of the invoked applications is an important factor which influences the design of the process definition that is executed by the workflow engine [Karagiannis et al., 1996]. Therefore, the abstraction level of the process definition usually differs from that of the business graph.

Workflow Graph: An intermediary model that helps in deriving the execution graph from (and/or linking it with) the business graph. It can be seen as an extension of the business graph in which it is marked which parts are to be realized by which technologies or systems. Here, the granularity of the invoked applications is considered. The workflow graph is often not made explicit in workflow projects. However, in the authors' opinion it helps, in particular in large, complex applications, to identify and assess integration aspects. As the modeling language, an extended business process modeling language can be used. Additionally, it is much more sensible to transfer the workflow graph into the workflow management system used (and not the business graph).
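One way to picture the relation between the three graphs is to treat the workflow graph as the business graph annotated with realization decisions; the activity names and technology labels below are invented for illustration.

```python
# Invented example: a business graph as an ordered list of activities, the
# workflow graph as the same activities annotated with the realizing
# technology, and the process definition as the IT-supported subset.

business_graph = ["receive application", "check risk",
                  "calculate premium", "send offer"]

realization = {
    "receive application": "web form",
    "check risk": "legacy risk-check module",
    "calculate premium": "insurance component",
    "send offer": "manual",  # not supported by IT, stays outside the engine
}

workflow_graph = [(activity, realization[activity])
                  for activity in business_graph]

# Only IT-supported activities enter the process definition that is
# executed by the workflow engine (part of the execution graph).
process_definition = [a for a, tech in workflow_graph if tech != "manual"]
print(process_definition)
```

The manual activity is present in the business and workflow graphs but absent from the process definition, which mirrors the difference in abstraction level noted above.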


[Figure: the Business Graph (business process modeling language) is refined into the Workflow Graph (e.g. extended business process modeling language), which leads to the Execution Graph (set of different modeling languages), comprising the process definition, UML models, reference models of an ERP system, and legacy systems without modeling]
Figure 48 Business Graph, Workflow Graph, and Execution Graph

Currently, it is common to model the business graph in a business modeling tool and the execution graph in a whole set of tools, such as the build-time components of workflow management systems, CASE tools etc. If back-and-forth transformation interfaces are available, this helps to handle the interrelationships between the models. Otherwise, organizational measures have to be taken during development to assure consistency between the models.

VI.3.1.2 Workflow Development Methodologies

A development methodology can be characterized by the order in which the artifacts described in the last section are created. Here, three general methodologies are distinguished: Top-Down Methodology, Bottom-Up Methodology, and Prototyping Methodology.

The following subsections explain these methodologies and discuss under which circumstances each one can be applied. The notation used is depicted in Figure 49, cf. [Jablonski et al., 1997, pp. 146-151].

Figure 49 Notation for Methodologies (symbols for: Start, End, Task, Result(s), Language(s), Actor(s); arrows denoting "next task", "next task (optional)", and "done by / modeled in")


The methodologies are presented schematically. Tasks such as tests, reviews and quality assurance are omitted. Table 3 shows indicators for/against the usage of each of the presented methodologies, based on experience.

VI.3.1.3 Top-Down Methodology

The main idea of the top-down methodology is to design the business graph, workflow graph, and execution graph in a straightforward manner (see Figure 50). So far it can be seen as an application of the waterfall model. However, a pure application of the waterfall model is unrealistic. During the design of the execution graph it is nearly always necessary to adapt the business and workflow graphs. Although this requires significant effort, it is an aspect that is not mentioned at all in some publications. It is important to consider in the project plan the availability of business experts during these tasks. On the other hand, an advantage of the top-down methodology is that the business graph is usually designed without considering potential or merely imagined IT restrictions, often leading to new and innovative business ideas. This might be helpful for mission-critical business processes. The top-down methodology can also be appropriate for complex business processes, because it avoids getting lost in IT details before knowing the overall picture. But to reduce re-work during the design of the execution graph, one should aim to stay at an appropriate level of abstraction when building the first version of the business graph, i.e. representing all necessary business aspects without being too detailed.

[Figure: business experts design the Business Graph (v.1) in a business process language; business & IT experts design the Workflow Graph (v.1) in an extended business process language; IT experts design the Execution Graph (v.1), while business & IT experts adapt the business and workflow graphs (v.n) using business process & workflow languages, UML, etc.; finally, IT developers implement the final application using workflow and programming languages]

Figure 50 Top-Down Methodology


VI.3.1.4 Bottom-Up Methodology

As its name indicates, the bottom-up methodology is an inversion of the top-down methodology. One starts directly by designing the execution graph, which optionally serves afterwards as input for the design of the business and workflow graphs (see Figure 51). This IT-driven approach is primarily applicable if, e.g., the invoked applications already exist (and are not to be changed) or if an embedded workflow management system is used, i.e. if there is not much freedom in designing the execution graph. The main danger of the bottom-up methodology is that the application fixes business aspects which have not been reflected upon thoroughly.
[Figure: IT experts design the Process Definition (as part of the Execution Graph) in a workflow language; IT developers design the rest of the Execution Graph (e.g. in UML & other languages) and implement the final application using workflow and programming languages; optionally, business & IT experts derive the Business and Workflow Graphs in an (extended) business process language]

Figure 51 Bottom-Up Methodology

VI.3.1.5 Prototyping Methodology

This methodology can be seen as the transfer of prototyping approaches from "traditional" software development to workflow application development. The final application is built incrementally, where each step consists of a parallel design/adaptation of the business, workflow, and execution graphs (see Figure 52). Depending on the project circumstances and the quality of a produced prototype, two types of prototypes can be distinguished: throw-away prototypes and incremental prototypes. This methodology is supported by appropriate mechanisms of the development environments, such as animation mechanisms of the workflow management system(s), GUI builders, back-and-forth transformation interfaces, etc. We see prototyping as a good approach for discovering technical problems much earlier than with, in particular, the top-down methodology.


[Figure: business & IT experts design the Business, Workflow, and Execution Graphs (v.1) using business process & workflow languages, UML, etc.; IT developers implement Application (v.1) using workflow and programming languages; the graphs are then redesigned (v.2) and reimplemented, iterating until the final application is reached]

Figure 52 Prototyping Methodology

Table 3 Indicators for/against the Usage of the Methodologies


VI.3.1.6 Workflow Application Architecture

In developing workflow-based applications, not only the architecture of the WFMS itself is of importance, but also the architecture of the workflow application itself [Junginger et al., 2000][Junginger et al., 2004]. Figure 53 shows some typical workflow application architectures. Architecture a) is the simplest one. Here, a single workflow management system is the leading system, which controls all invoked applications.

[Figure: a) Single Workflow Management System with invoked applications; b) Single Workflow Management System, horizontal integration with other applications (e.g. legacy systems); c) Multiple Workflow Management Systems, vertical integration; d) Multiple Workflow Management Systems, horizontal integration]

Figure 53 Typical Workflow Application Architectures

If, for example, batch-based legacy systems (and also some ERP systems) have to be integrated, this leads to architecture b) in Figure 53. The integration of such systems can be seen as triggering the other application, e.g. by providing some (application) data that is processed during the next batch run. Therefore, it is not sensible to wait for the result before the business process is processed further. Note that such situations imply a certain design of the process definition. Often workflow management systems are part of a larger application architecture, e.g. in e-business applications. If multiple workflow management systems are applied, they have to support interfaces such as the WfMC interoperability interface (interface 4 of the WfMC reference model). The architectures c) and d) in Figure 53 depict two examples. The terms "horizontal" and "vertical integration" have the same meaning as in the architectures a) and b); in WfMC publications the terms "chained sub-processes" and "nested sub-processes" are used. Another aspect that has to be taken into account is the architecture of the used workflow management system itself. Similar to [zur Muehlen and Allen, 2000], we distinguish embedded and autonomous workflow management systems. Their schematic architectures are shown in Figure 54 (of course, both types of workflow management systems possess a number of additional components). Embedded workflow management systems are defined as systems that provide closely integrated applications, e.g. for document management. One type of embedded workflow management system is built on top of (document-oriented) groupware systems. Another type is embedded in ERP systems. Here, the vendors often provide so-called reference models, which either describe the system and can be used during implementation, or can even be executed by an integrated workflow engine.

[Figure: an Embedded Workflow Management System contains a workflow engine and closely integrated applications, alongside other applications; an Autonomous Workflow Management System contains a workflow engine that drives invoked applications]

Figure 54 Embedded vs. Autonomous Workflow Management Systems

The architectures presented above show that the development environment usually consists, besides the build-time components of workflow management systems, of a whole set of tools, e.g. object-oriented or conventional CASE tools, web design tools, tools for the reference models of ERP systems etc. This means that methodologies have to consider all elements of the application architecture and should not focus exclusively on the workflow part.

VI.3.1.7 Relationships to MDA and Architectures

In modeling enterprises and business processes, it should be noted that the result can be used in different ways. There are approaches for transforming the model into a set of interoperable application implementations (MDA), executing the abstract model in a distributed environment (distributed workflow management systems), and using the model as a contractual element for monitoring the conformance of the collaboration's behaviour to the contract. Various requirements implicitly set for the operational-time environment are brought up when appropriate in the following subsections. Regardless of the architecture and the overall challenges aimed at, the bottom layer of the system runs services (or tasks). An essential challenge is to describe those services in such a way that properties of the composition can be analyzed, either statically, as the composition is designed, or dynamically, as a conformance requirement. Various platforms can be used, ranging from distributed object and component platforms (e.g., CORBA [OMG2000], DCOM [DCOM, 2004], EJB [Roman et al., 2001]; surveys in [Urban et al., 2001][Bichler et al., 1998]) to distributed workflow engines with proprietary methods of addressing services, and to service-oriented business processes. The component-based approach to B2B e-commerce is appropriate when a small number of partners is involved within an enterprise [Cobb, 2001]. With respect to the development of service-oriented architectures, it is already traditional to use WSDL [WSDL, 2002] (Web Service Description Language). WSDL describes a service as a set of its visible operations, either proactive or reactive announcements, or asynchronous or synchronous interrogations. Linkage between services is specified with ports.
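The WSDL view of a service, a set of named operations exposed through a port, can be mimicked with plain data structures; the operation and message names below are invented and do not come from any real WSDL file.

```python
# Illustrative sketch only: a service as a set of visible operations, where
# an operation without an output message models a one-way announcement and
# one with both messages models a synchronous interrogation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Operation:
    name: str
    input_msg: Optional[str] = None
    output_msg: Optional[str] = None

@dataclass
class Port:
    name: str
    operations: List[Operation] = field(default_factory=list)

quote_port = Port("QuotePort", [
    Operation("requestQuote", "QuoteRequest", "QuoteResponse"),  # synchronous
    Operation("notifyPolicyIssued", "PolicyNotice"),             # one-way
])

one_way = [op.name for op in quote_port.operations if op.output_msg is None]
print(one_way)  # -> ['notifyPolicyIssued']
```

The point of such a structural description is exactly the one made above: given only the port and its operations, a composition can be checked statically (e.g. do the message types of linked operations match?) without running the services.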


Formalisms like DAML-S [DAML-S, 2001] allow the extension of operation descriptions by annotations of pre- and postconditions. The annotations allow taking the context of an operation into consideration before it is fired. The conditions can use input and output parameters of the service call, but also other context information. Although these conditions could be used directly to follow through the lifecycle of a service, the formalism does not support an explicit model of state. WSCL [Banerji et al., 2002] approaches behaviour descriptions differently: it permits the specification of message classes, but also of transitions as message pairs. Associated with a transition can be a condition over the documents exchanged in the incoming (first) message. This is a more restricted model than the one above. Modeling message channels with type-safety and verifiability properties has also been suggested; these proposals are based on process calculus [Honda et al., 1998][Gay and Hole, 2000][Abiteboul et al., 2000]. A comparison of the expressiveness of some languages is discussed in [Kiepuszewski, 2002][Kiepuszewski et al., 2002].
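The idea of pre- and postcondition annotations can be sketched as a wrapper that evaluates conditions over the call's parameters and context before and after firing the operation; this is in the spirit of DAML-S, not its actual syntax, and the operation and conditions are invented.

```python
# Hedged sketch of condition-annotated operations: preconditions and
# postconditions over the context and the call's parameters/result.

def with_conditions(pre, post):
    def decorate(op):
        def guarded(ctx, **params):
            if not pre(ctx, params):
                raise RuntimeError("precondition of %s not met" % op.__name__)
            result = op(ctx, **params)
            if not post(ctx, params, result):
                raise RuntimeError("postcondition of %s violated" % op.__name__)
            return result
        return guarded
    return decorate

@with_conditions(
    pre=lambda ctx, p: ctx.get("customer_verified") and p["amount"] > 0,
    post=lambda ctx, p, r: r["premium"] > 0,
)
def calculate_premium(ctx, amount):
    # Invented business logic: 2% of the insured amount.
    return {"premium": round(amount * 0.02, 2)}

print(calculate_premium({"customer_verified": True}, amount=10000.0))
# -> {'premium': 200.0}
```

As the text notes, such conditions guard individual firings but do not by themselves give an explicit state model of the whole service lifecycle.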


VI.3.2 Process models, languages, and representations

VI.3.2.1 Process modeling perspectives

To support modeling, control and execution of processes, a workflow management (or business process automation) system needs information about the process and the organizational and technical environment. This information can be categorized in five perspectives [Jablonski and Bussler, 1996b]:

The process perspective or control flow perspective specifies the execution order of tasks, using different control structures (sequence, alternative, parallelism, etc.).
The organizational perspective specifies the organizational structure and resources.
The informational perspective or data perspective deals with control data (or workflow-relevant data, e.g. used for routing decisions) and production data (or business data, e.g. information objects accessed by workflow steps).
The functional perspective or task perspective describes the functionality of process steps. A task is a logical unit of work with characteristics like the set of operations to perform, a trigger, a due date, etc.
The operational perspective describes the elementary operations (one task may consist of several operations). Operations may be executed manually or invoke external applications (e.g. text editor, accounting system, etc.).
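The five perspectives can be brought together in a single task description; the sketch below uses invented attribute names and an invented claims-handling task as one possible encoding.

```python
# Sketch (invented structure): one task carrying information for all five
# perspectives of [Jablonski and Bussler, 1996b].

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str                                            # functional
    successors: List[str] = field(default_factory=list)  # process/control flow
    role: str = ""                                       # organizational
    data_in: List[str] = field(default_factory=list)     # informational
    data_out: List[str] = field(default_factory=list)    # informational
    operations: List[str] = field(default_factory=list)  # operational

check_claim = Task(
    name="check claim",
    successors=["approve claim", "reject claim"],  # alternative routing
    role="claims handler",
    data_in=["claim form"],
    data_out=["assessment"],
    operations=["open document", "invoke external risk-check application"],
)
print(check_claim.role)  # -> claims handler
```

Each attribute corresponds to one perspective, which makes explicit that a workflow definition language has to cover considerably more than the control flow alone.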

Despite widely known (and always similar) problems, research and industry have failed to provide a common standard for the definition of workflow or business process models. Quite the contrary applies, as vendors have implemented proprietary definition languages, tailored to the needs and features of their workflow (and business process automation) systems. Numerous workflow models have been developed, based on different modeling concepts and on different representation models [Eder and Gruber, 2002]. With respect to different points of view, each concept has its characteristics, strengths and weaknesses. Therefore, a process described in a particular model may be more suitable for a specific kind of consideration (e.g. conceptual comprehension) than in a different model. [Anderson et al., 1997] provides an overview, analysis, and comparative study of several representations (done during the analysis phase of the Process Specification Language PSL), which are: ACT, A Language for Process Specification (ALPS), AP213, Behavior Diagrams, Core Plan Representation (CPR), Entity Relationship (ER), Functional Flow Block Diagrams (FFBD), Gantt Charts, Generalized Activity Networks (GAN), Hierarchical Task Networks (HTN), IDEF0, IDEF3, <I-N-OVA> Constraint Model, Knowledge Interchange Format KIF, O-Plan Task Formulation, OZONE, PAR2, Part 49, PERT Networks, Petri Nets, Process Flow Representation (PFR), Process Interchange Format (PIF Core), Quirk Model, Virtual Process Modeling Language (VPML). Additionally, they identified five supporting basic representations which are quite frequently used in the above-listed representations:

1) AND/OR Graphs,
2) Data Flow Diagrams (DFD),
3) Directed Graphs (Digraphs),
4) State Transition Diagrams (STD) and
5) Tree Structures

[van der Aalst and ter Hofstede, 2003] state that every attempt to give a complete overview of all modeling languages, models and techniques is destined to fail, and that: "Workflow technology continues to be subjected to on-going development in its traditional application areas of business process modeling and business process coordination, and now in emergent areas of component frameworks and inter-workflow, business-to-business interaction." Addressing this broad and rather ambitious reach, a large number of workflow products, mainly workflow management systems (WFMS), are commercially available, employing a large variety of languages and concepts based on different paradigms. Several authors offer workflow categorizations based on modeling concepts and/or language concepts:

[Mentzas et al., 2001] lists three basic categories of workflow specification languages and related techniques:

Communication-based techniques, which mainly stem from work on the Conversation for Action model [Winograd and Flores, 1987]. This technique assumes that the objective of business process reengineering is to improve customer satisfaction. It reduces every action in a workflow to four phases based on communication between a customer and a performer: preparation, negotiation, performance and acceptance.

Activity-based techniques, which focus on modeling the work instead of modeling the commitments among humans. Such methodologies model the tasks involved in a process and their dependencies. It should be noted that the activity-based approach is consistent with object orientation, cf. [McCarthy and Sarin, 1993] and the object-oriented workflow system Oz in [Ben-Shaul and Kaiser, 1994].

Hybrid techniques, which can be considered as a combination of the communication-based and the activity-based techniques [Georgakopoulos and Rusinkiewicz, 1997].
[zur Mühlen and Becker, 1999] distinguish five distinct groups of workflow specification languages, based on observations by [Carlsen, 1997]:
- IPO (Input-Process-Output)-based languages, which describe a workflow as a directed graph of activities, e.g. IBM MQ-Series Workflow [Leymann and Altenhuber, 1994].
- Speech-Act-based approaches, which model a workflow as an interaction between (at least) two participants that follows a structured cycle of conversation; the phases negotiation, acceptance, performance and review are distinguished. E.g. Action Technologies' Action Workflow [Medina-Mora et al., 1992].
- Constraint-based modeling methods, such as the Generalized Process Structure Grammar (GPSG) [Glance et al., 1996]. These approaches describe a process as a set of constraints, leaving room for flexibility that is otherwise limited by the restrictions of the IPO- or Speech-Act-based approaches. Constraint-based modeling languages are typically text-based and resemble traditional programming languages, whereas a graphical representation of these models seems difficult.

Page 205 of 366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

- Role-modeling based process descriptions, such as Role Activity Diagrams (RADs) [Odeh et al., 2002].
- Systems thinking and system dynamics, which are used in conjunction with the concept of learning organizations [Senge, 1990].

[Gruber, 2004] distinguishes between workflow modeling concepts and workflow representation concepts.

Modeling concepts:
- Petri-Net variants: In a Petri-Net-based model, activities are modeled as transitions, arcs represent dependencies between activities, and places are used to model the internal states of the workflow. See van der Aalst et al., e.g. [van der Aalst and ter Hofstede, 2002], as a starting point; see also the Workflow Patterns (http://tmitwww.tm.tue.nl/research/patterns).
- Precedence graphs: Numerous descendant methodologies like PERT and CPM are widely used, especially in the project management and product development domains [Pozewaunig et al., 1997].
- Precedence graphs with control nodes: A precedence graph with control nodes is a precedence graph as described above, augmented with control nodes like split and join nodes in order to establish control structures such as parallel, conditional and alternative structures [Eder and Gruber, 2002][Zhao and Stohr, 1999].
- State charts: State charts were originally developed for reactive systems (e.g. embedded control systems in automobiles) and have been quite successful in this area. State charts reflect the behavior of a system in that they specify the control flow between activities [Wodtke and Weikum, 1997].
- Script languages: Script languages allow the description of workflows as in traditional programming languages, e.g. [Eder et al., 1997].
- Rule-based languages: In rule-based languages workflows are composed of a set of rules. Each rule consists of two parts, called the left-hand side (lhs) and the right-hand side (rhs), i.e. lhs → rhs. The program executes based on the dynamic content of the working context, firing a rule when its lhs is enabled [Bonner, 1999][Casati et al., 1996][Kappel et al., 1995].

Representation concepts:
- Graphical representation (e.g. workflow graph)
- Text-based representation (e.g. programming-language style)

VI.3.2.2 Meta models and meta languages

[Rosemann and zur Muehlen, 1998] distinguish between a) meta data models, which characterize notations that can be used for information modeling purposes, and b) meta process models, which describe the modeling process using a specific method. Additionally they state that every meta model is based upon another meta model, e.g. the notation of ER-diagrams may be explained using UML. If the similarities of a number of meta models are consolidated in one universal model, this model is called a reference meta model.
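The rule-based modeling style mentioned above (rules of the form lhs → rhs, fired against the dynamic content of the working context) can be sketched in a few lines. This is an illustrative interpreter, not any of the cited languages; all names are invented, and conflict resolution is deliberately naive (the first enabled rule fires).

```python
# Illustrative sketch of a rule-based workflow language: each rule is an
# (lhs, rhs) pair; a rule is enabled when its lhs predicate holds on the
# working context, and firing it lets its rhs update that context.
# All names are invented; conflict resolution is naive (first enabled rule).

def run(rules, context, max_steps=100):
    """Fire enabled rules until none is enabled (or a step limit is hit)."""
    for _ in range(max_steps):
        enabled = [rule for rule in rules if rule[0](context)]
        if not enabled:
            break
        _, rhs = enabled[0]
        rhs(context)
    return context

rules = [
    (lambda c: c["state"] == "received", lambda c: c.update(state="checked")),
    (lambda c: c["state"] == "checked",  lambda c: c.update(state="approved")),
]
print(run(rules, {"state": "received"}))   # {'state': 'approved'}
```

Real rule-based workflow systems add event triggers, priorities and transactional semantics on top of this basic match-fire cycle.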


Support for heterogeneous processes (human-centered and system-centered), flexibility and reuse are important challenges for the design of process modeling languages. Therefore, process modeling languages at a high level of abstraction are needed [Gruber, 2004]. To deal with the problem of a missing standard, meta models and meta languages for the definition of processes can serve as an integration platform for the exchange of process models that are specified in proprietary languages. Research attempts to define workflow meta models can be found e.g. in [Jablonski and Bussler, 1996], [Rupietta, 1997] or [Eder and Gruber, 2002]. Their expressiveness can serve as a benchmark for the selection of an application-specific modeling language, and they can be used for the application-independent specification of process models that can then be transformed into the language relevant for the domain-specific context [zur Muehlen, 1999]. The exchange aspect is also addressed by the WfMC's approach to process definition interchange [WfMC-I1, 2002]: usage of a meta-model (in their case the WfMC's XPDL) as a common interchange standard that enables products to continue to support arbitrary internal representations of process definitions, with an import/export function to map to/from the standard at the product boundary. A variety of different mechanisms may be used to transfer process definition data between systems according to the characteristics of the various business scenarios. In all cases the process definition must be expressed in a consistent form, which is derived from the common set of objects, relationships and attributes expressing its underlying concepts. The principles of process definition interchange are illustrated in Figure 55.

Figure 55 Process definition exchange

Among the most prominent meta modeling languages are:
- WfMC Workflow Process Definition Interface (WPDL / XPDL)
- Process Specification Language PSL / Process Interchange Format PIF
- Unified Modeling Language UML
- Business Process Modeling Notation BPMN
- ICAM Definition Languages IDEF


(for detailed descriptions see the following chapters)

Remark: [zur Muehlen and Becker, 1999] provide a comparative study of WPDL, PIF, PSL, GPSG and UML; note, however, that things have changed since 1999, e.g. WPDL has been replaced by XPDL, and PIF and PSL have been merged.

In recent years a number of Web Services-based business process definition languages / interfaces have emerged, e.g.:
- XLANG
- WSFL
- WSCI
- BPEL4WS

For details on these topics see Part III of this report on Service oriented computing.

Remark: http://xml.coverpages.org/bpm.html (a technology report on CoverPages) provides an extensive overview of standards for business process modeling, collaboration, and choreography (especially Web Services related specifications and standards).

VI.3.2.2.1 XPDL

A well-established meta-language is the Workflow Process Definition Interface (Interface 1) of the WfMC's Workflow Meta Model [WfMC-I1, 2002]. It defines the XML Process Definition Language XPDL (which quite recently replaced the Workflow Process Definition Language WPDL). The document describes two minimum meta-models, which can be extended by vendor-specific implementations: the process meta-model and the package meta-model.

Remark: For an overview of the WfMC and their Meta Model and interfaces see Chapter 2.5.3.

Remark: http://www.wfmc.org/standards/conformance.htm (WfMC conformance statement) provides a list of vendors and products which support the WfMC interfaces.

Process Meta-Model

The meta-model identifies the basic set of entities and attributes for the exchange of process definitions. For a process definition the following entities must be defined, either explicitly at the level of the process definition, or by inheritance directly or via cross reference from a surrounding package:
- Workflow Process Activity
- Transition Information
- Workflow Participant Specification
- Workflow Application Declaration
- Workflow Relevant Data

These entities contain attributes that support a common description mechanism for processes.
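To illustrate the import side of such an interchange, the sketch below reads a much-simplified, XPDL-like process definition. Note that the element names are illustrative stand-ins chosen for brevity and do not follow the actual XPDL schema.

```python
# Illustrative sketch of importing a simplified, XPDL-like process definition.
# Element names are stand-ins and do NOT follow the actual XPDL schema.
import xml.etree.ElementTree as ET

DEFINITION = """
<WorkflowProcess Id="claim_handling">
  <Activity Id="receive_claim"/>
  <Activity Id="assess_claim"/>
  <Transition From="receive_claim" To="assess_claim"/>
</WorkflowProcess>
"""

root = ET.fromstring(DEFINITION)
# Collect activities and the transitions that connect them.
activities = [a.get("Id") for a in root.findall("Activity")]
transitions = [(t.get("From"), t.get("To")) for t in root.findall("Transition")]
print(activities)    # ['receive_claim', 'assess_claim']
print(transitions)   # [('receive_claim', 'assess_claim')]
```

A real import function would map these entities onto the product's internal representation, as described in the WfMC interchange approach above.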


Figure 56 Process meta model

Package Meta-Model

Multiple process definitions are bound together in a model definition. The Package acts as a container for grouping together a number of individual process definitions and associated entity data, which is applicable to all the contained process definitions (and hence requires definition only once). The Package meta-model contains the following entity types:
- Workflow Process Definition
- Workflow Participant Specification
- Workflow Application Declaration
- Workflow Relevant Data

The meta-model for the Package (see Figure 57) identifies the entities and attributes for the exchange, or storage, of process models. It defines various rules of inheritance to associate an individual process definition with entity definitions for participant specification, application declaration and workflow relevant data, which may be defined at the package level rather than at the level of individual process definitions. The Package Definition allows the specification of a number of common process definition attributes, which will then apply to all individual process definitions contained within the package. Such attributes may then be omitted from the individual process definitions. (If they are re-specified at the level of an individual process definition, this local attribute value takes precedence over the global value defined at the package level.)
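The precedence rule just described, where a value re-specified on an individual process definition overrides the package-level value, amounts to a simple two-level lookup. The sketch below is illustrative only, with invented attribute names.

```python
# Illustrative sketch of the package-level attribute precedence rule: a value
# re-specified on an individual process definition overrides the value defined
# at the package level. Attribute names are invented.

def effective_attribute(name, process_attrs, package_attrs):
    """Local (process-level) value wins; otherwise fall back to the package."""
    if name in process_attrs:
        return process_attrs[name]
    return package_attrs.get(name)

package_attrs = {"Author": "ACME", "Priority": "normal"}
process_attrs = {"Priority": "high"}   # local override of one attribute

print(effective_attribute("Priority", process_attrs, package_attrs))  # high
print(effective_attribute("Author", process_attrs, package_attrs))    # ACME
```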


Figure 57 Package model

VI.3.2.2.2 PSL / PIF

Remark: The PIF (Process Interchange Format) Project has recently been merged with the PSL (Process Specification Language) Project at NIST. The PIF CORE and its extensions have been incorporated into the PSL CORE and its extensions. For detailed information see http://www.mel.nist.gov/psl. The following text is an excerpt from [zur Muehlen and Becker, 1999].

PIF

The Process Interchange Format (PIF) was developed as a standardized language for the processes recorded in the MIT Process Handbook project [Malone et al., 1993]. The Process Handbook project is targeted at the collection of representative business processes from different organizations and the presentation of these processes in order to facilitate the comparison and selection of alternative processes in actual business situations. Its main purpose is the support of organizations seeking the redesign of existing processes and the support of new processes that emerge due to technological support. Within the PIF approach, processes are represented at various levels of abstraction, derived from the object-oriented concept of inheritance and dependency management as in coordination theory. The creators of PIF describe the main advantage of the concept as follows: it "allows users to explicitly represent the similarities (and differences) among related processes and to easily find or generate sensible alternatives for how a given process could be performed" [Lee et al., 1998]. All constructs of the PIF Core are specified in the Knowledge Interchange Format (KIF), a language that is designed for the interchange of knowledge among separate computer systems [Genesereth, 1999]. KIF allows for an extension of existing concepts, which is important for adding user-defined extensions to the PIF language core. Furthermore, KIF is a proposed standard and has well-defined formal semantics, which simplify the process of defining the core PIF constructs.
A process description in PIF is based on a set of frame definitions. Each of these frame definitions denotes an entity type that can be instantiated (for example TIMEPOINT or ACTIVITY); these types are arranged in a hierarchy. The hierarchy of PIF core components is depicted in Figure 58. For each


type in PIF there exists a set of predefined attributes which define various aspects of instances of that type. As an example, the CREATES definition has an ACTIVITY and an OBJECT attribute, the values of which give the object(s) being created and the activity which creates the object(s). Attributes are inherited from supertypes to types as well as from types to their instances. An instance of the ACTIVITY frame definition, for example, contains the attribute Name because the type ACTIVITY inherits this attribute from its supertype ENTITY. The value of an attribute within one frame may refer to another frame; in this way, relationships between the instances of these frames can be represented. The Process Interchange Format is a powerful exchange platform for process models. Due to its modular design it can easily be extended to accommodate the needs of workflow process modeling.
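The frame-type hierarchy with attribute inheritance described above can be sketched as follows. The type and attribute names (ENTITY, ACTIVITY, CREATES, Name) follow the description in the text, while the code itself is an illustrative sketch, not PIF/KIF syntax.

```python
# Illustrative sketch (not PIF/KIF syntax) of frame definitions whose
# attributes are inherited down the type hierarchy. Type and attribute names
# (ENTITY, ACTIVITY, CREATES, Name) follow the description in the text.

class Frame:
    def __init__(self, name, supertype=None, attributes=()):
        self.name, self.supertype = name, supertype
        self.own_attributes = set(attributes)

    def attributes(self):
        """Own attributes plus everything inherited from the supertype chain."""
        inherited = self.supertype.attributes() if self.supertype else set()
        return inherited | self.own_attributes

ENTITY = Frame("ENTITY", attributes={"Name"})
ACTIVITY = Frame("ACTIVITY", supertype=ENTITY)
CREATES = Frame("CREATES", supertype=ENTITY, attributes={"Activity", "Object"})

print(sorted(ACTIVITY.attributes()))  # ['Name'] -- inherited from ENTITY
print(sorted(CREATES.attributes()))   # ['Activity', 'Name', 'Object']
```

Instances would inherit these attributes from their type in the same way, one level further down.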

Figure 58 Hierarchy of PIF core components

The PIF working group is exchanging ideas with the Workflow Management Coalition about making PIF and WPDL interoperable, thus paving the way for a unified interchange format.

PSL

The Process Specification Language (PSL) is funded by the National Institute of Standards and Technology (NIST). The aim of this project is the development of a common exchange format for production enterprises that is independent of existing applications and robust enough to represent the necessary process information for any kind of application. The ultimate goal is the support of communication between different applications based on a common understanding of their environment. PSL is designed to become an exchange format for process data.

Remark: PSL's primary role is not envisioned to be a process modeling meta-language; it is rather an interchange language which allows manufacturing applications to exchange discrete process data. For example, an IDEF3-based application could use PSL to exchange process models with a Petri-net-based application.

The core concept of PSL for the mapping between two application programs is first the mapping of each application's modeling ontology to the PSL ontology. Following this, the source application process can be represented using the Knowledge Interchange Format and transformed into a process


that conforms to the PSL ontology. From this intermediate process a target process using the target application's ontology can be created using KIF, which in turn can be imported into the target application. Another aspect that makes PSL unique is its underlying formal ontology. All concepts in PSL are formally defined, using the Knowledge Interchange Format (KIF), to eliminate the ambiguity usually encountered when exchanging information among disparate applications. This ontology provides the backbone that enables and ensures correct translations. The basic components of PSL are:
- Activities. These can be generic activities, for example deterministic or nondeterministic procedures, as well as ordering functions over activities, such as creates- or precedes-relationships.
- Objects. These can be either resources, such as human resources or machines, or states, such as pre- or post-activity states.
- Timepoints. These can be used to describe the temporal relationships between activities or the durations of procedures.

Similar to the PIF framework, all concepts of PSL are specified using the Knowledge Interchange Format (KIF).

VI.3.2.2.3 UML

The UML (Unified Modeling Language) was originally designed by and for software engineers. It is derived from a set of widely accepted and used concepts (Booch, OMT, etc.) and is by now a common standard for object-oriented modeling. Modeling of business processes and workflows is an important area in software engineering, and, given that it typically occurs very early in a project, it is one of those areas where model-driven approaches definitely have a competitive edge over code-driven approaches. "Activity diagrams have been introduced into the UML rather late. They have since been considered mainly as a work-flow definition language, but it is also the natural choice when it comes to modeling web services, and plays an important role in specifying system level behaviors." [Störrle, 2004]
Although it is stated in the UML 1.5 specification that the modeling of processes is not in the (intended) scope of UML (see Unified Modeling Language Specification Version 1.5, page 1-8), UML 1.5 explicitly features activity diagrams (ADs) for business process modeling. Nevertheless, [Odeh et al., 2002] state that the appraisal of UML ADs as a process modeling language ranges from "great" to "there is a lot more to business modeling than this", and that ADs do not yet present the clear focus on modeling business processes found in other approaches like RAD (Role Activity Diagrams). Usually a combination of different UML diagram types is necessary to cover all (static and dynamic) aspects of a business process [Nüttgens, 1998][zur Mühlen and Becker, 1999]:
- Use Case Diagrams. Use cases denote only the static relationship between actors and system functionality, but do not describe the temporal or logical sequence of process steps.


- Sequence Diagrams. These diagrams depict the temporal and logical order of activities and involved participants in a swim-lane-style notation.
- Collaboration Diagrams. Within collaboration diagrams the interactions between actors and use cases are described in terms of the messages that are sent between the different elements of the diagram. Collaboration diagrams can be seen as an extension of use case diagrams because they allow for an ordering of messages as well as for directed relationships.
- Statechart Diagrams. A statechart diagram shows all possible states of a use case and the transitions between these states. Used in the context of workflow management, a statechart can be used to depict the possible starting and ending points of a workflow model as well as the legal transitions between states.
- Activity Diagrams. Activity diagrams are variations of statecharts that display all possible paths of action between activities. While statecharts may contain passive states, activity diagrams depict relationships between activities. The transition between two activities is only active if the preceding activity has finished and an optional guard constraint at the transition evaluates to true. Modeling elements allow for parallel branches as well as alternatives between activities.
- Class Diagrams. Although class diagrams describe the static structure of an information system, processes give a lot of information on objects, their structure and their relationships.

[Nüttgens, 1998][zur Mühlen and Becker, 1999][Störrle, 2004] also list shortcomings of UML with respect to business process modelling:
- Exception handling and event handling
- Process hierarchy and expansion nodes
- Modeling of resources and their relationship to the process
- Insufficient differentiation between data- and control-flow in an activity diagram

From a business modeling point of view, the emerging UML 2.0 is supposed to represent a significant update to UML 1.5.
[Störrle, 2004] states: "Compared to UML 1.5, the concrete syntax of Activity Diagrams has remained mostly the same concerning control flow. Everything else, however, has changed dramatically. The changes affect the concrete syntax of data flows, all of the abstract syntax, and, particularly, the semantics: while in UML 1.5, Activity Diagrams have been defined as a kind of State Machine Diagrams (ActivityGraph used to be a subclass of StateMachine in the Metamodel), there is now no such connection between the two: Activity replaces ActivityGraph in UML 1.5." The standard claims that Activities are "redesigned to use a Petri-like semantics instead of state machines".

VI.3.2.3 Role Activity Diagrams

Role Activity Diagrams originated from the study of coordination (see [Holt et al., 1983]). Designed originally for business process modelling, they fit the task of modelling components which realise business rules. The focus of RADs is on processes which involve the coordination of inter-related activities carried out by people in organizations using a variety of tools. The notation supports four foundation classes:
- Roles, which represent the individual roles in a process;
- Actions, individual activities or actions carried out by a role;
- Entities, data and structures, plus collections of entities and tables of entities; and
- Interactions, which allow roles to communicate by object passing.
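In RAD, each role behaves as a small state machine, and an interaction is in effect a transition shared between the roles involved: it can only fire when both roles are in the matching states. A minimal, illustrative sketch (all role and state names are invented):

```python
# Illustrative sketch of RAD roles as finite-state machines that synchronise
# through an interaction, i.e. a transition shared between two roles which
# fires only when both are in the matching pre-states. All names are invented.

class Role:
    def __init__(self, name, state):
        self.name, self.state = name, state

def interact(a, a_from, a_to, b, b_from, b_to):
    """Fire the shared transition if both roles are in their pre-states."""
    if a.state == a_from and b.state == b_from:
        a.state, b.state = a_to, b_to
        return True
    return False

client = Role("Client", "claim_ready")
insurer = Role("Insurer", "waiting")

fired = interact(client, "claim_ready", "awaiting_decision",
                 insurer, "waiting", "assessing")
print(fired, client.state, insurer.state)
# True awaiting_decision assessing
```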


Concurrent behaviour is modelled by giving a finite-state model for each Role and by allowing Roles to synchronise by letting them share transitions [Ould, 1995][Murdoch and McDermid, 2000]. RADs mix the organisational aspect (roles) and the behavioural aspect. The informational aspect is addressed by separate entity models. Although intuitively simple and easy to read, Role Activity Diagrams use a small number of notational primitives to express complex process behaviour (concurrent and parallel threads, choices, iterations), which can in some cases be an inconvenience. RADs have a graphical and rather simplistic notation (there are a number of concepts that are not defined in RAD, e.g. functions, actors, flows). Role Activity Diagrams are part of the STRIM (Systematic Technique for Role and Interaction Modelling) method [Ould, 1995]. STRIM encompasses the cooperation of RADs, an entity (data) model and a methodical approach. STRIM can be used in combination with certain workflow management products.

VI.3.2.4 IDEF

IDEF is the name of a family of languages used to perform enterprise modelling and analysis (see http://www.idef.com/ and [Mayer et al., 1995], [IDEF, 1993], [Menzel and Mayer, 1998]). The IDEF (Integrated Computer-Aided Manufacturing (ICAM) DEFinition) group of methods has a military background: originally, the methods were developed by the US Air Force Program for Integrated Computer Aided Manufacturing (ICAM). The number of participants in the meetings of the IDEF user group is evidence of the widespread usage of IDEF. Currently, there are 16 IDEF methods. Of these methods, IDEF0, IDEF3, and IDEF1X (the core) are the most commonly used. Their scope covers:
- Functional modelling - IDEF0: The idea behind IDEF0 is to model the elements controlling the execution of a function, the actors performing the function, the objects or data consumed and produced by the function, and the relationships between business functions (shared resources and dependencies).
- Process modelling - IDEF3: IDEF3 captures the workflow of a business process via process flow diagrams. These show the task sequence for processes performed by the organisation and the decision logic, describe different scenarios for performing the same business functions, and enable the analysis and improvement of the workflow.
- Data modelling - IDEF1X: IDEF1X is used to create logical data models and physical data models by means of a logical model diagram, multiple IDEF1X logical subject area diagrams, and multiple physical diagrams.

There are five elements to the IDEF0 functional model (see Figure 59): the activity (or process), represented by boxes; inputs; outputs; constraints or controls on the activities; and mechanisms that carry out the activity. The input, control, output and mechanism arrows are also referred to as ICOMs. Each activity and its ICOMs can be decomposed (or exploded) into more detailed levels of analysis. The decomposition mechanism is also indicated as a modelling technique for units of behaviour in IDEF3.
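An IDEF0 activity box with its ICOM arrows and its decomposition into child activities can be sketched as a simple data structure. All activity and arrow labels below are invented for illustration.

```python
# Illustrative sketch of an IDEF0 activity box with its ICOM arrows (Inputs,
# Controls, Outputs, Mechanisms) and decomposition into more detailed child
# activities. All activity and arrow labels are invented.

class Activity:
    def __init__(self, name, inputs=(), controls=(), outputs=(), mechanisms=()):
        self.name = name
        self.icom = {"inputs": list(inputs), "controls": list(controls),
                     "outputs": list(outputs), "mechanisms": list(mechanisms)}
        self.children = []   # decomposition ("explosion") of this activity

    def decompose(self, *children):
        self.children.extend(children)
        return self

a0 = Activity("Process order",
              inputs=["order"], controls=["pricing policy"],
              outputs=["invoice"], mechanisms=["clerk"])
a0.decompose(Activity("Check order"), Activity("Issue invoice"))

print(a0.icom["controls"], [c.name for c in a0.children])
# ['pricing policy'] ['Check order', 'Issue invoice']
```

Each child activity can in turn be decomposed, giving the levelled structure characteristic of IDEF0 models.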


Figure 59 IDEF0 representation

The IDEF3 Process Description Capture Method provides a mechanism for collecting and documenting processes. There are two IDEF3 description modes: process flow diagrams and object state transition network diagrams. A process flow description captures knowledge of "how things work" in an organization, e.g., the description of what happens to a part as it flows through a sequence of manufacturing processes. The object state transition network description summarises the allowable transitions an object may undergo throughout a particular process. The IDEF3 term for the elements represented by boxes is Unit Of Behaviour (UOB). The arrows (links) tie the boxes (activities) together and define the logical flows. The smaller boxes define junctions that provide a mechanism for introducing logic into the flows. Object state transition network (OSTN) diagrams capture object-centred views of processes, which cut across the process diagrams and summarise the allowable transitions. Object states and state transition arcs are the key elements of an OSTN diagram. In OSTN diagrams, object states are represented by circles, and state transition arcs are represented by the lines connecting the circles. Other main concepts are: strong transitions, conditions, transition junctions, and elaborations. The notation used in IDEF0 and IDEF3 models is graphical. It appears that a disadvantage of IDEF is the visual appearance of IDEF diagrams (especially the IDEF0 diagrams). Presley and Liles (1995) mention that they have encountered expressions of aversion from some reviewers and end users when first presented with an IDEF0 diagram: "The network of boxes and arrows, along with the size of some models, can cause many users to reject the model. In our experience, most will overcome this initial reaction if the modelling syntax is explained to them." Moreover, they state that beginner modellers might need preliminary training.
The IDEF family provides support for the modelling of several architectural views. However, there are no communication mechanisms between the models. The fact that they are isolated hinders the visualisation of all models as interrelated elements of an architectural system. This also means that a switch between views is not possible. IDEF is widely used in industry, which indicates that it satisfies the needs of its users within acceptable limits. The IDEF family is subject to a continuous process of development and improvement. Still, IDEF0, IDEF1X and IDEF3 are rather stable and rigid languages, and IDEF0 and IDEF1X have been published as standards of the National Institute of Standards and Technology.


VI.3.2.5 Business Process Modelling Notation (BPMN)

The Business Process Modelling Notation (BPMN) is one of the standards being developed by the Business Process Management Initiative (BPMI). BPMI is a not-for-profit organisation which states as its goals: (1) the specification of open standards for process design, and (2) the support of suppliers and users of business process management techniques and tools. Many organisations involved in business process modelling and management participate in the BPMI activities. Other developments by BPMI include the Business Process Modelling Language (BPML), an XML-based meta-language for the exchange of business process models, and BPQL, a business process query language. The BPMN standard [BPMN, 2003] specifies a graphical notation that is to serve as a common basis for a variety of business process modelling and execution languages. Mappings from BPMN to, among others, BPML and BPEL4WS (Business Process Execution Language for Web Services) have been defined. Version 1.0 of the BPMN specification appeared in August 2003. Examples of business process notations that have been reviewed for it are UML Activity Diagrams, UML EDOC Business Processes, IDEF, ebXML BPSS, Activity-Decision Flow (ADF) Diagrams, RosettaNet, LOVeM and Event Process Chains (EPCs). As the name already indicates, BPMN is restricted to business-level models, with a strong emphasis on process modelling; applications or infrastructure are not covered by the language. The main purpose of BPMN is to provide a uniform notation for modelling business processes in terms of activities and their relationships.

[Figure content: a BPMN diagram with pools for the Client (submit claim, receive decision) and ArchiSurance (Registration, Acceptance, Valuation, Payment), connected by the message flows "1) send damage form" and "2) send notification" concerning a Claim.]

Figure 60 Example model in BPMN

Currently, BPMN only defines a concrete syntax, i.e., a uniform (graphical) notation for business process modelling concepts. However, there is a formal mapping to the XML-based business process execution language BPEL4WS. A formal metamodel for BPMN does not (yet) exist.

VI.3.2.6 BizzDesign Tool and Language

VI.3.2.6.1 BizzDesigner

BizzDesigner, the successor of the tool formerly known as Testbed Studio, is an integrated toolset for the design, analysis, and management of business process models. It originated from the Testbed


project of the Telematica Instituut and is commercially available from BiZZdesign BV (http://www.bizzdesign.com). BizzDesigner is a process, organisation and data modelling tool. It offers its own repository for model management and has limited functionality for information modelling. Basic model operations are straightforward; copy/paste, grouping/ungrouping etc. can easily be done within one model. Copy/paste across models can be accomplished via the library/repository. Merging/splitting is not supported. Import of models is not supported; export can be done to the Cosa and Staffware workflow tools. However, the internal format that Testbed uses is XML. The results of some analyses can be exported to Excel. In addition, reporting to HTML and Word formats is possible. Views are used to generate feature overviews of models. For instance, colour views emphasise certain aspects of the model. Other views generated by Testbed Studio visualise precedence relations, dataflow, or the assignment of behaviour to actors. Also, structural transformations can illustrate different aspects of a model structure: e.g., an organigram shows the hierarchical structure of an organisation. A powerful concept is that of process lanes. A business process model can be automatically structured with respect to any attribute. For example, the actions can be structured into a block (sub-process) for every actor involved, showing the change in responsibility in a process, e.g. to reveal the handovers between different organisations. Alternative process lane structures are for example based on the business function associated with an activity, or on whether the activity belongs to the primary or secondary part of the process.

VI.3.2.6.2 Methodology

BizzDesigner is accompanied by a methodology (see [BizzDesign, 2000]) that leads the user in a stepwise (waterfall) manner through the redesign process. The main phases identified by the method are innovation, analysis, redesign and migration.
They are all supported by a number of modelling and analysis components enclosed in the tool (simulation, workflow-management systems, case-development environment, process documenting tool, etc.).

VI.3.2.6.3 The Amber Language

Amber, the language used in BizzDesigner, was developed by the Telematica Instituut. In Amber the focus is on processes in a single organisation, particularly from the financial sector. A variation of Amber, called NEML (Networked Enterprise Modelling Language), targeting inter-organisational e-business processes in networks of organisations, has also been proposed [Steen et al., 2002]. Amber is mostly suited for business consultants and intended for business process and organisation modelling; consequently, it lacks the architectural perspective of information systems and the concepts related to this. Amber recognises three aspect domains:
- the actor domain, which describes the resources for carrying out business activities;
- the behaviour domain, which describes the business processes performed by the resources;
- the item domain, which describes the data objects handled by business processes.

Amber is a graphical language. Figure 61 shows a simple example of a business process model in Amber.

Page 217 of 366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[Figure: Amber model with the tasks receive claim, process claim, pay, reject claim and submit notification, and the items Claim, Notification of rejection, Notification of acceptance and Payment]
Figure 61: Example of a business process model in Testbed

The three domains in Amber can also be seen as specific types of viewpoints. It is important to note that a complete model always contains representations of all these domains. Moreover, these representations are not isolated from each other; they communicate via several mechanisms. Amber has a formal description of its metamodel [Eertink et al., 1999] in UML notation. Furthermore, each concept is separately defined, using the same UML formalism. Apart from this, process models are endowed with a number of operational semantics, serving different purposes such as stepwise simulation, model checking, and quantitative analysis.

VI.3.2.7 ARIS
ARIS ("Architecture of Integrated Information Systems", [Scheer, 1994b]) is a well-known approach to enterprise modelling. Although ARIS started as the academic research of Prof. A.W. Scheer, it now has an explicit industrial background. It is not a standard, but it sells very well and is therefore widespread: IDS Scheer AG has sold over 30,000 ARIS licences worldwide. In addition to the high-level architectural framework, ARIS is a business modelling method, which is supported by a software tool (the ARIS Toolset). ARIS is intended to serve various purposes: documentation of existing business process types, a blueprint for analysing and designing business processes, and support for the design of information systems. The tool is intended for system designers.

VI.3.2.7.1 ARIS Toolset
Although the ARIS toolset started as part of the academic research work of Prof. A.W. Scheer [Scheer, 1992], it has since evolved into a very successful commercial tool for enterprise modelling (http://www.ids-scheer.com/countries/corporate/english/). ARIS provides a repository system that is realised via a database server. Databases are used for the storage of models and objects.
As far as security aspects are concerned, the tool controls access to the databases using access privileges for users and user groups. The ARIS Model Generation Wizard offers two possible procedures: using existing models to generate a new model, or directly selecting individual objects to generate a new model. Selecting, inverting the selection, copying, pasting, splitting and merging of models, parts of models and objects are possible in ARIS. Moreover, the ARIS Merge component supports combining contents from different ARIS databases or distributing the contents of a database among different databases. With respect to import/export functionality, ARIS Export/Import allows the export of data to an ASCII file and the import of data from such a file. This is used for the translation of texts using ASCII files, and to make it possible


that other applications can access ARIS data via an interface. The domains covered by ARIS correspond to the areas of the ARIS House (see [Scheer, 1992]), namely the organisation view, data view, process view, and function view. Each of these views is supported by specific types of models, according to the phase of architecture development (requirements definition, design specification and implementation). The modelling language behind ARIS is EPC (event-driven process chains). The tool-specific language is described below. ARIS provides some support for qualitative and quantitative analysis, namely for process performance and strategic performance measurement (ARIS BSC, cost/time analysis, detection of process weaknesses and bottlenecks etc.), process risk assessment, quality management and dynamic process simulation/testing. ARIS also supports UML. Via the report functionality, ARIS can export UML class diagram models to XML files in UML XMI notation. Such a file can then be imported into the Rational Rose 2000 environment (or any other tool supporting XMI) using its XMI interface. For each selected model a separate export file is created. Direct vertical interoperability is possible with Excel and SAP R/3 (only for import of data).

VI.3.2.7.2 ARIS Methodology
As we said, ARIS has an elaborate decomposition of an enterprise into several views: the data view, function view, organisation view and, to realise the connection between these views, the control view. In the organisation view the relations between enterprise units and their classification into the organisational hierarchy are modelled. The data view describes objects, their attributes and inter-object relations. Furthermore, the data view contains events that can initiate and control processes. The function view embodies functions that are part of processes and determined through the creation and change of objects and events. A complex function can be decomposed into more elementary ones.
The product/service view focuses on the customer and the products/services delivered to them. The task of the control view is, besides the integration of the first four views, the definition of the dynamic aspects. The most important entities here are functions and events, which are linked together to form the so-called event-driven process chain (EPC). The EPC models the control flow of the business process. It can be (and usually is) extended by links to other relevant entities contributed by other views. Thus functions can be connected to their input and output data, which are located in the data view, to model the data flow. As mentioned above, each of these views is described at different levels of abstraction. The starting point is always the managerial/economic description of the enterprise domain, the requirements definition. These concepts are formulated in business terms but are strongly influenced by technical possibilities. The resulting models of this first level are laid down in semi-formal diagrams. The design specification is also modelled semi-formally, but uses terms of the envisioned information systems solution (i.e. it speaks about modules and transactions). The last part consists of the physical implementation description of the upper levels.


[Figure: the five ARIS views (organisation, data, control, function, product/service), each refined into requirements definition, design specification and implementation description]
Figure 62: The ARIS house

VI.3.2.7.3 ARIS Language
To model business processes within an enterprise model, ARIS provides a modelling language known as event-driven process chains (EPCs). An EPC is an ordered graph of events and functions. It provides various connectors that allow alternative and parallel execution of processes. The main concepts defined in ARIS are: event, function, control flow, logical operator, organisational unit, interaction, output flow, environmental data, output, human output, message, goal, machine, computer hardware and application software.
[Figure: example EPC for manufacturing an item, with events such as "(supplier) order processed" and "item completed", the function "manufacture item", resources (operator, work station, PPC system, machine, control CPU, shop floor), data (manufacturing plan, material, work schedule, order documents) and the goal "high quality"; the legend covers the event, function, message, environmental data, output, organizational unit, goal, machine, computer hardware, application software and logical operator (AND) symbols, plus control, information, organization/resource and output flows. PPC = Production Planning and Control]

Figure 63: Events, functions and control flows in ARIS


The ARIS Toolset includes various editors that can be used to design and edit several types of diagrams. The most important are value added chain diagrams, organisational charts, interaction diagrams, function trees, and Event-driven Process Chains (EPCs). While there is a formal definition of the syntax of EPCs, EPCs lack a precise definition of their semantics. The semantics of EPCs is given only roughly (in a verbal form) in the original publication by Scheer [Scheer 1992]. A comprehensive discussion of the semantic shortcomings of EPCs can be found in [Rittgen, 2000]. This is also the case for corresponding object models which are specified in a rudimentary metamodel. For this reason, ARIS lacks a solid formal foundation and is of limited use for the design of (application) architectures. The graphical notation of ARIS is unambiguous and easy to understand and use. While ARIS allows for various perspectives on the enterprise (the data view, the control view, the process/function view and the organisation view), the integration of these aspects remains on a low level. Therefore, the tool does not guarantee the overall integrity of interrelated models. The tailorability of ARIS is limited to business modelling, and more precisely to organisational, functional and process modelling. It is very well suited for large models. ARIS is not extensible. VI.3.2.8 MEMO MEMO is a tool supported and object oriented methodology for the analysis and (re-) design of business information systems (http://www.uni-koblenz.de/~iwi/EM/MEMO/index.html). It is based on a set of modelling languages supplemented with heuristics and techniques (most of them originating from strategic enterprise planning and organisational analysis/design). MEMO proposes three object-oriented modelling languages: MEMO-OML, MEMO-OrgML and MEMO-SML. The graphical notation and the main concepts are common to all languages. 
VI.3.2.8.1 MEMO Methodology
MEMO also provides a method for enterprise modelling that offers a set of specialised visual modelling languages together with a process model as well as techniques and heuristics to support problem-specific analysis and design. The languages allow the modelling of various interrelated aspects of an enterprise. They are integrated on a high semantic level. MEMO models have two goals: they are an instrument for developing information systems that are well integrated with a company's strategy and its organisation, and they can be used as the foundation of an enterprise schema, whose instantiation would allow for a permanent representation of all relevant aspects of an enterprise (strategy, business processes, organisational structure, business entities, business rules etc.). The methodology is endowed with an iterative lifecycle, depicted in the figure below.


[Figure: iterative lifecycle with the phases Feasibility Study, Strategic Analysis & Design, Organisation Analysis & Design, Information Analysis, OO Design, Implementation, Test, Introduction and Maintenance]

Figure 64: MEMO lifecycle model

MEMO distinguishes three so-called perspectives (strategy, organisation and information system), each of which is structured by four aspects: structure, process, resources and goals. A particular aspect within a perspective is called a focus (for instance, process within the information system perspective), and one or more foci correspond to a particular model. Also, four types of domain analysis are described in MEMO: the Feasibility Study, Strategic Analysis and (Re-)Design, Organisational Analysis and (Re-)Design, and Information Analysis.

VI.3.2.8.2 MEMO Languages
MEMO proposes three object-oriented modelling languages: MEMO-OML, MEMO-OrgML and MEMO-SML. The graphical notation and the main concepts are common to all languages. MEMO-OrgML and MEMO-SML are particularly relevant for business process modelling. More precisely, MEMO-OrgML was designed to model a company's organisation. For this purpose it provides concepts to describe the organisational structure, business processes and the resources (such as machinery and personnel) that are required to perform the business processes. The key concepts offered by MEMO-OrgML are ProcessType, ProcessUse, ContextOfProcessUse, InputSpec, OutputSpec and Event. MEMO-SML (Strategy Modelling Language) uses the following concepts: abstract strategy, abstract and total value chain, activity, business unit etc. The MEMO notation is graphical, resembling UML notation. In MEMO there is a clear separation between the modelling languages and the visual appearance of the models. The latter is taken care of by the so-called MEMO Center, which is basically a user interface that, among other things, provides navigation, simulation and retrieval mechanisms as well as user-friendly diagramming tools. The diagrams (and the various accompanying textual editors) that the user can design while using the MEMO Center are thus lightweight counterparts of the graphical notation used by the MEMO languages.
Their sole purpose is to ease the creation and understanding of the models and to make their appearance more pleasant. The MEMO modelling languages are highly integrated and support multiple views. The integration feature (carried out by the MEMO Center) permits communication between models created in the different languages of MEMO, and ensures the overall integrity of the enterprise architecture. MEMO has solid formal support: a meta-metamodel defining the meta-language used to specify each of the MEMO modelling languages is provided. The integration of the MEMO languages is


achieved via the sharing of common concepts, and each of the MEMO languages has its own metamodel with formal semantics for its concepts. MEMO is not commercially available and its use is limited primarily to scientific research purposes (although it has been applied in real-life cases).

VI.3.2.9 ArchiMate
The ArchiMate project (http://archimate.telin.nl) is a Dutch research initiative that aims to provide concepts and techniques to support enterprise architects in the visualisation, communication and analysis of integrated architectures. The results of the project have been validated extensively in practical cases in the financial sector. The development of an enterprise architecture language that depicts various architectural domains and their relations forms the core of the project (see [Jonkers et al., 2004]). In contrast to languages for models within a domain (e.g., UML for modelling applications and the technical infrastructure, or BPMN for modelling business processes), ArchiMate describes the elements of an enterprise at a relatively high level of abstraction, and pays particular attention to the relations between these elements. It facilitates the modelling of: the global structure within each domain, showing the main elements and their dependencies, in a way that is easy to understand for non-experts of the domain; and the relations between the domains. ArchiMate also focuses on bringing together more detailed models expressed in existing languages and integrating these at the appropriate level of abstraction. The language is supported by an integrated enterprise architecture workbench (currently available as a prototype) that allows for the integration of ArchiMate models with models expressed in other languages such as UML or BPMN. The ArchiMate language acts as an umbrella language, facilitating interoperability among the various types of languages and tools that exist in this field [Lankhorst et al., 2004].
The language covers the business layer of an enterprise (e.g., the organisational structure and business processes), the application layer (e.g., application components) and the technical infrastructure layer (e.g., devices and networks), as well as the relations between these layers. These relations model the vertical interoperability within an enterprise, i.e., the alignment between business processes and supporting applications, and between applications and technical infrastructure. Furthermore, the language distinguishes the structural, behavioural and informational aspects within each layer.
[Figure: a framework crossing the business, application and technology layers with the information, behaviour and structure aspects, yielding the information, data, product, process, organisation, application and technical infrastructure domains]

Figure 65: The ArchiMate framework

The business layer of the ArchiMate language appears to be the most relevant within the scope of this part. The structure aspect at the business layer refers to the organisation structure, in terms of the


actors that make up the organisation and their relationships. The central structural concept is the business actor: an active entity that performs behaviour (i.e., the subject of behaviour). A business actor may be an individual person (e.g., a customer or an employee), but also a group of people and resources that have a permanent (or at least long-term) status within the organisation. Typical examples of the latter are a department and a business unit. Other ArchiMate concepts that belong to the structure aspect of the business layer are role, object, collaboration, interface etc. Service orientation supports current trends such as the service-based network economy and ICT integration with Web services. This is also the case in the ArchiMate language, where the service concept plays a central role in the behaviour aspect of the business layer. A service is defined as a unit of functionality that some entity (e.g., an organisation or department) makes available to its environment, and which has some value for certain entities in that environment. In other words, the business layer offers products and services to external customers, which are realised in the organisation by business processes performed by business actors. Besides organisational services, this aspect also comprises concepts such as business process, business function, business activity and business interaction.
[Figure: insurance example with the business actors Client and ArchiSurance playing the business roles Insurant and Insurer; organisational services (claim registration, customer information, claims payment) realised by the damage claiming process with the steps Registration, Acceptance, Valuation and Payment; business objects such as Invoice; relations shown include assignment, used-by, realisation, access and triggering]

Figure 66: Example of a business layer model
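The quoted rule, that externally offered services are realised by business processes performed by business actors, can serve as a simple consistency check over such a model. A minimal sketch follows; the relation encoding is our own illustration, not the ArchiMate metamodel:

```python
# Toy ArchiMate-style business layer model: a set of offered services,
# realisation links (process, service) and assignment links (actor, process).

def check_realisation(services, realisations, assignments):
    """Return (services with no realising process,
               realising processes with no assigned actor)."""
    realised = {s for (_, s) in realisations}
    performed = {p for (_, p) in assignments}
    missing_service = sorted(set(services) - realised)
    idle_process = sorted({p for (p, _) in realisations} - performed)
    return missing_service, idle_process

services = {'claim registration', 'claims payment'}
realisations = {('damage claiming process', 'claim registration')}
assignments = {('ArchiSurance', 'damage claiming process')}
print(check_realisation(services, realisations, assignments))
# (['claims payment'], [])
```

In the example, the claims payment service has no realising process, which the check reports.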

Several types of relations are possible between instances of the above-mentioned concepts. Some of them (e.g., triggering, flow, access, realisation) can be used to convey dynamic behaviour, while others (e.g., grouping, assignment, association, aggregation and composition) express structural aspects.

VI.3.3 Verification and validation of business process models
Techniques and tools for static analysis, verification and validation of process models before deployment of business processes are necessary for ensuring the feasibility and correct behaviour of the business processes themselves. This reduces the risk of costly corrections, but can also be used for improving the processes. Analysis techniques for specification properties have a long tradition. Such techniques can be adapted to the formal analysis of business process models. Especially for workflows, the verification of control flow (process structure) and of temporal constraints is addressed in research. Data flow is another aspect, which (as far as we know) is barely addressed; see e.g. the UML-based verification in [Störrle, 2004].


VI.3.3.1 Control flow

VI.3.3.1.1 Conformance Classes
According to the WfMC [WfMC-I1, 2002] there are three conformance classes restricting the transitions (dependencies) between activities (tasks). The conformance classes are defined by the WfMC as follows:
- Non-blocked: there is no restriction for this class.
- Loop-blocked: the activities and transitions of a process definition form an acyclic graph.
- Full-blocked: for each join there is exactly one corresponding split of the same kind and vice versa. Workflows of this conformance class are also called (well-)structured workflows [Eder and Gruber, 2002].
Workflows complying with the first two classes may lead to structural conflicts such as deadlocks and lack of synchronization [Sadiq and Orlowska, 1999][Lin et al., 2002][van der Aalst, 2000]. Therefore, these workflows have to be verified to ensure their correct execution, whereas workflows of the full-blocked class are per se structurally conflict-free. Note, however, that structured workflow models are less expressive than arbitrary workflow models [Kiepuszewski et al., 1999].

VI.3.3.1.2 Structural Conflicts
[Sadiq and Orlowska, 1997] identified five types of possible structural conflicts arising from syntactical errors in a workflow model:
- Incorrect usage: e.g. a synchronizer with only one incoming flow.
- Deadlocks: e.g. synchronization of two mutually exclusive alternative paths.
- Livelocks: e.g. an iteration with no exit path.
- Unintentional multiple execution: e.g. merging of two concurrent paths.
- Active termination: e.g. concurrent paths leading to more than one final task.
To avoid these types of conflicts a workflow model must adhere to specific correctness criteria (e.g. the soundness criteria in [van der Aalst, 2000]), which can be verified automatically.

VI.3.3.1.3 Verification of structural conflicts
Current approaches to the verification of structural conflicts follow different lines:
- Petri net based solutions: especially the research group around W.
van der Aalst has provided a large body of work on Petri net based solutions (WF-nets); as a starting point see [van der Aalst, 2000][van der Aalst and ter Hofstede, 2002]. Their Petri net based verification tool Woflan [Verbeek, 2001][van der Aalst, 1998], for example, analyzes deadlock-freeness. See also http://www.daimi.au.dk/PetriNets/tools/quick.html for an overview of Petri net tools and http://www.daimi.au.dk/PetriNets/bibliographies for an extensive introduction and bibliography on Petri nets.
- Graph based solutions: [Sadiq and Orlowska, 1999][Lin et al., 2002] propose graph reduction techniques to verify the structure of graph-based models. [van der Aalst et al., 2002] shows that it is possible to transform a graph into a Petri net and use Petri net based techniques and tools to verify the model. [Sadiq and Orlowska, 1999] also implemented a tool, called FlowMake, with verification features.
- Pi-calculus based solutions.
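The deadlock conflict described above, an AND-join synchronising two mutually exclusive XOR branches, can be found by exhaustively playing the token game on the workflow graph. The sketch below is our own illustration of the idea, not code from any of the cited tools such as Woflan or FlowMake:

```python
from collections import deque

# Node types: 'task', 'xor-split', 'and-split', 'xor-join', 'and-join'.
# Tokens live on edges; a state is the frozenset of marked edges.

def enabled(n, state, preds, ntype):
    ins = [(p, n) for p in preds[n]]
    if ntype[n] == 'and-join':            # needs a token on every input
        return ins if ins and all(e in state for e in ins) else None
    ready = [e for e in ins if e in state]
    return [ready[0]] if ready else None  # consumes a single token

def fire(n, consumed, state, succs, ntype):
    base = state - frozenset(consumed)
    outs = [(n, s) for s in succs[n]]
    if ntype[n] == 'xor-split':           # choose exactly one branch
        return [base | {e} for e in outs]
    return [base | frozenset(outs)]       # tasks, joins, and-splits

def find_deadlocks(ntype, succs, preds, start):
    init = frozenset((start, s) for s in succs[start])
    seen, todo, dead = {init}, deque([init]), []
    while todo:
        state = todo.popleft()
        if not state:                     # all tokens consumed: terminated
            continue
        fired = False
        for n in ntype:
            consumed = enabled(n, state, preds, ntype)
            if consumed:
                fired = True
                for nxt in fire(n, consumed, state, succs, ntype):
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
        if not fired:                     # tokens left but nothing can fire
            dead.append(sorted(state))
    return dead

# XOR-split into two branches that an AND-join tries to synchronise:
ntype = {'start': 'task', 'X': 'xor-split', 't1': 'task', 't2': 'task',
         'J': 'and-join', 'end': 'task'}
succs = {'start': ['X'], 'X': ['t1', 't2'], 't1': ['J'], 't2': ['J'],
         'J': ['end'], 'end': []}
preds = {'start': [], 'X': ['start'], 't1': ['X'], 't2': ['X'],
         'J': ['t1', 't2'], 'end': ['J']}
print(find_deadlocks(ntype, succs, preds, 'start'))
# [[('t1', 'J')], [('t2', 'J')]]
```

The search reports two deadlocked states: whichever XOR branch is taken, the AND-join waits forever for the token on the other branch.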


VI.3.3.1.4 Temporal constraints
Dealing with time and time constraints is crucial in designing and managing business processes. At build-time, when workflow schemas are developed and defined, workflow modelers need means to represent time-related aspects of business processes (activity durations, time constraints between activities, etc.) and to check their feasibility (i.e., that the timing constraints do not contradict each other) [Eder et al., 1999]. Different approaches have been published, e.g. [Eder et al., 1999][Marjanovic and Orlowska, 2000][Combi and Pozzi, 2003]. Most of them define control flow models (mainly graph based) augmented with different types of explicit time constraints. They are called explicit because they have to be explicitly defined by process modelers. Explicit time constraints are, for example: deadlines (constraining workflow or activity durations), upper and lower bound constraints (defining maximum and minimum durations between activities) and fixed-date constraints (based on calendar dates). Based on the control flow model, the explicit time constraints and estimated activity durations, it is possible to calculate implicit (structural) time constraints, which define possible execution intervals for activities, bounded by an earliest possible start time (based on preceding activities) and a latest allowed end time (in order to meet a succeeding deadline); compare also the CPM and PERT techniques [Pozewaunig et al., 1997]. These execution intervals allow the verification of the satisfiability of time constraints, i.e., whether it is possible to find an execution in which every time constraint is satisfied.
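The calculation of implicit execution intervals is essentially a CPM-style forward/backward pass over the control flow graph. The minimal Python illustration below handles acyclic graphs with a caller-supplied topological order; it is a simplification of, not code taken from, the cited approaches:

```python
def execution_intervals(duration, succs, deadline, topo):
    """Earliest possible start and latest allowed end per activity."""
    est = {n: 0 for n in topo}                   # earliest start times
    for n in topo:                               # forward pass
        for s in succs[n]:
            est[s] = max(est[s], est[n] + duration[n])
    lae = {n: deadline for n in topo}            # latest allowed end times
    for n in reversed(topo):                     # backward pass
        for s in succs[n]:
            lae[n] = min(lae[n], lae[s] - duration[s])
    # Satisfiable iff every activity can finish before its latest allowed end.
    feasible = all(est[n] + duration[n] <= lae[n] for n in topo)
    return est, lae, feasible

# Three sequential activities with durations 2, 3 and 1:
duration = {'A': 2, 'B': 3, 'C': 1}
succs = {'A': ['B'], 'B': ['C'], 'C': []}
est, lae, ok = execution_intervals(duration, succs, 10, ['A', 'B', 'C'])
print(est['C'], lae['A'], ok)   # 5 6 True
_, _, ok = execution_intervals(duration, succs, 5, ['A', 'B', 'C'])
print(ok)                       # False (a deadline of 5 cannot be met)
```

With a deadline of 10 every interval is wide enough; tightening the deadline to 5 makes A's latest allowed end (1) earlier than its earliest finish (2), so the constraints are unsatisfiable.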
VI.3.3.2 Composition
Key dimensions of composition are [Hull et al., 2003]: the use of bounded or unbounded queues (bounded [Hoare, 1987][Milner, 1980][Kupferman, 2001]; unbounded [Brand and Zafiropulo, 1983][Jeron, 1991][Turner, 1993][Boigelot et al., 1996][Boigelot et al., 1997][Abdulla et al., 1998][Bultan et al., 2003]), the perspective of open versus closed environments, and the topology of communication between services. Many of the current proposals, like BPEL4WS, do not actually fix these aspects, which is one of the problems in building verification tools for these languages. Several standards and approaches have been proposed for specifying the composition of web services [Leymann, 2001][Thatte, 2001][Curbera et al., 2002a][Arkin, 2002][ebXML, 2002][Krishnan et al., 2002][Ankolenkar et al., 2001][Christophides et al., 2001][van der Aalst, 2003][Wohed et al., 2002][van der Aalst et al., 2002b]. The composition approaches can be grouped into the following topologies [Hull et al., 2003]:
- peer-to-peer, where independent services are coordinated into a new service using a workflow-like approach, for example with BPEL4WS service links;
- mediated, where a new service is created to coordinate the use of some existing ones in a hub-and-spoke topology; languages like BPEL4WS and BPML are applicable; and
- brokered, where a family of peers is controlled by the broker but information is exchanged between any pair of peers; approaches of this category are seen in GSFL [Krishnan et al., 2002] for scientific workflow and AZTEC [Christophides et al., 2001] for telecommunications.


Service composition analysis is basically about whether a set of services is compatible. WSDL-style signatures can be used for syntactical compatibility checking, the basic form requiring an exact match and more liberal notions requiring subtyping of message signatures. As the languages used do not necessarily support subtyping, the compatibility analysis may be intractable [Seidl, 1990][Pierce and Hosoya, 2001]. Behavioral signatures can be analyzed with either a black-box or a white-box approach. Black-box analysis techniques have been discussed in [Clarke et al., 2000][Alfaro and Henzinger, 2001][Godefroid and Long, 1996][Brand and Zafiropulo, 1983][Deutch, 1992][Boigelot et al., 1997][Ibarra, 2000][Bultan et al., 2003]; white-box approaches with Petri nets in [van der Aalst, 1998] and with pi-calculus in [Pierce and Sangiorgi, 1993][Honda et al., 1998][Gay and Hole, 2000]. Coordination approaches and languages are under development for grid computing too [Krishnan et al., 2002][Workshop, 2002][Frey et al., 2001]. More structured forms of composition include BPEL4WS [Curbera et al., 2002b], BPML [Arkin, 2002], and GSFL [Krishnan et al., 2002].

VI.4 Open Source Workflow Management Systems
According to [WFMC-TC-1011, 1999] a workflow management system creates and manages the execution of workflows through the use of software. There is a large number of available WfMSs, ranging from research prototypes through open source to commercial products. More than a dozen open source workflow projects are listed in [OS, 2004]. Two of them deserve more attention. YAWL [van der Aalst and ter Hofstede, 2004] originates in research on workflow patterns [van der Aalst and ter Hofstede, 2002] and will therefore likely have a big impact on other products. JBoss jBPM 2.0 [JBPM, 2004] is a flexible and extensible Java workflow management system, which became a part of JBoss [Jboss, 2004].
The JBoss server, as the leading open source J2EE-based application server, also has an important impact.

VI.4.1.1 YAWL
Yet Another Workflow Language (YAWL) is the answer to the limitations of existing workflow management systems and workflow languages in modelling the workflow patterns described in [van der Aalst and ter Hofstede, 2002]. The evaluation of several workflow products and relevant standards (e.g. XPDL, BPML, BPEL4WS) showed that there are considerable differences in their ability to capture control flow for non-trivial workflow processes. Theoretical models like high-level Petri nets also have problems supporting some patterns. This led to the development of a new language that provides direct support for the workflow patterns identified [van der Aalst and ter Hofstede, 2004]. As a proof of concept, a workflow management system based on YAWL was developed [van der Aalst et al., 2004]; it is available at [YAWL, 2004]. YAWL is based on Petri nets, but extends them with features to facilitate patterns involving multiple instances, advanced synchronization, and cancellation. Although YAWL originates in Petri nets, it is a completely new language with its own semantics, specifically designed for workflow specification.


Figure 67: Symbols used in YAWL (Source: [van der Aalst et al., 2004])

A workflow specification in YAWL is a set of process definitions which form a hierarchy. Tasks are either atomic tasks or composite tasks. Each composite task refers to a process definition at a lower level in the hierarchy (also referred to as its decomposition). Atomic tasks form the leaves of the graph-like structure. There is one process definition without a composite task referring to it. This process definition is named the top-level workflow and forms the root of the graph-like structure representing the hierarchy of process definitions. Each process definition consists of tasks (whether composite or atomic) and conditions, which can be interpreted as places. Each process definition has one unique input condition and one unique output condition. In contrast to Petri nets, it is possible to connect transition-like objects such as composite and atomic tasks directly to each other without using a place-like object (i.e., a condition) in between. For the semantics, this construct can be interpreted as a hidden condition, i.e., an implicit condition is added for every direct connection. Each task (either composite or atomic) can have multiple instances. YAWL also introduces OR-splits and OR-joins, corresponding respectively to Pattern 6 (Multi choice) and Pattern 7 (Synchronizing merge) defined in [van der Aalst and ter Hofstede, 2002]. YAWL provides a notation for removing tokens from a specified region, denoted by dashed rounded rectangles and lines. The enabling of the task that will perform the cancellation may or may not depend on the tokens within the region to be cancelled. In any case, the moment this task executes, all tokens in this region are removed. This notation allows for various cancellation patterns. The YAWL implementation has a modular architecture, composed of so-called YAWL services. The current implementation [YAWL] includes an engine, a designer, a worklist handler and a web service broker.
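The hierarchy rules just described (a single top-level workflow not referenced by any composite task, unique input and output conditions, and composite tasks referring to existing decompositions) can be captured in a small structural check. This Python sketch uses our own ad-hoc encoding, not the actual YAWL specification format:

```python
def validate_spec(definitions):
    """definitions: name -> {'inputs': [...], 'outputs': [...],
    'composite': {task: referenced definition}} (ad-hoc encoding)."""
    errors = []
    referenced = {d for spec in definitions.values()
                  for d in spec['composite'].values()}
    roots = [n for n in definitions if n not in referenced]
    if len(roots) != 1:
        errors.append('expected exactly one top-level workflow, got %r' % roots)
    for name, spec in definitions.items():
        if len(spec['inputs']) != 1 or len(spec['outputs']) != 1:
            errors.append('%s: needs a unique input and output condition' % name)
        for task, target in spec['composite'].items():
            if target not in definitions:
                errors.append('%s: composite task %s refers to missing %s'
                              % (name, task, target))
    return errors

# A two-level hierarchy: 'claims' decomposes its 'assess' task into a subnet.
defs = {'claims': {'inputs': ['i'], 'outputs': ['o'],
                   'composite': {'assess': 'assessment'}},
        'assessment': {'inputs': ['i'], 'outputs': ['o'], 'composite': {}}}
print(validate_spec(defs))   # []
```

A specification with a missing input condition or a dangling decomposition reference would yield one error message per violated rule.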


Figure 68: YAWL architecture (Source: [van der Aalst et al., 2004])

VI.4.1.2 JBPM
The aim of the Java Business Process Management (jBPM) project is to provide a flexible and extensible Java workflow management system. It is supposed to provide a very simple mechanism to start with a simple state machine, making it easy for Java developers to bundle jBPM into their projects. On the other hand, it should scale to more complex workflows and workflow patterns. In October 2004 the jBPM project joined forces with JBoss to become a critical piece of its Enterprise Middleware Platform. In the documentation of jBPM, the concept of an activity is replaced by a state and an action. A state in a process specifies a dependency upon an external actor. At process execution time, this means that the workflow engine has to wait until the external actor notifies the WfMS that the state is finished. An action is a piece of programming logic whose execution the WfMS initiates upon a specified event that occurs during process execution. The jBPM process definition language (jPDL) allows a declarative description of the business process to be provided to a JBoss jBPM server. A workflow designer should also attach programming logic as a set of Java classes. The authors claim to have designed jBPM's internal model with the workflow patterns in mind. The state model of jBPM is based on a graph with nodes and transitions. Nodes and transitions are the main ingredients of a process definition. A state is an example of a node. The state graph provides the structure of the process. Actions are pieces of programming logic that can be executed upon events in the process. There are three types of events: entering a node, leaving a node and taking a transition. While jBPM is calculating the next state, a number of these events will fire.
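The node/transition/event model described here is easy to mimic: a process definition is a graph, an execution waits in a state until signalled, and actions are callbacks registered for the three event types. The following is a generic sketch of the concept in Python; it is not jBPM's actual API:

```python
class ProcessDefinition:
    def __init__(self, transitions):
        # transitions: (from_node, name, to_node) triples
        self.transitions = {(f, n): t for f, n, t in transitions}
        self.actions = []          # (event_type, callback) pairs

    def on(self, event_type, callback):
        # event_type: 'node-enter', 'node-leave' or 'transition'
        self.actions.append((event_type, callback))

    def _fire(self, event_type, detail):
        for etype, cb in self.actions:
            if etype == event_type:
                cb(detail)

class Execution:
    """One running instance; waits in a state until signalled."""
    def __init__(self, definition, start):
        self.d, self.state = definition, start
        self.d._fire('node-enter', start)

    def signal(self, transition_name):
        # An external actor notifies the engine that the state is finished.
        target = self.d.transitions[(self.state, transition_name)]
        self.d._fire('node-leave', self.state)
        self.d._fire('transition', transition_name)
        self.state = target
        self.d._fire('node-enter', target)

log = []
pd = ProcessDefinition([('open', 'approve', 'closed')])
pd.on('node-enter', lambda n: log.append('enter ' + n))
pd.on('transition', lambda t: log.append('take ' + t))
ex = Execution(pd, 'open')
ex.signal('approve')
print(log)   # ['enter open', 'take approve', 'enter closed']
```

The actions only observe here; in jBPM they would carry the attached Java programming logic that runs while the engine computes the next state.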


VI.5 Workflow-based Business Monitoring

A key feature of BPMS [Karagiannis, 1995][Karagiannis et al., 1996] is the direct connection of performance evaluation to strategic decisions, operational decisions and re-engineering aspects based on the core elements. The importance of this connection is reflected in concepts of current interest such as Business Activity Monitoring, the Real-time Enterprise, Process- and Activity-based Costing, Process Warehouses etc. Under workflow-based monitoring we subsume all activities for gathering and evaluating business-relevant data from executed business processes, both offline and online.

VI.5.1 Key Performance Indicators

KPIs are metrics measuring the performance of the area under consideration (usually one or more business processes). During application development (which is part of the Resource Allocation Process) the aim is to implement the application in a way that allows automatic measurement of the defined KPIs. The measurement itself takes place in the Workflow Process. Using the core elements of corporations introduced in the BPMS paradigm, KPIs can be characterized by defining the elements they are related to. We call these elements "Business Monitoring Objects". For example, a KPI could measure the performance of the used application (core element "IT") at certain parts of the process (core element "Process") in a specific location (core element "Organizational Structure"). Figure 69 shows typical "sub-classes" of the core elements. To measure a KPI automatically, the business monitoring objects it is related to have to be represented in the used application(s).
[Figure 69 diagram: Business Monitoring Objects shown as "sub-classes" of the core elements, e.g. Process (Subprocess, Process Fragment, Activity, Event), Organizational Structure (Organizational Unit, Role, Location, Skills & Competences), IT (Security, Availability, Reliability, Workload, Performance) and Product (Business Object, Contract, License & Patent, Service, Immaterial & Physical Resource), connected by "implemented using", "enables and restricts" and "are interdependent" relationships.]

Figure 69 Business Monitoring Objects

Another criterion to classify KPIs is the time at which they are measured and analyzed. Using the BPMS paradigm, they are measured and analyzed during the Execution Process ("Business Activity Monitoring"), within the Performance Evaluation Process ("Business Intelligence"), or within the Strategic Decision Process (cf. Figure 43).


VI.5.2 Workflow Technology for Measuring KPIs

Business processes represent the dynamic part of an organization. During their execution the other core elements of the corporation are created, changed, or even deleted. Therefore, they serve as an "umbrella" which integrates everything that happens in an organization. As already mentioned, KPIs can easily be related to business processes, where they can be measured. This is the reason why workflow technology is often an enabler for measuring KPIs; for example, WfMC interface 5 provides a way to access runtime data [WfMC, 1998].

However, when measuring KPIs, the types of business processes in which they arise have to be taken into account. For example, it is usually difficult to measure KPIs in an ad-hoc business process with a low number of executions. Therefore it has to be checked whether a given KPI can be measured and, if so, in which way. Here, it is helpful to consider different characteristics of the related business processes.
Property                  Values
Complexity                low / middle / high
Degree of Cooperation     cooperative activities (synchronous / asynchronous) / no cooperative activities
Predictability            fixed / fixed, but ad-hoc exceptions / ad-hoc
Number of executions      low / middle / high
Business value            low / important, but not mission critical / mission critical
Involved Organizations    one ("intraorganizational process") / more than one ("interorganizational process")
Table 4 Classification Schema for Types of Business Processes

Another aspect that has to be considered is that KPIs are usually related to the business level, whereas the measurement takes place on the (technical) execution level. In [Junginger et al., 2000] these different levels are described by the artifacts "business graph" (business process model) and "execution graph" (process definition that is executed by the workflow engine). Because the abstraction level of the business graph usually differs from that of the execution graph, the audit mechanisms of workflow management systems alone (e.g. WfMC interface 5) are usually not sufficient for measuring KPIs.

VI.5.3 Business Monitoring Framework

VI.5.3.1 Business Monitoring Framework: Metamodel

The key concepts of the BMF are "KPI", "probe" and "level". KPIs can be classified into basic KPIs and aggregated KPIs, which are built on basic KPIs. Each KPI is described by its target and as-is values and provides the measurement for one or more objectives of a corporation. The goals describe long-term, mid-term and short-term objectives. For every KPI a probe delivers as-is values by monitoring the object producing the corresponding monitoring data. The data can be delivered either online or offline by data sources such as audit trails, audit databases, application execution logs etc. [Junginger et al., 2004].
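The relationship between probes, basic KPIs and aggregated KPIs can be illustrated with a small sketch. This is not part of the BMF itself; the class names, the example audit trail and the task names are assumptions made purely for illustration.

```python
# Sketch of the BMF concepts: a probe delivers as-is values from a data
# source (here an assumed audit trail of task durations), a basic KPI
# compares them to a target value, and an aggregated KPI is built on
# basic KPIs. All names are illustrative.

class BasicKPI:
    def __init__(self, name, target, probe):
        self.name, self.target, self.probe = name, target, probe

    def as_is(self):
        return self.probe()          # probe reads the monitoring data

    def meets_target(self):
        return self.as_is() <= self.target


class AggregatedKPI:
    def __init__(self, name, basics, combine):
        self.name, self.basics, self.combine = name, basics, combine

    def as_is(self):
        return self.combine([k.as_is() for k in self.basics])


# Hypothetical audit trail: task name -> observed durations in hours
audit_trail = {"check_claim": [2.0, 3.0], "approve_claim": [1.0, 2.0]}

def probe(task):
    return lambda: sum(audit_trail[task]) / len(audit_trail[task])

check = BasicKPI("avg check time", target=4.0, probe=probe("check_claim"))
approve = BasicKPI("avg approval time", target=2.0, probe=probe("approve_claim"))
cycle = AggregatedKPI("avg cycle time", [check, approve], sum)

print(cycle.as_is())  # 4.0 (2.5 + 1.5)
```

Note that the probe hides the data source: whether as-is values come from an audit trail, an audit database or an application log is invisible to the KPI, which matches the separation the metamodel makes.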


[Figure 70 diagram: a metamodel relating the Levels (Strategic, Tactical, Operational) to Goals and KPIs. A KPI is a Basic KPI or an Aggregated KPI, has a Target Value, is described by As-Is Values, and is determined by Success Factors; a Probe calculates the KPI values by observing a Monitoring Object (Organizational Structure, Process Type, Process, Process Instance, IT, Task, Product, Other) and accessing Monitoring Data delivered by Data Sources (Audit Trail/Database, Workflow Data, Execution Log, Application Data, Other).]
Figure 70 Metamodel of Business Monitoring Framework

VI.5.3.2 Business Monitoring Framework: Levels

Figure 71 gives an overview of the different levels of the BMF. We distinguish an operational, a tactical, and a strategic level.

The operational level is assigned in the BPMS paradigm to the Workflow Process. Here, supervisors monitor the execution of processes, so the focus is on the execution of process instances. Typically, alert and notification mechanisms are needed on this level, for example if an activity instance is not executed within a given deadline. Workflow management systems usually provide mechanisms for the operational level; however, if KPIs are defined on application data, additional mechanisms are sometimes needed.

The next level is the tactical level. Here, the focus is on process definitions. A typical KPI on the tactical level is the average cycle time of a process within a given time period. This means that on the tactical level the data gathered on the operational level is aggregated. Typical users on the tactical level are business people, controllers etc. Not all workflow management systems provide mechanisms for the tactical level; often additional tools are needed. Another technology typically used on the tactical level is data warehouse technology.

The highest level is the strategic level. The KPIs measured on the tactical level are aggregated again. A typical approach used on the strategic level is the Balanced Scorecard (BSC) methodology [Kaplan and Norton, 1996][Kaplan and Norton, 2000]. BSC aims at a holistic view by distinguishing different perspectives; the standard perspectives are the finance, market, processes and learning perspectives. Sometimes BSC is seen as an approach appropriate only for the whole organization; however, it can also be applied to parts of organizations, e.g. the IT department. From a workflow point of view, the process perspective is of special interest, and process-focused KPIs are usually used here.
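The tactical-level aggregation described above, such as computing the average cycle time per process type from operational-level instance data, can be sketched as follows. The record layout and field names are assumptions made for the example, not a real workflow audit format.

```python
# Sketch: operational-level instance data (start/end timestamps per process
# instance) aggregated into a typical tactical KPI, the average cycle time
# per process type, in hours. Field names and data are illustrative.

from datetime import datetime

instances = [
    {"type": "claim", "start": datetime(2004, 11, 1, 9),  "end": datetime(2004, 11, 1, 17)},
    {"type": "claim", "start": datetime(2004, 11, 2, 9),  "end": datetime(2004, 11, 2, 13)},
    {"type": "order", "start": datetime(2004, 11, 1, 10), "end": datetime(2004, 11, 1, 12)},
]

def avg_cycle_time_hours(instances):
    totals, counts = {}, {}
    for i in instances:
        hours = (i["end"] - i["start"]).total_seconds() / 3600
        totals[i["type"]] = totals.get(i["type"], 0.0) + hours
        counts[i["type"]] = counts.get(i["type"], 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

print(avg_cycle_time_hours(instances))  # {'claim': 6.0, 'order': 2.0}
```

A strategic-level scorecard figure would then be a further aggregation over these per-type averages, mirroring the 1:n aggregation steps in Figure 71.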


[Figure 71 diagram: the Strategic Level (Process Scorecard P1 ... Pn) is derived by 1:n aggregation from the Tactical Level (aggregation of the instances of each process type), which is derived by 1:n aggregation from the Operational Level (the process instances of each process type), which in turn maps 1:1 onto the Runtime Environment (execution data from Data Source 1 ... Data Source n).]
Figure 71 Levels of Business Monitoring Framework

The selection of relevant parts of the generic framework to configure a company-specific monitoring approach is based on criteria and questions such as: Which management level is the addressee of the monitoring information? On which level of aggregation does monitoring data have to be delivered? At which point of time in process execution does monitoring data have to be provided (ex ante, during execution, ex post)? What is the planned reaction time between monitoring data delivery and the management decision based on the monitoring data? Which skills does the staff have concerning gathering, processing and evaluating monitoring data? Can external monitoring data (from suppliers, partners, customers etc.) be integrated into the company's KPI calculation? What is the IT strategy concerning the current and future (workflow) application architecture? What is the current technical infrastructure supporting business monitoring?

VI.5.4 Animation

One form of monitoring business processes is animation, for example animating ebXML transactions with a workflow engine [Eshuis et al., 2003b][Eshuis et al., 2003a][Gregoire et al., 2004][Schmitt, 2004]. The overall objective is the development of an integrated tool set for supporting the modeling and validation of complex ebXML transactions. The tool set consists of an extension of a commercial UML-based CASE tool that supports the modeling of ebXML business transactions, and an animator tool that supports execution of the UML models.


The project uses a layered approach.

The business layer gives a general overview of the business transactions of a particular organization. At this layer, the global structure of each business transaction is depicted with a UML use case diagram and a UML class diagram. The use case diagram specifies the global structure of the business process underlying the business transaction. The global class diagram specifies the information manipulated in the business process. In addition, business rules can be specified in structured English. At this layer, there is no concept of a message.

The specification layer details the message-based structure of a business transaction. This detailed specification is needed to support the B2B automation of business transactions. The business process of the business transaction is specified with a UML activity diagram. Each message is specified with a class diagram. The activity diagram refines the use case diagram at the business layer. Each class diagram is a particular view of the global class diagram at the business layer. A message can have attached business rules defined at the business layer in order to constrain the message's content, as well as new rules relating the messages to each other.

At the technical layer, the business transaction is executed using the animator based on a workflow system. The infrastructure used at the technical layer is automatically configured from the models developed at the specification layer.

VI.6 Research, technologies and markets, standards

VI.6.1 Projects

From a historical perspective, the challenges of enterprise computing have risen step by step, each step also bringing new requirements for transaction support. The business process management projects covered here mostly address enactment support and contract-based coordination solutions.
VI.6.1.1 Historical perspective on enterprise systems and transactions

As a first step towards collaboration architectures we can see the development of enterprise systems. Two classes of systems have distinguished themselves: workflow management systems (WfMSs) and Enterprise Resource Planning (ERP) systems. Both automate the business processes, data transfer and information sharing across the enterprise. While WfMSs focus on process control flow, ERP systems are strong in information-centric solutions and are more flexible in adapting new service components [Cardoso et al., 2004]. Research prototypes in this style include METEOR [Kochut et al., 1999], MOBILE [Jablonski, 1994], ADEPT [Reichert and Dadam, 1998], Exotica [Mohan et al., 1995], and MENTOR [Wodtke et al., 1996]; commercial products include MQSeries Workflow [MQSeries, 2004], Staffware [Staffware, 2004], TIBCO InConcert [TIBCO, 2004], and COSA Workflow [COSA, 2004]. In the late 90s, integration of WfMSs and ERP systems was topical [Schuler et al., 1999].

The second step is the globalization of business, which makes enterprises increasingly dependent on collaboration with others. Previously internal workflows need to be extended to comprise other enterprises. This means that the workflows must satisfy the needs of both enterprises. Hence, the management of the newly established business processes becomes critical.


A business process eventually became defined as a multistep activity that supports an organization's mission, such as manufacturing a product or processing insurance claims. Interactions between partners' external business processes may be carried out based on a specific B2B standard, like EDI [UN/EDIFACT, 2003][X12, 2003] or RosettaNet [RosettaNet, 2002], or on bilateral agreements. B2B standards define the format and semantics of messages, bindings to communication protocols, business process conversations (e.g. joint processes), security mechanisms etc. A B2B framework may have to support several B2B standards and proprietary interaction protocols [Medjahed et al., 2003].

The new generation of workflow systems (IEWS, Inter-Enterprise Workflow Systems [Medjahed et al., 2003]) focuses mainly on the interactions at the business process layer [Yang and Papazoglou, 2000]. Early projects focused on integration of a known and small number of tightly coupled business processes, while more recent projects such as eFlow [Casati et al., 2000a] and WISE [Lazcano et al., 2000] focus on loosely coupled processes. An interesting project in the workflow community works on an engine prototype based on YAWL [van der Aalst et al., 2004]. [Medjahed et al., 2003] discusses general requirements on business process management systems, and [O'Riordan, 2002] the process execution support features.

With the increasing need for collaboration between enterprise systems, the requirements for transaction support have grown. Traditionally, process coordination was achieved by using reliable queues. Message queuing products usually provide guaranteed message delivery, routing, security, and priority-based messaging. For reliability, transactional semantics was included. In the 1990s, publish/subscribe became the preferred building block for complex process implementation and coordination. The paradigm provides dynamism to networks of multiple sources and destinations.
Events are classified either through a set of subjects (topics) or by the contents of the event (details). Early on, it was recognized that business processes contained activities that would benefit from transactional semantics, like isolation and atomicity. However, treating an entire business process as a single ACID transaction would be impractical, since business processes are of long duration, they involve many independent systems requiring costly coordination, and they often have external effects. Several models of long-running transactions have been developed to provide ACID-like properties at the business process level and to handle task failures [Elmagarmid, 1992][Jajodia and Kerschberg, 1997]: saga models [Garcia-Molina and Salem, 1987] with compensation, and the activity-transaction model (ATM) [Dayal et al., 1991] with nested and chained transactions in closed [Moss, 1985] and open [Weikum and Schek, 1992] form. Transactional workflow models were also presented, including ConTracts [Reuter, 1992][Reuter et al., 1997], WAMO [Eder and Liebhart, 1995][Eder and Liebhart, 1998], WIDE [Grefen et al., 1997], and CREW [Kamath and Ramamritham, 1998]. In commercial products and standards the same development was seen, with virtual transactions with compensation, serializability, locking, and virtual isolation in HP Process Manager [Krishnamoorty and Shan, 2000].
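The compensation-based recovery used in the saga models cited above can be illustrated with a minimal sketch: each step carries a compensating action, and on failure the already-completed steps are compensated in reverse order. The step structure, names and state layout are hypothetical, not taken from any of the cited systems.

```python
# Minimal saga-style sketch: pairs of (action, compensation); a failure
# triggers backward recovery by running the compensations of completed
# steps in reverse order. All step names are illustrative.

def run_saga(steps, state):
    """steps: list of (action, compensation) pairs operating on state."""
    done = []
    try:
        for action, compensation in steps:
            action(state)
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):  # backward recovery
            compensation(state)
        return False
    return True

def reserve(s):   s["reserved"] = 1
def unreserve(s): s["reserved"] = 0
def charge(s):    s["charged"] = 1
def refund(s):    s["charged"] = 0
def ship(s):      raise RuntimeError("shipping failed")

state = {"reserved": 0, "charged": 0}
ok = run_saga([(reserve, unreserve), (charge, refund), (ship, lambda s: None)],
              state)
print(ok, state)  # False {'reserved': 0, 'charged': 0}
```

This illustrates why sagas trade isolation for practicality: intermediate effects are visible between steps and are semantically undone rather than rolled back by the transaction system.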


VI.6.1.1.1 Business process management systems and contract-based collaboration coordination
Multi-partner business process management (inter-organizational processes) has been under study and development at Loria for several years, gaining experience with architects, car builders and their suppliers [Banatallah et al., 2004][Perrin and Godart, 2004a][Godart et al., 2004][Perrin and Godart, 2004b][Perrin et al., 2003][Bitcheva et al., 2003][Perrin and Godart, 2003][Bhiri et al., 2003][Perrin et al., 2002].

A healthcare architecture from Sweden [Wangler et al., 2003b][Wangler et al., 2003a] introduces VITA Nova, where collaborating process managers coordinate inter-organizational processes. A process manager is responsible for message broker duties, such as handling conversions and messaging across IT systems, and for measuring and optimizing the process during operation.

CMI (Collaborative Management Infrastructure) [Schuster, 2000][Geppert and Tombros, 1998] introduces an architecture for inter-enterprise workflows, based on CORE engines. The CORE engine provides primitives for coordination and awareness, like defining resources, roles and generic state machines. The architecture extends the traditional workflow model by placeholder activities that are dynamically replaced at run-time. The CMI trading partners become tightly coupled with respect to communication protocols and message formats.

eFlow [Casati et al., 2000b] is a platform that supports specification, enactment and management of composite services. The service process engine responsible for enactment is composed of the scheduler, the event management and the transaction managers. A service process broker is used to discover the actual services that can fulfill the required actions.

Ashutosh Raut gives a solution for Business Process Integration [Raut and Basavaraja, 2003] that highlights methodologies for integrating applications both inside and outside of the organisation, leveraging technologies like Web Services and XML messaging; see Figure 72.


Figure 72 Enterprise business process integration architecture [Raut and Basavaraja, 2003]

This approach uses a BPM package to support Business Process Integration (BPI). The package contains a workflow engine which is able to import and interpret the modelled process definitions stored in a standard format (WFMC:XML or BPMI:XML). The engine executes the different activities and coordinates the flow of information between different enterprise applications as part of the business process. A process can access applications hosted both inside and outside the organisation.

Coordination between business processes and effective interleaving of local processes have been studied, besides the standardization efforts, in [Jung et al., 2004][Segev et al., 2003][WebV2, 2003]. Tilburg University works on contract-driven coordination and transaction management [Yang et al., 2001][Yang and Papazoglou, 2000]. Course material covers interoperable transactions, inter-organizational workflow, contract management, and supply chain management.


From the Business Process Integration and Automation (BPIA) perspective, the autonomy of enterprises correlates with the fact that the systems being integrated have their own process choreography engines and execute internal business processes privately. Hanson et al. suggest general-purpose conversation support as a solution for business process integration [Hanson et al., 2002]. Their approach separates interoperability support from business processes, for reasons of enterprise sovereignty, the different timescales of business process and interoperability technology changes, and ease of modification of business processes.

The Web-Pilarcos project [Kutvonen, 2004][Kutvonen, 2002] develops B2B middleware services for inter-enterprise collaborations. A central element in the architecture is the eCommunity contract, which captures interoperability aspects at various levels of modeling or abstraction. The range of elements reflects the ODP viewpoint concerns. The proposed collaboration environment supports two phases of the eCommunity lifecycle: a breeding environment, where discovery of potential partners is supported by an enhanced trading service and static interoperability tests, and an operational environment, supported by monitoring and enactment of enterprise applications. The external business process description is a central element in the eCommunity contracts, both for detecting breaches at operational time and for evolution support. The business process model is not used as a source of code generation, nor executed in its abstract form. Instead, the model is used at operational time as a reference for conformance to the agreed behaviour; the application services themselves are active and independent. The B2B middleware services provide concepts and facilities for managing eCommunities with dynamic properties and evolution through reflection and the use of model repositories. A recent addition to the project family studies trust management in this kind of environment.

Research on contract-based coordination and interleaving of business processes is well presented in [van den Heuvel and Weigand, 2000][Weigand and van den Heuvel, 1998][van den Heuvel and Weigand, 2003][Yang et al., 2001][van den Heuvel, 2003].

WISE (Workflow-based Internet SErvices) [Schuler et al., 1999][Lazcano et al., 2000] addresses process definition, enactment, monitoring and coordination in a virtual enterprise setting. The process definition component allows composition of virtual business processes from building blocks published by partners. The process model is then compiled for enactment. Process monitoring provides information for load balancing, routing, QoS, and analysis purposes.

CrossFlow [Ludwig and Hoffner, 1999] uses contracts as a basis for cooperation management. The key element in the architecture is a trader or matchmaking engine that matches contract suggestions and requests from potential partners. Based on the specifications in the contract, a dynamic contract and service enactment infrastructure is set up.

Dynamic aspects of workflow models and the relationship between business processes and workflows have been studied in various contexts, e.g., in DWM [Meng et al., 2002] and [Geppert and Tombros, 1998], and in ADEPT [Reichert and Dadam, 1998][Muller and Rahm, 1999]. Adaptation to failures and collaborative business process management have been discussed by [Chen and Hsu, 2001a][Chen et al., 2000][Chen and Dayal, 1997]. Reflective methods for adaptive workflow management have been discussed by [Edmond and Hofstede, 1999][Henl et al., 1999].
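The idea of using an external business process model at operational time only as a reference for conformance to agreed behaviour, rather than executing it, can be illustrated with a minimal sketch. This is not the Web-Pilarcos implementation; the event names and the transition table are invented for the example.

```python
# Sketch: the agreed external behaviour is kept as a simple transition
# system and used only to check that observed message events conform to
# the contract; a deviating event is reported as a breach. Names invented.

allowed = {  # state -> {observed event: next state}
    "start":     {"order": "ordered"},
    "ordered":   {"confirm": "confirmed", "reject": "start"},
    "confirmed": {"deliver": "done"},
}

def detect_breach(events, start="start"):
    """Return the first event that breaches the agreed behaviour,
    or None if the observed sequence conforms."""
    state = start
    for e in events:
        if e not in allowed.get(state, {}):
            return e
        state = allowed[state][e]
    return None

print(detect_breach(["order", "confirm", "deliver"]))  # None
print(detect_breach(["order", "deliver"]))             # deliver
```

The application services remain active and independent; the monitor only observes their message exchange against the model, which is the distinction the text draws between conformance monitoring and model enactment.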


VI.6.2 Technologies and market

The following technology solutions have been contributed to this state-of-the-art document by INTEROP NoE partners.

VI.6.2.1 Computas

Computas provides the modeling tool Metis. Metis is a methodology-free tool that supports various meta-models (languages/templates), mainly UML, BPMN and ITM (IT Management); of these, BPMN is the most relevant for this work. There is also a repository, called the Metis Team server.

Metis ITM (Information Technology Management) is a template designed to provide all objects and relationships necessary to model and interrelate Business and Information Technology. ITM is designed with and implemented in the Metis product family by Computas AS. It was started by NCR Metis back in 1997 as the main methodology for NCR consultants to align business strategies and data-warehousing solutions. It has since been developed to cover application portfolio management, enterprise architecting standards, corporate strategic planning and extensive business process modeling. The ITM template targets all aspects of enterprise architecture modelling, including business plans, operations, strategies, applications and processes. It is implemented as an extendable set of partial meta-models. The metamodels currently cover the following domains: analysis, application, competence, concept, datastore, enabling IT technology, environmental, IDEF1X, IDEF0, IT architecture, IT library logical, IT strategy, IT technology, information, location, organization, process business, process logical, product, product standard, project, reporting, resource, strategy and rule, swimlane diagram, transition, timeline diagram. However, it should be noted that all metamodel domains may easily be altered and/or extended using the Metis metamodeling tools, allowing the user to target individual key artefacts.
In addition, the ITM template has a UML-ITM cross-domain metamodel to the Metis UML Template, allowing the user to perform software modelling (UML) with full business alignment (ITM), and it is extended with BPM enhancements in order to support the newly released BPMN standard. The BPM enhancements are BPMN-inspired extensions to the Process Logical Domain, including the concepts of events, gateways, and sequence and message flow, all to be used both for process swimlane modelling and for traditional process modelling. This extension provides the ITM template with more BPM viewstyles, while still using the existing ICOM object type for integration into the rest of the enterprise architecture model. In addition, these enhancements extend the process modelling options to: traditional hierarchic decomposition, processes created in swimlane diagrams, and combinations of the previous two, where process breakdown structures are presented with swimlane views.

VI.6.2.2 HP Labs

Various collaborative process management systems have been developed. Such systems present an architecture where public business processes are executed by a group of collaborative process managers (CPMs), each representing a participant in the business process. Each CPM is used to schedule, dispatch and control the tasks of the process it is responsible for, and the CPMs interoperate through an inter-CPM messaging protocol. The solution from HP Labs [Chen and Hsu, 2001b] introduces a decentralized collaborative process management architecture and an XML-based Collaborative Process Definition Language (CPDL), an extension of PDL. The CPM implementation is embedded into a dynamic software agent architecture, E-Carry, which also involves the use of the E-Speak communication infrastructure from HP.

VI.6.2.3 BizTalk

The BizTalk approach [BizTalk, 2004][Roxburgh, 2001] relies on a central schema repository and a layered logical architecture. The schema repository provides means to publish and validate XML-based schemas and to manage their evolution and relationships. The architecture consists of an application layer, a BFC (BizTalk Framework Compliant) server layer, and a transport layer. Applications communicate with each other by sending business documents through the BFC servers. The BFC servers can message amongst themselves using multiple communication protocols. BizTalk orchestration is proposed for inter-enterprise process execution. The centralized schema repository provides schema validation and control, but is not scalable. See also the architectural BizTalk presentation in section IV.3.1.

VI.6.2.4 Popkin Software

System Architect by Popkin Software [Popkin, 2004] is a comprehensive and powerful modeling solution designed to provide all of the tools necessary for the development of successful enterprise systems. It is the only tool to integrate, in one multi-user product, industry-leading support for all major areas of modeling, including business modeling, object-oriented and component modeling with UML, relational data modeling, network architecture design, and structured analysis and design.
System Architect provides a rich set of business modeling diagrams, in order to capture the entire enterprise from various business perspectives: from high-level business objectives and organizational makeup, through event-driven business process and functional modeling, organizational modeling, and design of the applications and databases that make the business run, to the network architecture of where everything is. System Architect supports the new Business Process Modeling Notation (BPMN) as well as the IDEF methodology. Moreover, it provides extensive support for UML, the industry standard for analysis and design of software systems and applications, XML design functionality, and BPEL generation from BPMN Business Process Diagrams.

VI.6.3 Standards

Standards valid across the various parts are collectively described in part VIII. This section contains particular business process and workflow oriented standards.


VI.6.3.1 Workflow reference model by WfMC

The reference model is briefly described and evaluated by the WfMC [Hollingsworth, 2004] as follows. "A business process management system is a middleware system that provides a (central) point of control for defining business processes and orchestrating their execution [Hollingsworth, 1994][Jablonski and Bussler, 1996a][Sheth et al., 1997]. The process manager records the execution state of the process and routes requests to component applications or human agents to execute tasks. Enterprise-scale systems typically provide transactional semantics and support for backward and forward recovery of business processes. The business process managers rely on underlying message brokers, transactional queue managers, or publish/subscribe middleware."

In order to achieve uniformity and interoperability of workflow management, the Workflow Management Coalition (WfMC) has defined the Workflow Reference Model [Hollingsworth, 1994]. The model includes a standardized set of interfaces and data interchange formats between workflow system components. However, as it is, it provides little support for inter-enterprise business processes. Figure 73 illustrates the essential interfaces of the reference model. Of special interest in this subsection are interfaces 4 and 5.


Figure 73 Reference model for workflow management [84].

Additional standardization efforts based on the WfMC reference model include the OMG Workflow Management Facility, which developed the jointFlow standard [JoinFlow, 1998]. This further induced the development of the Simple Workflow Access Protocol (SWAP) [Bolcer and Kaiser, 1999] for instantiation, control and monitoring of workflow processes, and the Wf-XML message set [Wf-XML, 2003] for defining data contents.

From centralized workflow engine solutions, the focus then moved to Distributed Workflow Systems (DWSs), where the workflow specification is split into sub-workflows, each encompassing all the activities that are to be executed by a given entity within an organization [Muth et al., 1998]. DWSs require each organization to deploy a full execution engine and to adopt the same workflow model. DWSs assume a tight coupling model among the sub-workflows and tight coordination of the global workflow. Thus the cost of establishing new relationships is significant, as business processes must be modeled and deployed across all participants. The model is applicable to single-organization needs [Medjahed et al., 2003].

"Open-architected process managers combine predefined workflow with ad hoc changes, use database and repository technologies for information sharing and persistence, use middleware technology for notification, distribution, and application invocation, and take advantage of object-oriented technologies to provide customization. They focus on means for optimizing resources, enforcing policies and providing monitoring and audit trail services."

True inter-enterprise business process management (IEW) becomes possible when public and private processes are separated [Bussler, 2001][Dayal et al., 2001]. This separation makes it possible to add interaction protocols, back-end applications, or partners without changing private business processes. On the other hand, local changes have no impact on public processes.

There is a large number of XML-based frameworks for B2B interactions. As these frameworks overlap and compete with each other, the issue of interoperability shifts from the level of applications to the level of standards [Medjahed et al., 2003]. A trading partner has to deal with several standards at the same time. As a solution, B2B protocol and integration engines have been suggested, to execute actual message exchanges according to various standards [Bussler, 2001].

"The collaborative process framework is proposed to extend the centralized process management technology by involving multiple parties that each play a role in the process. The process definition needs to be a commonly agreed business interaction protocol, and the process execution is performed collaboratively by multiple engines [Chen and Hsu, 2001a]. The collaborative process framework relies on the development of process-compliant services at enterprises. Enterprises determine the roles they wish to play in certain processes and develop role process specifications and corresponding internal execution control, including invocation or dispatching of local services. The process-compliant role specifications can be published as web services, and local services can then be accessed directly through the local collaborative process manager.
Each execution of a collaborative process, or a local process instance, consists of a set of peer process instances run by the Collaborative Process Managers (CPMs) of the participating partners. These peer instances conform in behaviour to the specification of the role set forth in the common process definition, but may have private process data and subprocesses. The CPMs interoperate through an inter-CPM protocol to exchange the process data (or documents) and to inform each other of the progress of the process execution. While the CPMs are capable of enforcing the common business process specifications, they can differ significantly in functionality, such as support for internal data flow, local service integration, and nested subflows.


Figure 74 ebXML Architecture

3. Collaborative business process execution framework

Although early versions of the reference model already set as a goal that models should be constructed as abstract views of business processes, separated from implementation technologies, the concrete bindings introduced were initially relatively low-level, programming-view bindings. Currently, work is being done to use more appropriate, service-oriented abstractions as targets. The following is adapted from [Dayal et al., 2001].

The reference model takes it as a norm to address ongoing change to business processes. The lifecycle view provides three phases: modeling and defining; operation and implementation; and analysis and improvement. The business process management components supporting these phases are grouped around a process repository. The process repository supports design and modeling tools for business processes, and audit and analysis tools that provide feedback for improving process definitions. The lifecycle is also supported by control and interoperability components using the process repository.


The reference model identifies three viewpoints on business process management systems: views on processes, organization, and information. The process view discusses how information is consumed, generated and transformed, and how organizational roles and responsibilities are governed. The organizational view discusses the roles of processes, and access to and ownership of information. The reference model recognizes data classes for workflow control, workflow data, and application data, but is weak on information marshaling within a process.

The reference model presents a functional component model of business process management, to identify points where product interoperability is required. The BPM component model addresses derivation of process models in terms of interfaces, process fragments, choreographies, organizational models, and information models. The facilities for this work include a conceptual model notation, a process repository for storing executable models, and models for services, resources and information. In addition, the model addresses enactment of processes in a service delivery environment.

In the conceptual model, a more urgent need is to position processes within end-to-end process delivery chains, involving interfaces with processes within other organizational domains. Examples of these chains include customer relationships and subcontracted service organizations. As a resolution, end-to-end processes are considered to be combinations of reusable process fragments, for which a separation between internal and external views is made. The internal view defines the actual, internal behaviour, involving the resources used for enactment. The external view provides the externally visible behaviour of a black box. The external views of the fragments together need to form a consistent end-to-end process, a consistent messaging choreography.
One of the challenges of the BPM reference model is that of turning the conceptual model into executable models, as different products have different internal representation structures and a large amount of local detail needs to be developed for the internal model. For this challenge, XPDL has been developed. In turn, the executable models need to be instantiated into running services by business process management systems (BPMSs). Here, service definitions with interface definitions, resource access points, permissions and message types enter the picture. Applicable work has been done with web services environments and other service-oriented architectures.

The runtime message exchanges between process fragments in different domains are represented by choreographies, i.e. service interaction models. Interactions between process fragments need to be modeled from the point of view of both the process semantics and the context data semantics. Wf-XML has been developed for this kind of model [Wf-XML, 2003].

For the internal process definitions, a number of standards provide means to represent process flow, events and decision points; some standards also provide for specifying resource associations. The different approaches that have been used can essentially be categorized into the following two. Transition-based representations are typically derived from Petri net methodology; instead of transitions, activities with pre- and postconditions can be used. Role Activity Diagrams define processes through actions associated with abstract roles. A topical problem still is how to define a common representation of business processes, so that translation between process-based and role-based models could be automated. Work of this category has


been performed in the development of IDEF, UML, RADs, PIF, PSL [ISO/DIS, 2004][PSL, 2004], WPDL [WPDL, 2004], XLANG [Thatte, 2001], BPML [Arkin, 2002], and BPEL4WS [Curbera et al., 2002b]. Rules evaluation facilities are improving as flexible rules-processing components are implemented and interfaced to BPM engines. This supports complex evaluations related to transitions and resources.

The illustration in Figure 75 classifies standards related to the WfMC reference model. The two leftmost columns cover the process definition phases, addressing internal and external process definitions. As common facilities, UML and BPMN are shown. Internal process semantics can be captured by XPDL, BPEL4WS [Curbera et al., 2002b] or BPML [Arkin, 2002]. External process definition involves definition of interaction by BPEL4WS [Curbera et al., 2002b], BPML [Arkin, 2002], BPSS, WSCL or WSCI [Arkin et al., 2002]; interoperability semantics by Wf-XML; endpoint definition by WSDL [WSDL, 2002]; and data format agreement by XML. The two rightmost columns cover process execution aspects, again treating the internal and external views separately. In the external view, discovery of services is addressed by UDDI/DISCO [UDDI, 2003]; definitions of B2B process schemas by CPA/CPP and RosettaNet PIPs [RosettaNet, 2004a]; and runtime B2B interaction by Wf-XML [Wf-XML, 2003]. In the internal view, the discussion covers process state notations by the WfMC Process & Activity Models, audit formats by WfMC IF5, runtime interaction syntax by BPQL, and runtime interaction functions by the WfMC WAPI.
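As a rough illustration of the transition-based style described above, the following sketch represents a process as activities with pre- and postconditions and repeatedly fires whichever activity is enabled. The activity and condition names are invented for the example; this is not drawn from any of the cited standards.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    pre: set   # conditions that must hold before the activity can run
    post: set  # conditions established once it completes

def run(activities, initial, goal):
    """Fire any activity whose preconditions hold, until the goal
    conditions are reached or no activity is enabled."""
    state, trace = set(initial), []
    pending = list(activities)
    while not goal <= state:
        enabled = [a for a in pending if a.pre <= state]
        if not enabled:
            raise RuntimeError(f"deadlock: nothing enabled in state {state}")
        a = enabled[0]
        pending.remove(a)
        state |= a.post
        trace.append(a.name)
    return trace

# A toy order-handling process
process = [
    Activity("check_credit", {"order_received"}, {"credit_ok"}),
    Activity("reserve_stock", {"order_received"}, {"stock_reserved"}),
    Activity("ship", {"credit_ok", "stock_reserved"}, {"shipped"}),
]
print(run(process, {"order_received"}, {"shipped"}))
# ['check_credit', 'reserve_stock', 'ship']
```

Note that "ship" only becomes enabled once both of the other activities have established its preconditions, which is exactly the dependency structure a transition-based notation captures.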

Figure 75 Classification of workflow management standards [Hollingsworth, 2004].


VI.7 Issues, gaps, priorities, conclusions

When entering the field of business process modeling (BPM), one is confronted with an overwhelming number of tools and modeling languages. Often these languages and tools have very little in common. In most cases, the conceptual domains that are covered differ from language to language. Some emphasize elements of workflow in the models, others concentrate on quantitative analysis, and yet others try to integrate business processes and supporting information technology. Moreover, software tools are an important success factor for a language; some of the most popular languages (e.g. ARIS [Scheer, 1994b][Scheer, 1994a]) are proprietary to a specific tool. It is clear that none of them has succeeded in becoming "the standard language". Overall, there are a number of aspects on which almost all of these languages score low:
- the relations between domains (views) are poorly defined: the models created in different views are not integrated;
- most languages and notations are not standardized;
- many languages have weak formal support;
- most languages miss the overall architectural vision on an enterprise;
- most of the business process modeling languages focus on modeling the internal business processes and pay little attention to interoperability issues.

It has been shown that current workflow and coordination systems are not flexible enough to support the needs of cooperative, virtual enterprises. The needs are essentially as follows:
- respect for the autonomy of each partner (public/private processes);
- coordination of the complex interactions existing in a multi-partner context (service composition/orchestration, advanced transaction models, ...);
- use of widely accepted technologies in order to facilitate integration and interoperability;
- composition and integration of processes or process fragments;
- control of the overall process by contract monitoring.

It cannot be assumed that all potential runtime interactions can be predefined in choreographies. Thus, a generic interoperability protocol is fundamental for agreeing on a choreography, and for assigning, invoking and terminating it. The focus of traditional workflow management systems is on the enactment of processes; there is little support for diagnostics. Few workflow management systems support simulation, verification and validation of process designs. Few systems support collection and interpretation of real-time data [van der Aalst et al., 2003].


VII Enterprise Interoperability for non-functional aspects


VII.1 Introduction to Non-Functional Aspects

VII.1.1 What are Non-Functional Aspects?

So, what are non-functional aspects? The underlying motivation for introducing them is the need for separation of concerns; there is generally a business process view of an activity, concentrating on the main behaviour of an enterprise and the applications that support it, but there are also non-functional aspects that concentrate on other areas and that can be specified largely independently. The most common examples given are probably quality of service and security requirements. These normally get introduced by describing them as constraints that modify the basic system behaviour, but without themselves requiring extra behaviour to be added. This is a bit vague, particularly if you ask what adding extra behaviour means; for example, is constraining or refining an action an addition? It depends on how significant the change is. Non-functional aspects typically address how well the behaviour is performed (or should be performed). In other words, if an observable effect of a system action can be quantified (implying that there is more to say than whether the behaviour is done or not done), the non-functional aspect of that behaviour can be described.

It is important to remember, however, that saying above that a non-functional aspect can be specified largely independently of the functional behaviour does not imply that these aspects can be neglected. The separation of concerns allows structuring of the design work, but it is important that all the non-functional aspects are considered from the beginning of any system design, and that the resultant design should be considered and reviewed as a whole.

VII.1.1.1 Relation to MDA

It may be easier to see what is going on by considering NFA in a model-driven context, because there we bring out the abstract model of the business process for special attention.
So the NFA part of the model is separated from the basic business process model by an intentional choice made by the designer. Any changes in behaviour to satisfy NFA requirements appear as a result of the transformations applied to unify the NFA model with the Computation Independent Model (CIM), which, in MDA, is the main model describing the business process. However, this picture is a bit too simple. We generally want to place non-functional requirements on particular elements of the CIM, giving, for example, the performance of particular service elements, or the non-repudiability of chosen groupings of data items. So the CIM is likely to contain labelling of elements with NFA assertions. Thus the behaviour may itself need to be structured in preparation for the merger with the NFA, making sure that suitable hooks or decision points are made visible.

In interoperation, we are likely to be bringing together different systems with distinct, and possibly changing, non-functional requirements. It would, therefore, be a mistake to think that the MDA technologies can be applied to build systems that automatically satisfy the NFA requirements. Many of the relevant decisions need to be taken dynamically when communication is initiated, not when the system is built. What MDA offers is the ability to support a foreseen range of requirements, and the mechanisms for negotiation and support of the specific requirements on each occasion. If the


expected range of requirements changes, the MDA transformation process can be repeated to generate negotiation mechanisms operating over the wider range of options.

VII.1.1.2 A taxonomy of non-functional aspects

In speaking of non-functional aspects, we imply that there is an enumerable set of aspects, and that any particular requirement relates to just one of them. This is untrue; a single user requirement may impact several aspects, and different communities use incompatible taxonomies. In particular, the communities working on Quality of Service and on Security both claim large and overlapping parts of the problem space as their own. One could work in terms of just a few broad aspects, but we choose to structure this report on a finer-scale taxonomy, and choose to distinguish, for example, security, trust and availability, even though a security expert might well make common currency of them all. Although this fine-scale approach gives each section a clearer focus, it means that the placement of some pieces of work of broader scope is somewhat arbitrary, but not, we hope, misleading.

VII.1.1.3 Architectural frameworks

A number of proposals have been made for organizing the process of combining functional and non-functional aspects by using a suitable architectural framework to represent the different aspects. These include [318] [164] [3] [206] and [437]. Other authors have concentrated on the methodological aspects of design using non-functional aspects [131] [110] [119] [118] [181] [331], or on patterns for supporting them [202]. The work in [385] is particularly relevant here, in that it aims to create an ontology for the description of interoperation.

Non-functional aspects at one abstraction level may correspond to a functional aspect or solution at a lower abstraction level. An example of this principle is the non-functional requirement to perform a service within a specified delay threshold in a distributed system.
This could be resolved by means of compression, to reduce the amount of data transported over the connection links. This solution implies the need to include adequate compression and decompression components (thus adding functionality to the detailed view of the system).

VII.1.1.4 Combining Functional and Non-functional Aspects

Once the functional and non-functional aspects of a design have been identified, the system-building process needs to combine them to create a single, consistent solution with all the required properties. There are two ways of approaching this:
- the various models can be combined by performing a unification process, of the kind used to unify the ODP viewpoint models [55], and the result used to generate an implementation;
- implementations of the various models can be produced and then combined using techniques like Aspect-Oriented Programming [171] [426] [149] [134] [323] [291].
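The unification approach can be sketched very simply: separately specified NFA overlays are merged onto the elements of a functional model, with conflicts detected. All model elements and assertion names below are invented for the illustration; a real unification process (e.g. over ODP viewpoint models) is considerably richer.

```python
# Hypothetical sketch: elements of a functional (computation-independent)
# model are labelled with non-functional assertions kept in separate
# overlays, and a unification step merges them per element.

functional_model = ["receive_order", "check_credit", "ship_goods"]

performance_overlay = {"check_credit": {"max_delay_ms": 200}}
security_overlay = {"check_credit": {"encrypted": True},
                    "ship_goods": {"non_repudiation": True}}

def unify(elements, *overlays):
    spec = {e: {} for e in elements}
    for overlay in overlays:
        for element, assertions in overlay.items():
            if element not in spec:
                raise ValueError(f"NFA refers to unknown element {element!r}")
            for key, value in assertions.items():
                if key in spec[element] and spec[element][key] != value:
                    raise ValueError(f"conflicting NFA {key!r} on {element!r}")
                spec[element][key] = value
    return spec

print(unify(functional_model, performance_overlay, security_overlay))
```

The point of the sketch is the separation: the behaviour and each NFA overlay can be authored independently, and inconsistency only surfaces at unification time.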

Some specific applications of the ideas of non-functional requirements can be found in [442], which discusses interorganizational work-flow, [156], applied to SCM, and [201], which considers middleware-based applications.


VII.1.2 Modelling Non-Functional Aspects

Most of the work on modelling non-functional aspects takes an object-based approach, adding constraints to a description of object behaviour. The modelling of NFA using the MIKE approach to the development of knowledge-based systems is described in [319].

VII.1.2.1 Contracts and composition

We can identify roles for several types of model for non-functional aspects. First, for each kind of aspect, we will need basic models for representing the basic semantics of the aspect: a performance model, a security model, and so on. Such models define the metrics used to express the requirements associated with the aspect. These models then need to be combined so that the properties of systems can be described in terms of the subsystems that make them up, and this process of composition is intimately related to describing the properties needed for interoperability. The analysis of the compositional properties of NFAs can be structured by using the idea of a contract expressing the exchange of obligations between an object and its environment; the composition process can be expressed as the creation of a more general contract from the individual simpler contracts expressed from the points of view of the objects participating in the composition.

VII.1.2.2 The ODP QoS Reference Model

One particular way of expressing how non-functional requirements are affected by distribution is given in ODP. ODP provides a standardized reference model that can be used to understand how non-functional aspects can be composed. The standards are [241] [238] [239] [242], and further formalization of key concepts can be found in [351] [352]. One part of the reference model is a submodel concerned with quality of service [247].
One of the key ideas in this framework is that we can regard non-functional properties as governed by a contract; each component in a system is party to such a contract, which states its obligations and expectations (the principles are described in [316], based on [300]). A component should meet its own obligations for as long as its expectations on other parts of the system (i.e. on its environment) are satisfied. However, if some other component departs from its obligations, or a failure occurs, such that the expectations of the component we are considering are not met, it is then operating outside its design envelope and all bets are off; the component's required behaviour becomes undefined. This is basically a variation on the familiar pre- and post-condition structure.

Once this basic idea of the interplay of expectations and obligations is established, though, the framework goes on to consider a variety of situations in which components are composed, and gives the rules governing different composition operators. These rules can take different forms, which can in turn be used to imply a classification of the various non-functional properties. For example, time delay in performing some sequential workflow is normally the sum of the delays introduced by each participant, but the delay for a step that consists of parallel branches within a workflow is normally the maximum of the delays of any of the branches. On the other hand, throughput in a sequence of communicating components is generally the minimum of that supported by the individual components, while the throughput of a parallel construct may, depending on other properties of the composition, be the sum of the capabilities of the components. Measures of probability of successful operation typically compose as the product of the probabilities of success of their subcomponents.
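These composition rules can be stated very compactly; the following sketch simply restates the rules from the paragraph above as executable definitions. The component figures are invented, and the success rule assumes independent failures, as the text notes.

```python
def seq_delay(delays):      return sum(delays)   # sequential workflow: delays add
def par_delay(delays):      return max(delays)   # parallel branches: slowest dominates
def seq_throughput(rates):  return min(rates)    # pipeline limited by slowest stage
def par_throughput(rates):  return sum(rates)    # parallel capacity can add up

def seq_success(probs):                          # independent steps: probabilities multiply
    p = 1.0
    for x in probs:
        p *= x
    return p

# Three components in sequence: (delay ms, throughput req/s, success probability)
components = [(20, 100, 0.99), (50, 80, 0.995), (30, 120, 0.999)]
d, t, p = zip(*components)
print(seq_delay(d), seq_throughput(t), round(seq_success(p), 4))
# 100 80 0.9841
```

Note how the three metrics of the same composite obey quite different operators; this difference is what drives the classification of aspects used later in this section.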


VII.1.2.3 NFA and Interoperability

How do the NFAs relate to interoperability? For a feature to be important in determining interoperability, it must impact the shared properties of the model of the communication: it must form part of the abstract model that needs to be shared, because otherwise independent decisions about NFAs could be made differently in the various domains.

This may or may not be simple. For example, stating a transit delay between points in two communicating domains requires a common view of where the endpoints are, and this must be agreed because it places a constraint on how long the views of the state of the world from each endpoint can be expected to remain different. This involves internal mechanisms to ensure that the required delay is met, but there should not be a great deal of argument as to whether the target has been met or not. On the other hand, a requirement that a data item is kept secret and not released to others requires an agreement as to who the communicating parties are; if one sees itself as a single object, but the other sees a refinement in which there is a primary recipient and some associated objects, the two sides might not agree as to whether the secrecy requirement has been met. Again, what matters is the level of abstraction of the shared view. We would also need to check the refinement of every step on the path to see that there are no possibilities of leakage. One would need to be able to distinguish, for example, between a trusted interceptor introduced to perform format conversion and a malicious man-in-the-middle attack.

Once the targets for NFAs have been agreed in the abstract model, suitable supporting mechanisms need to be incorporated as part of the model transformation process. This would be likely to involve the use of templates and information sources specific to the NFA concerned and consistent with local resource provision policies.
We will also need to be concerned with the interaction between constraints representing functional and non-functional aspects.

VII.1.3 Work on Interoperability applied to NFA

Although a great deal has been written about non-functional aspects of systems, particularly relating to quality of service, comparatively little is directed at the specific problems of managing non-functional aspects in the interoperability of enterprise systems [309]. However, some useful framework material exists, and one particular piece of work that provides a basis for analysis of the field is the ISO ODP Quality of Service Model [247] [310].

From the standpoint of establishing interoperability, the first step is to ensure that the participants do indeed share a single agreed abstract model that forms the basis for preserving the necessary aspects of semantics in the communication that takes place. This will include agreement on the model defining the interpretation and metrics for non-functional aspects. The next step is the negotiation of targets for the resulting composite system that will satisfy the goals of the participants. Having done this, the division of responsibilities between the participants must be agreed, particularly for aspects with properties that are loosely additive, since these imply some kind of trade-off. However, metrics differ in the way they are modified when systems are combined.

VII.1.3.1 How different non-functional metrics compose

We can use these different kinds of composition properties to provide a classification of non-functional aspects, because they correspond to the different kinds of interaction needed to support interoperability. Restricting the discussion, for the moment, to the interoperation of just two


systems, we can consider the set of goals for the resultant composite system: perhaps that operations must be performed within a given time, at a given rate, or with a given level of privacy or possibility of non-repudiation. Interoperability is considered successful in this respect if the target for the composite system is met.

Some such properties can be improved by application of suitable mechanisms; error rate can be reduced by using an error-correcting code, for example. Others, such as the delay arising from transmission of light in an optic fibre, cannot easily be avoided. We can therefore classify non-functional aspects as either negotiable or non-negotiable (that is, limited to reporting properties that cannot be improved).

Turning now to the management of composition itself, we can see that aspects can also be classified by the nature of their composition rules. We can distinguish:
(a) aspects in which the composition of two components with the same target will yield a composite with that target. Throughput is one example; another is privacy. Once the target value is agreed, it immediately becomes the value applicable to the subsystems separately.
(b) aspects in which the resultant property is the sum of the values offered by the components, such as transit delay or error probability (assuming independent errors). In this case the mechanism providing interoperability must negotiate the contribution each subsystem can be allowed to make to the sum, and there must be agreed policies for doing so.
(c) aspects in which composition relaxes a requirement, such as availability where the two interoperating subsystems provide alternative sources of service. Although the requirements are being relaxed in this case, there is still a need to negotiate where the benefit from the relaxation falls.
Thus, in summary, there is essentially no additional negotiation to support interoperability in case (a); negotiation is essential in case (b), and just offers the possibility of optimization in case (c). We also need to consider the scope and duration of the agreements reached between the interoperating systems. These may be made at the stage at which the framework for interoperation is established, by standardization or by negotiation between the partners, or they may be more localized in time or space. They may correspond to a specific set of interoperating partners and to a single session or activity, or they may be longer lasting.

VII.1.3.2 On negotiation and other supporting mechanisms

Almost all the main problems in interoperability arise from the establishment and maintenance of shared knowledge; the shared context on which communication is based must be created and extended as necessary to meet changing communication goals. In doing this, we can classify the interoperability issues related to non-functional aspects in terms of:
- the common mechanisms for establishing and managing interoperability: how interoperability targets are set (does negotiation take place at the time the agreement to interoperate is made, at the initialization of each activity, or dynamically as activities progress?), and whether there is a subsidiary piece of negotiation to decide the balance of responsibilities between partners for the particular aspect and instance of interoperability;

- the specific kinds of non-functional aspect being managed: performance, security, and so on.
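For additive aspects such as transit delay, where a shared target must be apportioned between the partners, the negotiation can be sketched as follows. The policy shown (a weighted split of the slack above each party's declared floor) is one hypothetical choice among many that could be agreed; the party names and figures are invented.

```python
def negotiate_delay_budget(target_ms, parties):
    """Split an end-to-end delay target between parties (an additive
    metric such as transit delay).  Each party declares a floor it
    cannot go below and a weight expressing how costly any further
    tightening would be for it."""
    floors = {p: f for p, (f, _) in parties.items()}
    slack = target_ms - sum(floors.values())
    if slack < 0:
        return None  # no agreement is possible at this target
    total_weight = sum(w for _, w in parties.values())
    return {p: f + slack * w / total_weight
            for p, (f, w) in parties.items()}

# Two interoperating systems sharing a 100 ms end-to-end target
allocation = negotiate_delay_budget(100, {"A": (30, 1), "B": (40, 2)})
print(allocation)
# {'A': 40.0, 'B': 60.0}
```

The sketch also shows the failure mode that makes negotiation essential for additive aspects: if the declared floors already exceed the target, no division of responsibilities can satisfy the composite goal.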


In general, this depends on some form of distributed negotiation, and the form of negotiation can vary from simple declaration by the parties to elaborate goal-seeking. This may involve multi-step negotiation [470] or negotiation based on worth or utility functions [296] [315]. One particular form of multi-stage negotiation of particular importance for the federation of enterprises is the agreement and monitored execution of contracts [314] [317]. In these circumstances, the contract represents the agreement that guides a particular aspect of interoperation, and may be parameterized to express quality targets or service levels [26]. More discussion of the use of enterprise models to guide system configuration can be found in [306] and [158]. Related standardization includes [252] and [255]. Another specific area for negotiation is the agreement of data exchange formats [38].

VII.1.4 Aspect-oriented Software Development for dealing with NFA

A difficulty with non-functional aspects is their crosscutting nature: they are hard to modularize. Aspect-Oriented Software Development (AOSD) has proven beneficial in dealing with these requirements, since its main goal is to facilitate the modularization of crosscutting requirements [289] [404] [439] [144] [462]. In this section we summarize some of the efforts that have been made in AOSD to address security, quality of service, service-oriented computing, enterprise application integration and component negotiation.

VII.1.4.1 Security

AOSD has been applied to several security areas; for a practical overview of potential areas see [89]. [144] focuses on application-level access control. [91] has investigated implementing encryption with aspects. Aspects have also been used to introduce security features such as message authenticity in legacy applications [303], and software tampering has been addressed with aspects [166]. Attempts have also been made to construct generic security programming libraries [222].
Integrating control of dataflow using AOSD has been addressed by [293].

VII.1.4.2 Quality of Service

Measuring quality of service is also a crosscutting requirement. [77] discusses how AOSD concepts were used to implement various QoS aspects in a CORBA context. [81] also discusses an example of how performance metrics can easily be instrumented into an existing application using AOSD. [136] presents a domain-specific AOSD approach for handling QoS aspects in a distributed environment. [195] presents a component model, COMQUAD, where non-functional properties, such as performance characteristics, can be expressed separately in profiles for the components.

VII.1.4.3 Component Based Software Development

AOSD approaches are also common in the component-based software development area. An example is JasCo [425], an AOSD approach for component infrastructure.

VII.1.4.4 Web Services - Service Oriented Computing

In this area, [440] describes how AOSD can be applied to modularize the management of Web Services using JasCo.

VII.1.4.5 Enterprise Application Integration

Enterprise Application Integration (EAI) is about how to integrate enterprise systems, often through extensive use of middleware. In this context implementers often have to write adaptors for various
ERP-systems in order to hook into the middleware, for example, when using message brokers or stub-based object middleware. [395] argues that AOSD is a very good way to reduce the complexity of such projects (e.g., a 95% reduction in code size). He even claims that EAI projects are doomed to failure without AOSD. VII.2 Technical Review of Specific Aspects The remainder of this report looks in detail at the various main categories of non-functional aspects. For each of these, we examine the general nature of the aspect, and then discuss in separate subsections the way it has been modelled, the problems arising from composing and unifying specifications, and note any particular examples of tools or applications using the aspect. VII.2.1 Quality of Service The term Quality of Service is used with a variety of different meanings. Some authors include practically all non-functional aspects under quality of service, while others restrict it to the control of timely and accurate communication, particularly in situations where the shared resources supporting the communication are limited and must be shared to meet a variety of requirements. We use it here in this limited sense, deferring discussion of security and reliability until later sections. There is an enormous body of literature on quality of service in communications, surveyed, for example, by [102] and [13] and, for multimedia systems, by [446]. [397], [53] and [328] are just a sample of the regular conferences and workshops concentrating on quality of service. Examples of approaches to quality of service in object-based systems can be found in [73] [74] [213] [468] [467] and [93]. Many research groups have gone further and proposed specific architectures for the management of quality of service in a distributed environment [7] [377] [400] [130] [141] [321]. Several of the proposed architectures are based on the ODP reference model [60] [4] and [9].
Some research has extended consideration from communication to component architectures, such as [23] [401]. [97] concentrates on off-the-shelf components, and [405] on hard performance aspects of them. Further technical details of QoS methods and mechanisms would justify a study in their own right. Representative examples can be found in [452] [257] [258] [220] and [85] for communication networks; [127] for mobile communications; and [178] for multimedia. VII.2.1.1 Modelling QoS A good review of QoS modelling approaches can be found in [7]. There are standardized vocabularies defined in [256] and [236]. Representative approaches to QoS modelling can be found in [41] [187] [320] [393] [423]. [79] structures the modelling requirements for multimedia by using a hierarchy of contracts. Several specific proposals for QoS specification languages have been produced. These are generally adjunct languages for use in combination with an established form of behavioural or functional specification. A number of proposals are reviewed in [269]. Two of the best known are QML [173] [174] [175] and CQML [6] [5]. UML has also been used as a vehicle for QoS modelling [42] [8], and there has been some standardization work within the OMG [358] [357] [356].
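Languages such as QML and CQML treat a QoS contract as a set of dimensions, each with a direction of improvement (delay should decrease, throughput increase). A minimal conformance check in that spirit might look like the following sketch; the dimension names and bounds are invented, and this is not QML syntax:

```python
# Hypothetical QoS contract: each dimension carries a bound and a
# direction. "decreasing" means smaller observed values are better
# (e.g. delay); "increasing" means larger values are better
# (e.g. throughput). Names and numbers are illustrative only.
CONTRACT = {
    "delay_ms": ("decreasing", 50.0),
    "throughput_mbps": ("increasing", 10.0),
}

def conforms(observed, contract=CONTRACT):
    """Return True if every observed dimension meets its bound."""
    for dim, (direction, bound) in contract.items():
        value = observed[dim]
        if direction == "decreasing" and value > bound:
            return False
        if direction == "increasing" and value < bound:
            return False
    return True
```

The direction annotation is what allows two contracts to be compared for "strength" during negotiation, a central idea in the contract-based languages cited above.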
Different authors have applied a wide range of modelling techniques: [389] uses actors; [304] uses SDL; [365], [21] and [366] concentrate on real-time aspects; [410] [411] and [225] concentrate on the specific problems of expressing service level agreements. Particular parts of the specification problem are dealt with in [302], which concentrates on parameterization, and [355], which focuses on the properties of audio and video streams. [368] looks at the problems of manipulating QoS specifications at run-time. Orchestration languages, such as [217], introduce quality of service via constraints or targets. The ODP QoS framework has already been introduced above. The project is not currently active, but its partial results are at [247]. The formal modelling of QoS in ODP, particularly in the computational viewpoint, is presented in detail in [420] [316] [94] [59] [179] [165]. The modelling of QoS management in terms of the manipulation of explicit binding objects in this work is a powerful technique which could be applied to the analysis of interoperability requirements [248]. Most of the existing work on QoS assumes that it is managed within some general-purpose infrastructure. Some examples of QoS-aware middleware are given in [288] [399] [398] [152] [209] [128] [76] [449] [371] [406] [106] [332] [54]. However, newer approaches to integration have been taken: [81] takes an aspect-oriented approach, and [47] and [390] both use an MDD approach. The separation of QoS concerns within an infrastructure is one of the most widely explored uses of policy-based management [313] [441] [87]. It is supported by an active standardization programme within the IETF [229] [230]. However, the incorporation of policies from different authorities leads to problems of policy conflict [82].
The strong association of QoS with communication infrastructure has led to a well-developed body of work on dynamic negotiation of targets, discussed in [204] [208] [210] [129] [72] [370] [386] [381] [454] [296] and many more. However, the work on division of responsibilities is less well developed; in telecommunications infrastructure, arrangements for interoperability are generally negotiated in advance on a bilateral basis. A flexible approach to trading is discussed in [284] and [298]. VII.2.1.2 Using Quality of Service Multimedia applications take a more distributed view [108] [22]. The work in [157] [159] raises the level of abstraction by working in terms of stream types and trading, and [350] introduces the concept of a QoS broker; traders and brokers would help provide scalable solutions to interoperability. Examples of the use of QoS in specific application areas can be found in: [104] for workflow; [191] [453] [305] for multimedia systems; [151] to support teaching, and [436] for .NET applications. There is growing interest in the application of policy management to the QoS of storage subsystems [434] [18]. VII.2.2 Security Nobody should consider entering into the systems aspects of security without reading Security Engineering [36], which gives a unique view of the way security issues and solutions interrelate. A widely recognized work in the area is the Common Criteria (CC), developed by the US National Institute of Standards and Technology, and adopted by ISO [249] [250] [251]. Although CC does
not deal with software architecture aspects of security per se, it is an influential reference for security requirements for IT systems in general. Figure 76 illustrates CC's conceptual model for security.

Figure 76 The Common Criteria Conceptual Model for Security. Stakeholders are people that possess or otherwise place a value on an asset. However, there are threat agents that may attempt to damage, steal or otherwise establish threats to those assets through attacks. Similarly, threats give rise to risks. In an effort to reduce their assets' vulnerability to those threats, stakeholders impose countermeasures to better protect the assets, or reduce their exposure to threats, and thus reduce the risks for those assets. Residual vulnerability will lead to risks. As part of the .NET documentation, Microsoft describes a pragmatic approach to security design [334]. Among a range of useful guidelines, it describes STRIDE, a threat model that includes the following threat elements: a) spoofing, b) tampering, c) repudiation, d) information disclosure, e) denial of service and f) elevation of privilege. Security implies that a certain level of control can be exercised with respect to which actors are allowed to perform operations in the system. In other words, we want to control which subjects are allowed to perform which operations on which objects. Implementing this kind of control function is non-trivial. Managing relationships of the type (subject, operation, object) is called authorization management [392]. This is a complex problem area, as the solution depends heavily on many characteristics, for example: a) the number of subjects compared to the number of objects, b) the usage of the assets (i.e. the objects) and c) the frequency of changes to the authorizations. [19] looks at the architectural aspects of how secure systems are distributed and a trusted base established, and is therefore strongly relevant for interoperability. Architectural aspects of security are also discussed in [396] [466] and [379].
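The (subject, operation, object) relationships described above can be pictured as a plain relation; the following sketch is purely illustrative (the names and permission set are invented):

```python
# Authorization management as an explicit relation of
# (subject, operation, object) triples. Real systems compress this
# relation with roles or groups; it is kept flat here for clarity.
authorizations = {
    ("alice", "read", "report.pdf"),
    ("alice", "write", "report.pdf"),
    ("bob", "read", "report.pdf"),
}

def is_authorized(subject, operation, obj):
    """Authorization check: is the exact triple in the relation?"""
    return (subject, operation, obj) in authorizations
```

The characteristics listed above determine how such a relation is best stored: when subjects vastly outnumber objects and change frequently, a flat triple set becomes unmanageable, which is exactly the motivation for role-based schemes.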
VII.2.2.1 Modelling security Security is an area where considerable use has been made of special logics to capture the semantics and support modelling. Examples where this approach has been used for access control and propagation of permissions can be found in [11] [49] [182] [192]. As in the QoS area, both approaches to integrating the aspect have been used. Aspect-oriented methods are used in [91] [146] [145] [147] [447] and [443], while model-driven techniques are discussed in [15] [58] [43] and [308]. Another approach has been to concentrate on language specification, particularly to express security policies; see [132] [221] [276] [455]. Most models of security make use of some variation on the concept of a domain to represent groups of entities with mutual trust or similar levels of permission, and the concept of a domain is a natural starting point for discussion of interoperability. Examples of this approach are found in [415] and [196]. There are many different kinds of security infrastructure, particularly in the access control area. Unifying infrastructures with different underlying models, such as token-based [139] [228], role-based [183] or certificate-based [189] [428] [464] [17] [185] [188] [384] [62] [363], is challenging and an area where more research is needed. Proposals for unification in this area, such as [353] [63] [61] [450] [37], concentrate on the transport and classification of security information rather than interoperability between models, or avoid the issue by requiring specific lower-level mechanisms [354] [16]. A range of lower-level solutions can be found in [67] [84] [233] [424] [409] [48] [339]. There is a need to identify ways for the dynamic application of policies in a distributed environment [95] [416].
There has recently been particular interest in the area of security for web services. A general review of the problems and directions in this area can be found in [430], and more detailed discussion and analysis in [325] [394] [170] [211] [66] [78] [96] [135] [286] [346] [361] [438] [362]. The industry road map in [333] is particularly significant. Recently, there has been interest in the analysis of security provision as a contest between defender and attacker, by applying game theory [117]. VII.2.3 Trust While the security issues in the previous section focused on the control of access and of the use and flow of information, it was assumed that it was clear what roles an entity should in principle be allowed to play. In loosely coupled systems, the status of the entities involved is not so clear, and trust plays an important part in deciding what the goals of the security mechanisms should be. A good general review of this area can be found in [200]. An introduction to the basic concepts used can be found in [281] [112] [113] [275]. Different types of trust are distinguished in [1] [271] [138]. [203] discusses how trust is established, and [337] introduces the idea of trust contracts.
VII.2.3.1 Modelling Trust Work on trust is supported by the development of models [273], metrics [267] [324] [274] and associated theories [272] [326] [264] [380]. Theories of trust are placed in a model framework in [154] and related to narrative actions in [292]. More focused work has been carried out in relating trust to specific kinds of system. Trust in agent environments is discussed in [364] [463] and [176]. Trust in the formation of virtual organizations is examined in [214], and from a social point of view in [282] and [180]. Specific issues of trust in virtual teams can be found in [259], and in virtual communities in [40] and [39]. Of more immediate practical concern is the establishment of trust in e-commerce, discussed in [260] [283] [407] [270] [337] [148] [341] and [347]. The closely related issue of trust between web services is discussed in [295] [223] [100] [115] and [25]. VII.2.3.2 Technologies and Tools for supporting Trust The first requirement for establishing a trust network is the establishment of a trusted base [431] [20] [224] [12]. On this, trust management systems and procedures must be established [374] [375] [297]. Technologies of specific interest here include the PICS Rating Services and Rating Systems [342] [278] [150] and the KeyNote trust management system [44] [46] [83] [68] [86] [45]. More general principles for decentralized trust are discussed in [69], and environmental guidelines in [329]. General issues of compliance can be found in [71]. Finally, the trust issues in digital signature and certificate systems are covered in [285] and [62] respectively. VII.2.3.3 Using Trust The way these general concepts apply to specific application areas is discussed in [70], for telecommunications in [218], and for a range of medical applications in [418] [33] [27] [29] [31] [35] [32] [34] [28] [30] [2].
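A recurring building block in such trust metrics is the combination of trust values along a recommendation chain. One common, simple idealization (a toy sketch, not the model of any specific system cited above) multiplies values in [0, 1], so that trust can only decay with chain length:

```python
def chain_trust(values):
    """Combine direct trust values (each in [0, 1]) along a
    recommendation chain by multiplication: the derived trust in
    the final party can never exceed the weakest link, and it
    decays as the chain grows."""
    trust = 1.0
    for v in values:
        if not 0.0 <= v <= 1.0:
            raise ValueError("trust values must lie in [0, 1]")
        trust *= v
    return trust
```

For example, if A trusts B at 0.9 and B trusts C at 0.8, A's derived trust in C is 0.72; richer metrics in the cited literature add discounting, consensus between parallel chains, and uncertainty.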
VII.2.4 Enterprise Digital Rights and Policy Management The raison d'être of Enterprise Digital Rights Management (eDRM) is no longer arguable [57][388]. As opposed to Media DRM, which is somewhat older and focuses on the multimedia entertainment industry, eDRM shares the same technical background and technologies in addressing the issue of persistent protection of content and governed content usage. Although most of the underlying technologies and models in the two fields are the same, the two domains are now clearly distinct, as their market characteristics and usage patterns are radically different. In the enterprise sector, recent studies predict, for example, that by 2006, 20% of Global 2000 organizations will use Digital Rights Management technology (META Group, 2004). Moreover, PricewaterhouseCoopers has estimated that businesses lost between $53 billion and $59 billion to intellectual-property theft from July 1, 2000, to June 30, 2001. Some more recent estimates report up to $300 billion. A 2001 FBI Crime Survey states that proprietary information theft caused the greatest financial damage of all security failures. Originally based on the works of Mori [338] [344] and Cox [123] [124] on superdistribution and Stefik [421] on Digital Property Rights Languages (DPRL) at Xerox PARC, this field became more prominent with the advent of the Internet during the 1990s, especially with InterTrust [234] and
ContentGuard (a spin-off from Xerox), and later with a host of other players, among which IBM and Microsoft are the most prominent. These technologies have now matured to a degree where commercial deployments have been undertaken; the industry has finally recognized that the issue is far greater than just the protection of multimedia content. As BIS start breaking the barriers of corporate intranets, and business processes start spanning multiple corporate structures, the issues of persistent content protection, rule-based content access and usage metering are appearing as key requirements. As a result, in order to stress the difference and broaden the scope of this technology, the term "Digital Policy Management" (DPM) was coined to emphasize the strategic dimension of this field. DRM/DPM and trust computing will be key components of the evolution towards next-generation enterprise architectures. Of course, this field overlaps with security issues, but we must distinguish between two security levels: firstly the transport level, and secondly the persistent protection and rule- or usage-based access levels. The first level is now well known and established. It is factored into almost all Internet-based applications. It essentially provides confidentiality, authentication, integrity and non-repudiation during the transport of data among the communicating parties (i.e., while travelling over open networks such as the Internet) or within corporate firewalls. The second level deals with DRM/DPM. Persistent protection addresses the issue of securing the content after it has reached its destination. In other words, protecting the content persistently, including when on persistent storage; it can be seen as the "last mile" of the security stack. It requires that the application accessing this content be DRM/DPM enabled, in order not only to be able to decrypt the content but also to interpret the rules governing its usage.
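The gate that such a DRM/DPM-enabled application places in front of the content can be caricatured as follows; the licence fields and rule vocabulary below are entirely made up for illustration and correspond to no vendor's scheme:

```python
import time

# Hypothetical licence: the content stays encrypted at rest, and the
# DRM-enabled application releases the content key only if the usage
# rules evaluate to true. Key and rule names are invented.
licence = {
    "key": b"\x01" * 16,  # content-encryption key (toy value)
    "rules": {"user": "alice", "expires": 4102444800},  # 2100-01-01 UTC
}

def release_key(lic, user, now=None):
    """Evaluate the licence rules; return the key only on success."""
    now = time.time() if now is None else now
    rules = lic["rules"]
    if user != rules["user"]:
        raise PermissionError("user not licensed for this content")
    if now >= rules["expires"]:
        raise PermissionError("licence expired")
    return lic["key"]
```

The point of the sketch is that decryption is conditional on rule evaluation at the point of use, not on where the bytes happen to be stored, which is what distinguishes persistent protection from transport-level security.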
This is where trust computing appears in the picture, in order to be able to guarantee a chain of trust from the rendering software down to and including the hardware, ensuring that all layers are trustworthy and have not been tampered with. Recent developments in this field include the Trusted Computing Platform Alliance (TCPA), led by Intel [432], Microsoft's Palladium operating-system security initiative (Carroll 2002) [14] and its more recent flavour, the Next Generation Secure Computing Base (NGSCB) [336]. One of the major problems that hampered broader and faster adoption of DRM was the lack of standards and the totally incompatible proprietary solutions that were available (e.g., Microsoft, InterTrust, ContentGuard, IBM, etc.). Interoperability of DRM systems will be the key success factor for broad DRM adoption. Recent developments in this field are extremely encouraging, in particular with respect to standards. ISO has just ratified the MPEG Rights Expression Language, MPEG-REL (ISO/IEC 21000-5:2004) [253] [121]. This is based on XrML (ContentGuard) and was developed within MPEG-21 [235] [114]. Another encouraging standard, the Rights Data Dictionary (RDD) [254] [383], has been ratified by ISO to address the issue of rights interoperability and semantics. Other initiatives include ODRL and OMA. Such initiatives are instrumental in this field and represent a prerequisite for broad adoption and for further work in our research. More recently, the Coral group [122] was formed, gathering industry leaders in the entertainment and consumer electronics sectors, based on work done by InterTrust [88] [290] [234], to promote interoperability between DRM technologies used in the consumer media market. Coral's goal is to create a common technology framework for content, device and service providers, regardless of the DRM technologies they use. This open technology framework will enable a simple and consistent digital entertainment experience for consumers.

VII.2.4.1 Digital Policy Management: a strategic management issue As briefly mentioned, policy management is the key strategic issue for the enterprise. While transport-level security is now commonplace, both within corporate intranets and over the Internet, nothing is in place to address the issues of persistent protection of content and the management of policies and rights governing content use independently of where it resides. Companies need to be able to define and control who, how, where, when, what and under which conditions information can be accessed and used, at all times. This applies, for example, to financial statements and reports, design documents, technical specifications, proposals, contracts, legal documents, emails, CRM, etc. It also includes information provided by databases and application servers, which is dynamically generated and does not exist statically, but is the result of specific queries that are themselves bound to usage rights and policies. For example, in budget forecasting and simulation we also need to apply usage rules to the resulting reports generated by simulation tools. DRM has now become a mainstream technology addressing these issues, which are shaping the future of corporate content management. Information is a corporate asset and is therefore bound to corporate policies. Today, however, no technical means are in place to enforce this by providing upfront prevention of disclosure or misuse (accidental or malicious) of the content once it has reached a laptop or a removable medium such as a CD. Global corporate information asset management is among the next major challenges facing the enterprise and its Chief Security / Information / Compliance Officers. Their role, combined with these technologies under the shadow of a stricter regulatory environment, will be instrumental in defining and managing the policies and rules persistently governing the use of and access to corporate data and processes.
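The who/what/when/where/how dimensions listed above lend themselves naturally to a structured rights-expression record, loosely in the spirit of languages such as XrML or ODRL; the field names below are invented for illustration and are not taken from any of those standards:

```python
# An invented policy record covering the dimensions discussed in the
# text. Real rights-expression languages (XrML, ODRL, MPEG-REL) define
# normative vocabularies and XML serializations for the same ideas.
policy = {
    "who": ["role:finance-analyst"],       # permitted subjects
    "what": ["view", "print"],             # permitted actions
    "which": "budget-forecast.xls",        # governed asset
    "when": {"not_after": "2005-12-31"},   # temporal constraint
    "where": ["intranet"],                 # location constraint
    "condition": {"watermark": True},      # obligations on use
}

def permits(policy, action):
    """Check a single dimension: is the action in the permitted set?"""
    return action in policy["what"]
```

Capturing policies in such a machine-readable form is the precondition for the upfront, technical enforcement that the text argues is missing today.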
The problem here is that there are basically three levels to be considered: the legal environment, specific regulatory frameworks (often specific to a particular sector), and internal corporate policies, none of which are instrumented today. Their specifications often reside in dusty books, and their implementation is often left to rule of thumb and experience. Consequently, this field is sorely lacking in models to capture, specify, express, represent and manage these policies prior to any technical DRM project and deployment in a corporate environment. As a result, Digital Policy Management is of a strategic nature and must be initiated and driven by corporate managers, not IT/IS people. It is exactly at this point that Enterprise DRM meets Enterprise DPM, requiring managers to address the issue in an interdisciplinary space between technology and management science. Modelling tools addressing this space in a durable way (i.e., not based on proprietary approaches and tools) are simply non-existent today. Industry and research are currently highly focused on interoperability as a key success factor for DRM/DPM. Many technologies and tools are now available, but there is a critical lack of interoperability among them. Examples include Microsoft, with several incompatible initiatives in the media and enterprise sectors, such as Rights Management Services (RMS) [335]. This is probably the most advanced system available now to address this issue in a scalable way. Usage scenarios, case studies and global deployments are appearing (pharmaceutical, telecoms, financial and banking sectors, etc.). Project INDICARE (The INformed DIalogue about Consumer Acceptability of DRM Solutions in Europe) [376] is an EU-funded project under the eContent programme aiming at raising awareness, helping to reconcile the heterogeneous interests of multiple players, and supporting the emergence of a common European position with regard to consumer and user issues concerning DRM solutions.
A variety of further views on DRM can be found in [56]. This is also an area where there are many social issues, as evidenced by the discussion in [160] [391] [120] [143] [168] and [177].
VII.2.5 Performance The main thrust of work on performance is the establishment of suitable metrics and of confidence that system designs meet targets expressed in terms of these metrics. Once a design has been produced, performance problems can seldom be fixed by adding functions (although caches are a counter-example); generally the solution lies in redesign. It is therefore particularly important here that problems are detected early in the design cycle. This is where the main value of model-based performance prediction, which starts at the level of architectural models, lies. Performance analysis can serve several purposes. In the first place it is often used for the optimization of, for example, processes or systems, by quantifying the effect of alternative design choices. Similarly, it can be used to obtain measures to support impact-of-change analysis: what is the quantitative effect of changes in a design? A third application of quantitative analysis is capacity planning: for example, how many people should fulfil a certain role to finish the processes on time, or how should the infrastructure be dimensioned (processing, storage and network capacity) given an expected workload? VII.2.5.1 Views on Performance Architectural models can be structured in several ways, resulting in different views on these models. These views are aimed at different stakeholders and their concerns [227]. Also in the context of the performance of a system, a number of views can be discerned, each with its own performance measures (see also [263] and [231]):
- User/customer view (stakeholders: customer; user of an application or system): the response time is the time between issuing a request and receiving the result; for example, the time between the moment that a customer arrives at a counter and the moment of completion of the service, or the time between sending a letter and receiving an answer. Also in the supporting IT applications the response time plays an important role; a well-known example is the (mean) time between a database query and the presentation of its results.
- Process view (stakeholders: process owner; operational manager): the completion time is the time required to complete one instance of a process (possibly involving multiple customers, orders, products etc., as opposed to the response time, which is defined as the time to complete one request).
- Product view (stakeholders: product manager; operational manager): the processing time is the amount of time that actual work is performed on the realization of a certain product or result, that is, the response time without waiting times. The processing time can be orders of magnitude lower than the response time. In a computer system, an example of the processing time is the actual time that the CPU is busy.
- System view (stakeholders: system owner/manager): the throughput is the number of transactions or requests that a system completes per time unit (for example, the average number of customers that is served per hour).
- Resource view (stakeholders: resource manager; capacity planner): the utilization is the percentage of the operational time that a resource is busy. On the one hand, the utilization is a measure of the effectiveness with which a resource is used. On the other hand, a high utilization can be an indication that the resource is a potential bottleneck, and that increasing that resource's capacity (or adding an extra resource) can lead to a relatively high performance improvement.
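The interplay between these performance measures can be made concrete with the textbook M/M/1 queueing model (an illustrative calculation, not tied to any tool cited here): with arrival rate λ and service rate μ (requests per second, λ < μ), utilization is λ/μ, throughput is λ, processing time is 1/μ, and mean response time is 1/(μ − λ).

```python
def mm1(arrival_rate, service_rate):
    """Steady-state measures of an M/M/1 queue, one per view."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    utilization = arrival_rate / service_rate            # resource view
    response_time = 1.0 / (service_rate - arrival_rate)  # user view
    processing_time = 1.0 / service_rate                 # product view
    throughput = arrival_rate                            # system view
    return utilization, response_time, processing_time, throughput
```

Raising the arrival rate from 8 to 9 requests/s against a service rate of 10 lifts utilization from 0.8 to 0.9 but doubles the mean response time from 0.5 s to 1 s, which illustrates the conflict between the resource view and the user view described above.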
Figure 77 Different views of performance. Figure 77 summarises the different views on performance. Performance measures belonging to the different views are interrelated, and may be in conflict when trying to optimize the performance of a system. For example, a higher throughput leads to a higher resource utilization, which may be favourable from a resource manager's point of view; however, this generally leads to an increase in the response times, which is unfavourable from a user's point of view. Therefore, when aiming to optimize the performance of a system, it is important to have a clear picture of the appropriate point of view from which performance measures should be optimized. VII.2.5.2 Modelling Performance Although several software tools exist to model (enterprise) architectures, hardly any attention is paid to the analysis of their quantitative aspects. Enterprise architecture covers a wide range of aspects, from the technical infrastructure layer (for example, computer hardware and networks), through software applications running on top of the infrastructure, to business processes supported by these applications. Within each of these layers, quantitative analysis techniques can be applied, which often require detailed models as input. In this section, we will only be able to give a global impression of analysis approaches for each of these layers. Enterprise architecture is specifically concerned with how the different aspects and layers interoperate. Also from a quantitative perspective the layers are interrelated: higher layers impose a workload on lower layers, while the performance characteristics of the lower layers directly influence the performance of the higher layers. However, techniques that cover quantitative analysis throughout this whole stack hardly exist, although the structuring approaches in [459] and [307] are relevant.
In [231] a first step in this direction is made by presenting an approach to propagate quantitative input parameters and calculated performance measures through a (layered, service-oriented) architectural model. It complements existing detailed performance analysis techniques, which can be plugged in to provide the performance results for the nodes in the models. Compositionality of models plays a central role in architecture. In the context of performance analysis of architectures, compositionality of analysis results may also be a useful property. This means that the performance of a system as a whole can be expressed in terms of the performance of
its components. Stochastic extensions of process algebras [212] are often advocated as a tool for compositional performance analysis. However, process algebra-based approaches to performance analysis are fairly computation-intensive, because they still suffer from a state space explosion. VII.2.5.2.1 Infrastructure Layer Traditionally, approaches to performance evaluation of computer systems [261] and communication systems [207] have a strong focus on the infrastructure domain. Queuing models, for example, describe the characteristics of the (hardware) resources in a system, while the workload imposed by the applications is captured by an abstract stochastic arrival process. In this area, a broad range of performance analysis techniques has been proposed. There are very efficient static techniques that offer relatively inaccurate first estimates or bounds for the performance. Analytical solutions of queuing models are more accurate but also more computation-intensive, though they still impose certain restrictions on the models. With detailed quantitative simulations, any model can be analyzed with arbitrary accuracy, although this presumes that accurate input parameters are available. Performance can be expressed in many ways, including by using process algebras [90], Petri nets [465], abstract data types [111] and aspects [81]. Also, a lot of literature exists about performance studies of specific hardware configurations, sometimes extended to the system software and middleware level. Most of these approaches have in common that they are based on detailed models and require detailed input data. VII.2.5.2.2 Application Layer Performance engineering of software applications [408] is a much newer discipline than the traditional techniques described above. A number of papers consider the performance of software architectures at a global level.
Bosch and Grahn [75] present some first observations about the performance characteristics of a number of often-occurring architectural styles. Performance issues in the context of the SAAM method [280] for scenario-based analysis are considered in [311]. Another direction of research addresses approaches that derive queuing models from a software architecture described in an architecture description language (ADL). The method described by Spitznagel and Garlan [403] is restricted to a number of popular architectural styles (for example, the distributed message passing style but not the pipe and filter style). Other similar approaches are described in [10] and [461]. Although UML is the leading language for software modelling, it is not particularly suitable to express performance aspects. However, there have been a number of attempts to express performance requirements as decorations on UML [456] [279] [435] [457] [137]. Also, some specific UML profiles have been developed to deal with schedulability, performance and time specification. An example of such a profile is specified in [359], covering real-time systems modeling and predictability.

VII.2.5.2.3 Business Layer

Several business process modelling tools provide some support for quantitative analysis through discrete-event simulation. Also, general-purpose simulation tools such as Arena or ExSpect (based on high-level Petri nets) are often used for this purpose. A drawback of simulation is that it requires detailed input data, and for inexperienced users it may be difficult to use and to correctly interpret the results. BizzDesigner (http://www.bizzdesign.com/) offers, in addition to simulation, a number

of analytical methods. These include completion time and critical path analysis of business processes [262] and queuing model analysis [277]. Petri nets (and several of their variations) are fairly popular in business process modelling, either to directly model processes or as an underlying formalism for other languages. They offer possibilities for performance analysis based on simulation, as described above, but they also allow for analytical solutions (which are, however, fairly computation-intensive). Business process analysis with stochastic Petri nets is the subject of, among others, [414].

VII.2.6 Reliability and Availability

Some authors consider reliability and availability as aspects of security or quality of service, but they have distinctive properties that lead us to distinguish them from these aspects. The general area of reliability is considered in terms of contracts in [294] and for safety critical systems in [365]. The particular requirements for web services are supported by [451]. The use of UML for expressing fault tolerance is covered in [357] [356]. Reliability is one of the wide range of areas to which aspect-orientation has been applied [81] [163]. The related issue of replication, and architectures for it, is described in [155]. A survey of models for distributed transactions can be found in [65]. Software quality and the associated metrics are closely related to the assessment of reliability. Some of the frameworks and metrics for software quality can be found in [226] [237] [243] [244] [245] [246] [240] [169] [167] [287] [367] [369] [445].

VII.2.7 Business Value

Value chain analysis gained popularity through the writings of Porter [373] and has since evolved to include a wide variety of models (for example, Barnes [51]).
Although the original purpose of a value chain was to identify the fundamental value-creating processes involved in producing a product or service within a firm, the concept has since been broadened and is often used to describe an entire industry. An industry-level value chain serves as a model of the industry whereby processes are considered independent of the firms that may or may not engage in them. Despite these strengths, critics (for example, [433]) note that the chain metaphor masks the importance of horizontal aspects of a firm's processes, particularly its relationships with other firms. Such criticisms have led to the development of alternative conceptualizations such as stakeholder value chains, business webs [433], value nets [299] and value constellations [349]. The extension of the value chain concept to that of a value net or web coincides with the general trend towards greater attention to network concepts in the strategic management literature [193]. By definition, plural organizations with various roles and functions create an organizational network by pursuing a collective set of objectives [133]. Inter-organizational networks, relations between firms, come in many forms, such as business groups [198], cooperative and governance networks [460], constellations [265], network enterprises [105], and strategic networks [193]. These various forms can be differentiated based on the patterns of interaction in exchanges among the members, as well as the flows of resources between them [265]. A more dynamic approach specifically directed towards the evolution of networks, seen as complex systems, is discussed by Monge and Contractor [330].

There is a shift towards providing information, products and services by networks consisting of collaborating sub-units of organizations and/or cooperating organizations (for example, [419]). An organizing concept currently gaining prominence is that of service orientation. The idea of systems (applications or components) delivering services to other systems and their users is really starting to catch on in software engineering. In the service economy, enterprises no longer convert raw materials into finished goods, but deliver services to their customers by combining and adding value to bought-in services. As a consequence, the borders of organizations are becoming more transparent and (information) services can be offered by sub-units of organizations, by single organizations or by collaborations between companies, through ICT-enabled value networks. Therefore, management and marketing literature is increasingly focusing on the value proposition of service innovation, design and management (for example, see [172] or [190]). Of course, services and the accompanying open, XML-based standards are heralded for delivering true interoperability at the information technology level [422]. However, service orientation also promotes interoperability at higher semantic levels by minimizing the requirements for shared understanding: a service description and a protocol of collaboration and negotiation are the only requirements for shared understanding between a service provider and a service user. Therefore, interoperability can be regarded as one of the most important enablers of value networks (see [312]). Moreover, by focusing on service interoperability, many opportunities for re-use of functionality will arise, resulting in more efficient use of existing resources. In addition, outsourcing and competition between service providers will also result in a reduction of costs.
From a more macroscopic point of view, costs will be reduced as a result of more efficient distribution of services in value networks. Thus, organizations mix their assets in different proportions and introduce new value propositions and business models [64], [322], [340]. Ballon, Helmus and Pas [80] list some of the variations found in models with respect to the focus or range of customer group, the function or goal in the value chain, the description of the roles of the actors involved in value creation, and the type of services they use. Maitland, Van de Kar, Wehn de Montalvo and Bouwman [345] base their distinction on the types of services being offered and classify the models according to their value web complexity and level of intermediation. For an extensive survey on business models, value webs, design and metrics systems the reader can refer to [92].

VII.2.7.1 Conceptual modelling of complex value systems

There are a number of modelling approaches that try to capture the value aspects of complex value systems (for example, [109]). Starting from the idea of value constellations, which assume that a number of actors (even the end-consumer may be involved) produce valuable objects, Gordijn [194] proposes the model-based, multi-viewpoint and economic-value-aware "e3-value" ontology (see Figure 78 for an example model). Based on this ontology, [469] defines a framework for goal-oriented requirements engineering for e-services and shows how to compose new business models from a number of value patterns they identify.

Figure 78 An e3-value model.

The second approach we briefly present here is part of the language developed within the ArchiMate project [268]. Although primarily intended for the modelling of complex service-oriented enterprise architectures, the language also defines "higher-level business concepts", providing a way to link the operational side of an organization to its business goals, the information that is processed, and to the products and services that an organization offers to its customers. Figure 79 shows a fragment of the metamodel of the language, showing how the value concept is related to the other concepts. The value of a product or service is defined as that which makes some party appreciate it. Value can go two ways: it may apply to what a party gets by selling or making available some product or service, or to what a party gets by buying or obtaining access to it. Value is often expressed in terms of money, but it has long been recognized that non-monetary value is also essential to business, for example, practical/functional value (including the right to use a service), and the value of information or knowledge. Though value can hold internally for some system or organizational unit, it is most typically applied to external appreciation of goods, services, information, knowledge, or money, normally as part of some sort of customer-provider relationship.
Figure 79 Part of the ArchiMate metamodel, relating the concepts Value, Product, Contract, Organisational service and Business process/function.

VII.2.7.2 Complex value systems and customer value

For complex value systems, the generation and delivery of value to the users becomes a mutual interest. Based on their internal resources and capabilities, the participating organizations adjust their functional contribution to the development of customer value. Their operation in this framework is based on the exchange of information, products, services and financial assets. Hence, organizations become dependent on each other strategically, functionally and financially. Continuous and repetitive interactions lead to the emergence of relationships between firms, which might become institutionalized through legal agreements and contracts. The interrelationships between the actors can exist at various levels, for example, communications, information flows and revenue flows [345]. Complex value systems and value webs have to strive to support customer processes to the maximum possible extent when thinking of improving customer value (based on [199]). On the other hand, each service is associated with costs. Each company in the web will choose which value (and ultimately which part of the end-user value) it will offer, or in other words which parts of the customer process it will support. So, the values and costs are formed by various organizations performing roles that contribute to the value being offered to the customer through the e-service. The value web model appropriates various concepts of economic and information systems theory. Markets, hierarchies, networks and information technology are woven into an intricate web of relations to make this possible (Selz [402]). According to Selz, the main characteristics of the model are a value web broker that acts as central coordinator, an endeavour to gain proximity to the final consumer, and an integration of upstream activities. This integration is coordinated either with market platforms or with hierarchical mechanisms.
Complementary to this approach, Bouwman and Van de Wijngaert [99] argue that the customer value as intended and delivered can only be tested by empirical research with consumers as the unit of analysis. They opt for policy capturing as a research method that can be useful to assess the potential success of a service that is not yet on the market. For services that are primarily directed to end-users (for example, mobile services) an important concept to be considered is also the consumer value. In marketing literature (for example, [216] and [219]), consumer value is often stated in terms of a value equation that depends on the sacrifices (for example, expenses) a consumer has to make in order to consume the product and the receipts (for example, benefits) they experience from consuming the product. Gordijn [194] argues that in an e-services setting "Holbrook's consumer value framework can be practically used to identify the valuable aspects of a product or service from the viewpoint of an end consumer". In the area of mobile services, the customer value is to a large extent stated in terms of "any place, any time". However, Crisler, Anneroth, Aftelak and Pulil [101] assume that research into user behaviour, across classes of applications (for example, context aware), broad user groups and in specific application domains may help to define applications that offer value to end-users. Camponovo and Pigneur [126] describe the value each actor within the mobile business domain is going to deliver, which ultimately will add up to the final customer value.

VII.2.7.3 Complex value systems and organizational arrangements

Of more interest are relationships between what we might call structural participants in the value networks.
The balance of theory suggests that there are many motivations for firms to assume such structural roles, ranging from simple opportunism to requirements for new technological and market knowledge, but that the solidity of the relationship will depend largely upon social and

institutional antecedents. Depending upon which actor(s) contribute key assets in the creation of value and the operating risks involved [299], a different configuration of actors is likely to result, some taking structural, integrative roles in the alliance and others taking supporting, facilitating roles. Structural partners make up the core of the network, while contributing and supporting partners are loosely linked to the network. As firms create products and services and engage customers in value exchanges, partners play an important role and require careful management [186]. In the literature, little attention is paid to what kind of resources should be shared in value webs and how they are organized. Although there are several resource typologies (tangible-intangible resources [197]; physical, human and organizational capital resources [50]; financial, technological, physical and managerial [140]; property-based and knowledge-based [343]), these typologies are rather general. In our view, access to critical resources is the key element in deciding which actors to incorporate. Critical resources for value webs that use the Internet are: access to the Internet and/or mobile infrastructure; to content; to content developers, aggregators and hosting providers; to software and application platforms; to customers, customer data, billing, customer support and management; and, depending on the type of service, to providers of specific technology-related services, for instance mobile, location or positioning applications. Some of the resources may be found within a single organization, whereas for others more than one organization may be needed. [125] highlights the increasing importance that the organizations in the mobile business market attach to building partnerships. Participants in the mobile business markets need to work together in a large number of areas.
Even separate mobile network operators, who are natural competitors, resort to sharing their network infrastructures due to a mutual interest in speeding up investments and roll-out [107]. Members of value webs cooperate in the development of enabling technologies, the integration of corporate information systems and the development of middleware solutions, open platforms and standards [125].

VII.2.7.4 Complex value systems and financial arrangements

An important question is how investments are arranged within complex value networks. Organizations that are connected through intended relationships and interdependencies consider risk sharing, solving common problems, and acquiring access to complementary knowledge to be major motivators for collective investments. Inter-organizational investments require explicit articulation and collective agreement on the terms of investment and timing [133] [339]. The share of each participant and the corresponding partnership ratio must be defined. Three topics deserve attention when discussing financial arrangements with regard to third-generation (3G) business models: investment decisions, revenue models and pricing. With regard to investment decisions, we advocate that attention should be paid to investment portfolios taking the life cycle of a service into account. From an investment appraisal perspective, a business model can be expressed in terms of a portfolio comprising particular rewards at the price of threatening risks [382]. Existing analyses are mainly directed to revenue models or pricing schemes. Olla and Patel [360], for instance, present an overview of the revenue models that are used by what they describe as portal-type actors within a value web for mobile services. Their overview is more or less a specification of more general revenue models for the mobile (portal) domain. Revenue models are based on pricing schemes. Pricing is a difficult topic within the mobile-services domain.
Although pricing theory is extensive, pricing of innovative services is under-researched and problematic [266].

VII.2.7.5 Complex value systems, performance and metrics

Vesalainen [444] has developed a measurement instrument for measuring the (economic) performance and impact of virtual or networked organizations, starting from the central organization (the point of gravity in a network, the organization that holds control over access to the customer and has most roles combined within its own organization). Vesalainen's approach offers a number of interesting indicators. He distinguishes between structural and social links (organizational integration) on the one hand and commercial exchange and strategic integration (business integration) on the other, and proposes a number of measurable concepts. For an extensive survey of e-metrics systems the reader can refer to Bouwman and van den Ham [98].

VII.2.7.6 Other approaches to business value

A further survey of current writings on the business value of web services can be found in [413]. Other material can be found in [24] [52] [103] [116] [142] [161] [162] [184] [205] [215] [327] [378] [387] [412] [427] [429] [301] [458]. This review of the literature illustrates a strong perception in the business community that web services are seen as the basis for interoperability. There has recently been a growing level of interest in the creation of an electronic economy for resources, providing self-regulating resource management in autonomous computing systems and grids [348] [417] [153]. This may in future converge with commercial assessment of the business value of services.

VII.3 Current state of activities

VII.3.1 Projects

Projects of particular relevance to this report are described in this section. They are ordered according to the relevant sub-sections in the previous section.

VII.3.1.1 PILARCOS - University of Helsinki - Negotiation

PILARCOS stands for the production and integration of large component systems.
The ODCE group at the University of Helsinki works on the development of B2B middleware that provides concepts and facilities for applications to form dynamic collaborations. The work started with the development of an ODP/OMG-style service trading facility and participation in the standards development. The Pilarcos project produced an enhanced trading service where the roles of a business network model are populated by offers of interoperable services. The collaboration establishment phase also requires the support of type repositories and business network model repositories with simple ontological properties. The mechanisms developed are suited to capturing NFA interoperability aspects together with functional and process-aware features. See http://www.cs.helsinki.fi/research/odce/

VII.3.1.2 Web-Pilarcos - University of Helsinki - Contract Management

The web-Pilarcos project enhances inter-enterprise collaboration support further during the operational phase. The focus of this project has been on collaboration contract management during the operation of the collaboration. The operational-time environment incorporates a monitoring system that triggers coordination events when functional or non-functional requirements on the behaviour of the collaboration participants are not met. See technical reports for 2004 at http://www.cs.helsinki.fi/research/odce/

VII.3.1.3 Design Support Environments for Distributed Systems - University of Kent - Quality of service

This three-year project aimed to extend facilities for the design of multimedia distributed systems, to ensure that they can meet the needs of complex systems, which will include the use of stream communication, multicasting and Quality of Service (QoS) constraints. The work augments the design environment with descriptions in sufficiently precise notations to enable assessments of designs to be made based on fitness for purpose, performance and functionality. See http://www.cs.kent.ac.uk/projects/dse4ds/

VII.3.1.4 Deriving Authority from Security Policy - University of Kent - Security

In recent years, there has been a great deal of interest in the research community in the development of various forms of policy-based management. The common theme in this work is the expression of the required behaviour as a set of rules or policies in as abstract a form as possible, and in such a way that dynamic changes to the policies can be made without disrupting the running of the infrastructure. Suitable tools are used to translate from the policies to the low-level constraints and decisions that are needed within the infrastructure to put them into effect. In providing security, there is a need to consider both network and middleware control mechanisms together, in order to take account of the interaction of role- or identity-based measures with countermeasures to denial-of-service attacks. Denial of service needs to be countered as early in the communication process as possible, and with the minimum cost in resource terms. Authentication, on the other hand, needs to be substantially end-to-end in scope. Currently, these are generally seen as supported by independent mechanisms, and the firewall configuration is generally quite static because of the costs of configuration design and deployment.
A policy-based approach allows configuration management to be more automated, and thus would allow shorter-term knowledge of service use derived from authentication to be exploited in a more agile, and so more effective, firewall configuration process. See http://www.cs.kent.ac.uk/projects/policy/

VII.3.1.5 TUBE (trust based on evidence) - University of Helsinki - Trust

The TUBE project opens a series of projects on trust management [448]. The project aims to define a trust management architecture that addresses application-level needs; the architecture addresses the trust concept, expressing and managing trust information, and system management facilities that apply trust information. The architecture especially focuses on detecting misbehaviour and contractual breaches in virtual enterprises. Later in the project series, middleware-level services for counteractions will be demonstrated.

VII.3.1.6 Archimate - Telematica Instituut - Performance

The ArchiMate project is a Dutch research initiative that aims to provide concepts and techniques to support enterprise architects in the visualization, communication and analysis of integrated architectures [268]. The core results of the project are:
- A language for integrated modelling of enterprise architectures. One of the specific aims of the language is to offer a way to integrate detailed design models, specified in modelling languages specific to a certain domain (for example, business process modelling languages or application design languages such as UML).
- Mechanisms to create (stakeholder-specific) views and visualizations of enterprise architecture models.
- Techniques for (functional and quantitative) analysis of enterprise architecture models.

The quantitative analysis approach in particular is relevant in the context of non-functional aspects. The ArchiMate approach to performance (and cost) analysis of enterprise architecture models ([231] and [232]) is based on the composition of performance results and the propagation of quantitative properties through a model.
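As a rough sketch of this propagation idea (not ArchiMate's actual algorithm), the following example pushes workloads top-down through a layered service model and then propagates response times bottom-up. The layer structure, the per-layer demands and the service times are invented for illustration.

```python
# Hypothetical layered model: each layer's service is used by the layer above it.
# 'demand' is the number of lower-layer requests generated per incoming request;
# 'service_time' is the time the layer itself adds per request (seconds).
layers = [
    {"name": "business process", "demand": 2.0, "service_time": 0.05},
    {"name": "application service", "demand": 3.0, "service_time": 0.01},
    {"name": "infrastructure service", "demand": 1.0, "service_time": 0.002},
]

def propagate(top_workload, layers):
    """Two-phase analysis: workloads down, performance measures up."""
    # Phase 1: propagate workloads (requests/s) from top to bottom.
    workload = top_workload
    for layer in layers:
        layer["workload"] = workload
        workload *= layer["demand"]

    # Phase 2: propagate response times from bottom to top.
    below = 0.0  # response time contributed by the layers underneath
    for layer in reversed(layers):
        layer["response_time"] = layer["service_time"] + layer["demand"] * below
        below = layer["response_time"]
    return layers

for layer in propagate(10.0, layers):
    print(f'{layer["name"]}: workload={layer["workload"]:.0f}/s, '
          f'response time={layer["response_time"] * 1000:.1f} ms')
```

In this sketch the calculated measures (workloads, response times) of one phase serve as input parameters to the next, mirroring the compositional analysis described in the text; a real analysis would plug in a detailed technique (for example, the queuing analysis above) at each node instead of a fixed service time.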
[Figure: layers from customers, through business services and processes, application services and applications, to infrastructural services and technical infrastructure; workloads propagate downwards, performance measures propagate vertically and horizontally.]
Figure 80 A layered performance model in ArchiMate.

In layered, service-based architectures, we propose a technique for "vertical" performance analysis that proceeds in two phases (see Figure 80). First, the workloads imposed by higher layers (for example, customers) are propagated to the lower layers of the architecture. Subsequently, performance measures (such as resource utilizations and response times, but also costs) propagate from the lower to the higher layers. This approach provides a global analysis framework; existing detailed analysis techniques (for example, queuing analysis or simulation) can be "plugged in". Also, performance measures derived in the "bottom-up" analysis phase can be used as input for "horizontal" analysis of, for example, completion times of business processes. Quantitative attributes can be assigned to the different architectural elements. Three types of quantities can be distinguished: (1) input parameters; (2) performance requirements; (3) calculated performance measures. Based on the input parameters, analysis yields the calculated performance measures. These results can be checked against the performance requirements (or constraints) that have been set. Note that a strict separation between these categories of quantities cannot always be made: in certain cases, the performance measures that are calculated in one analysis phase may be used as input parameters to a next analysis phase, resulting in compositional performance analysis. See http://archimate.telin.nl

VII.3.1.7 PERMABASE - University of Kent - Performance

The PERformance Modelling for Atm Based Applications and SErvices (PERMABASE) project is concerned with bringing the advantages of performance modelling into the realm of the distributed system designer. Performance model generation is of primary concern within such a project; however, the majority of system designers are not performance experts. The system hardware and software designers should be primarily concerned with the process of designing the system, not with spending their time and energy on the generation of a performance model. The PERMABASE approach is to automatically generate performance models directly from the system design model created as part of the normal design process. See http://www.cs.kent.ac.uk/projects/permabase/index.html

VII.3.2 Standards

The following sections list key standards relating to each of the aspects considered. Many are concerned with expressing parts of the models needed to negotiate interoperability, but were not drafted with this aim in mind. The selection of these standards is based on their having been identified as supporting specific aspects of interoperation; there is no claim that the list is exhaustive in general terms.

VII.3.2.1 Quality of service standards

- OMG UML Profile for QoS and Fault Tolerance
- OMG Interworking between CORBA and TMN Systems
- OMG Real-Time CORBA (Dynamic Scheduling)
- OMG Real-Time CORBA (Static Scheduling)
- OMG Domain Specifications - Telecoms - Audio / Visual Streams
- W3C System and Environment Framework

VII.3.2.2 Security standards

- OMG Authorization Token Layer Acquisition Service (ATLAS)
- OMG Common Secure Interoperability (CSIv2): addresses the requirements of CORBA security for interoperable authentication, delegation, and privileges.
- OMG CORBA Services - Security Service
- OMG Domain Services - Resource Access Decision Facility
- OMG Domain Specifications - Healthcare - Person Identification Service (PIDS)
- OMG Domain Specifications - Security - Public Key Infrastructure (PKI)
- OMG Domain Specifications - Healthcare - Resource Access Decision (RAD)
- OMG Domain Specifications - Telecoms - Telecom Service and Access Subscription (TSAS)
- OMG Domain Specifications - Transportation - Surveillance User Interface
- IETF ID PKI Attribute Certificate Policy extension
- IETF RFC 3546 Transport Layer Security (TLS) Extensions
- IETF RFC 2459 Internet X.509 Public Key Infrastructure Certificate and CRL Profile
- IETF RFC 2743 Generic Security Service Application Program Interface Version 2, Update 1
- X.501-93 Information technology - Open Systems Interconnection - The Directory: Models
- ISO/IEC 15408-1:1999 Information technology - Security techniques - Evaluation criteria for IT security - Part 1: Introduction and general model (JTC1/SC27)
- ISO/IEC 15408-2:1999 Information technology - Security techniques - Evaluation criteria for IT security - Part 2: Security functional requirements (JTC1/SC27)
- ISO/IEC 15408-3:1999 Information technology - Security techniques - Evaluation criteria for IT security - Part 3: Security assurance requirements
- ISO/IEC TR 15446 Information technology - Security techniques - Guide for the production of Protection Profiles and Security Targets
- W3C XML Encryption Syntax and Processing

- W3C Decryption Transform for XML Signature
- W3C XML-Signature XPath Filter 2.0
- W3C Exclusive XML Canonicalization Version 1.0
- W3C XML-Signature Syntax and Processing
- W3C The Platform for Privacy Preferences 1.1 (P3P1.1) Specification

VII.3.2.3 Trust standards

- W3C PICS 1.1 Rating Services and Rating Systems and Their Machine Readable Descriptions

VII.3.2.4 Digital Rights Management standards

- W3C PICS 1.1 Rating Services and Rating Systems and Their Machine Readable Descriptions

VII.3.2.5 Performance standards

- OMG UML Profile for Schedulability, Performance and Time

VII.3.2.6 Reliability and Availability standards

- OMG CORBA Fault Tolerance
- W3C QA Framework: Specification Guidelines

VII.3.2.7 Business value standards

- OMG Reusable Asset Specification (RAS)

VII.3.2.8 Related architectural standards

- ISO/IEC 10746-1:1998 Information technology - Open Distributed Processing - Reference model: Overview
- ISO/IEC 10746-2:1996 Information technology - Open Distributed Processing - Reference model: Foundations
- ISO/IEC 10746-3:1996 Information technology - Open Distributed Processing - Reference Model: Architecture
- ISO/IEC 10746-4:1998 Information technology - Open Distributed Processing - Reference Model: Architectural semantics
- OMG Domain Specifications - Electronic Commerce - Negotiation Facility
- ISO/IEC 15944-1:2002 Information technology - Business agreement semantic descriptive techniques - Part 1: Operational aspects of Open-EDI for implementation

VII.4 Issues

VII.4.1 Gap analysis

The general picture that emerges from the review above is of a considerable amount of activity, but with different levels of maturity and completeness for the various aspects, and work to be done in providing a stronger integrating framework. This section identifies the areas where work is needed to unify and complete the picture.
The most challenging requirement is for a unifying reference model expressing the way interoperability depends on the existence of shared abstract models, and a common framework for refining these models during the establishment of a shared environment for interoperation. What is needed is a sufficiently general framework for capturing the common features without overly constraining specific cases.

Within this framework, there is a need for a common core of model information exchange and negotiation mechanisms. Common mechanisms of this kind exist for specific aspects (see [61] [37], for example), but these are semantically weak and not sufficiently general. Creating a suitable generic core would require a joint effort between ontology and infrastructure experts.

Each aspect needs a corresponding detailed model of the information to be negotiated and managed for interworking purposes. The state of the art in the different aspects is uneven, with the maturity and coverage of the conceptual models varying considerably. There is also considerable divergence in the style and use of modelling notation found in the different aspects. With the increasing potential for using model driven development to generate the infrastructure support for negotiation about the specific aspects, there is great benefit if the style of modelling adopted in each aspect is similar, so that elements of a library of aspect-specific metamodels can be combined.

Considering the specific aspects:
- quality of service has the richest set of existing models, particularly in the quality areas of major concern to the telecommunications industry; however, an integration effort is still needed to present them in a uniform way.
- security is more challenging, because the target properties are more difficult to quantify. The most common approach at present is to establish a level of compatibility by negotiating the use of infrastructure components or specific algorithms by name. However, this still leaves the possibility of mismatches of objective or scope, and providing a uniform model of an enterprise-oriented view of security is very challenging.
- trust is, from the modeller's perspective, a simpler and more constrained problem than other aspects of security, but it is a less mature field. A core might be constructed from the quantification and compositionality described in [372], but further work would be needed to establish the necessary scope.
- enterprise digital rights management poses a challenge because most of the work to date is focused on specific mechanisms, rather than negotiable properties, and a change of focus is needed to support automation of interoperability.
- performance is again an area where there has been considerable work on metrics and on simulation techniques. The main problem from the point of view of interoperability is that the techniques used on individual components are generally not simply composable, and the resultant negotiation space is therefore likely to be very richly structured.
- reliability and availability: as with quality of service above, there is a range of existing metrics, but work is needed to put them into a reusable form. Derivation of end-to-end properties again has the problems noted under performance above.
- business value is normally described in terms of more abstract models, and the main challenge is in integration with the other, more technically oriented aspects.
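The non-composability noted for performance can be made concrete with a small, self-contained simulation. The two-stage pipeline and all latency figures below are invented for illustration: even in this simplest case, per-component percentile metrics, negotiated individually, do not add up to the end-to-end percentile.

```python
import random

random.seed(42)  # deterministic run over purely synthetic latencies

def p95(samples):
    """95th-percentile value of a sample set."""
    return sorted(samples)[int(0.95 * len(samples)) - 1]

# Two independent pipeline stages with exponentially distributed
# latencies (mean 10 ms each) -- hypothetical components.
stage_a = [random.expovariate(1 / 10) for _ in range(10_000)]
stage_b = [random.expovariate(1 / 10) for _ in range(10_000)]
end_to_end = [a + b for a, b in zip(stage_a, stage_b)]

# Naively adding the per-stage p95 figures overstates the real
# end-to-end p95: percentiles are not additive.
print(f"sum of stage p95s: {p95(stage_a) + p95(stage_b):.1f} ms")
print(f"actual end-to-end: {p95(end_to_end):.1f} ms")
```

Even this trivial series composition shows why a negotiation space built from per-component metrics quickly becomes richly structured; real topologies with fan-out, retries and queueing are far less tractable.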


Once a suitable family of models and mechanisms exists, there will still remain all the issues relating to making interoperability effective in real organizations. Work is needed on the deployment, management, conformance testing and type-approval of all the mechanisms needed to support non-functional aspects.

VII.4.2 Priorities and Conclusions

In summary, then, the highest priority for work in NFA is the creation of a common framework defining the way interoperability depends on the integration and resolution of models and mechanisms supporting a wide range of aspects. Supporting this, we need a comprehensive library of aspect metamodels and a rich set of negotiation and integration mechanisms that will allow a commonly applicable interoperability model of the aspects to be constructed dynamically as needed. These are strategic goals that are likely to guide research for a considerable period of time.
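As a purely illustrative sketch of that strategic goal, the fragment below assembles an interoperability profile dynamically by running a per-aspect, name-based negotiation over a registry of aspects. All aspect labels and mechanism names are invented, and no existing metamodel library is assumed.

```python
# Hypothetical sketch: negotiate each registered non-functional aspect
# by name-based matching (first offer the peer also accepts) and collect
# the results into one interoperability profile.

def first_common(offered, accepted):
    """First mechanism in the offerer's preference order that the peer accepts."""
    acceptable = set(accepted)
    return next((m for m in offered if m in acceptable), None)

def build_profile(aspects, party_a, party_b):
    """Return (agreed profile, aspects left unresolved)."""
    profile, unresolved = {}, []
    for aspect in aspects:
        agreed = first_common(party_a.get(aspect, []), party_b.get(aspect, []))
        if agreed is not None:
            profile[aspect] = agreed
        else:
            unresolved.append(aspect)
    return profile, unresolved

# Invented capability advertisements for two parties.
aspects = ["security", "reliability", "performance"]
party_a = {"security": ["TLS", "Kerberos"],
           "reliability": ["at-least-once", "best-effort"],
           "performance": ["best-effort"]}
party_b = {"security": ["Kerberos"],
           "reliability": ["at-least-once"],
           "performance": ["guaranteed-rate"]}

profile, unresolved = build_profile(aspects, party_a, party_b)
print(profile)     # {'security': 'Kerberos', 'reliability': 'at-least-once'}
print(unresolved)  # ['performance']
```

Name-based matching of this kind establishes only syntactic compatibility; the semantic alignment of what each name commits to is exactly the open problem the gap analysis identifies.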



VIII Resources

VIII.1 Conferences, workshops and events

Conferences/Workshops - MDD and CBSE

CONFERENCE NAME | DATES | Web Page
<<UML>> (name changed to MoDELS as of 2005) | from 1998 | www.umlconference.org
EDOC, the Enterprise Computing Conference | from 1995 | www.edocconference.org
ECOOP, European Conference on OO Programming | from 1987 | http://www.ecoop.org/
OOPSLA | from 1986 | http://www.oopsla.org/

VIII.1.1 Conferences - SOA

CONFERENCE NAME | PLACE AND DATE | Web Page
2005 IPDPS Conference: 19th IEEE International Parallel & Distributed Processing Symposium | Denver, Colorado, April 4-8, 2005 | http://www.ipdps.org/ipdps2005/index.html
2nd Int Workshop on Databases, Information Systems and Peer-to-Peer Computing | Toronto, Canada, Aug. 29-30, 2004 | http://horizonless.ddns.comp.nus.edu.sg/dbisp2p04/
4th Int Scientific Workshop on Global and Peer-to-Peer Computing: From Theory to Practice | Chicago, USA, Apr. 20-21, 2004 | http://www.lri.fr/~fci/GP2PC-04.htm
AAC-GEVO2004: Int Workshop on Agents and Autonomic Computing and Grid Enabled Virtual Organizations | Wuhan, China, Oct. 21-24, 2004 | http://grid.hust.edu.cn/gcc2004/download/aacgevo04-cfp.pdf


AAMAS04: 3rd Int Conf on Autonomous Agents and Multiagent Systems | New York, Jul. 19-23, 2004 | http://www.aamas-conference.org/
AP2PC 2004: 3rd Int Workshop on Agents and Peer-to-Peer Computing | New York, Jul. 19, 2004 | http://p2p.ingce.unibo.it/
CAiSE05: 17th Conf on Advanced Information Systems Engineering | Porto, Portugal, Jun. 13-17, 2005 | http://www.fe.up.pt/caise2005/
CCGRID05: 5th IEEE/ACM International Symposium on Cluster Computing and the Grid | Chicago, USA, Apr. 19-22, 2004 | http://www.cs.cf.ac.uk/ccgrid2005/ and http://www.ccgrid.org/
Cluster 2004 | San Diego, California, Sept. 20-23, 2004 | http://grail.sdsc.edu/cluster2004/
DEXA 2004: 15th International Conference on Database and Expert Systems Applications | Zaragoza, Spain, Aug. 30 - Sep. 3, 2004 | http://www.dexa.org/dexa2004/
DISC 2004: 18th Annual Conf on Distributed Computing | Amsterdam, the Netherlands, Oct. 4-7, 2004 | http://homepages.cwi.nl/~paulv/disc04/
DPSN 2004: 1st Int Workshop on Data Processing and Storage Networking: Towards Grid Computing | Athens, Greece, May 14, 2004 | http://www.ece.ntua.gr/networking2004/dpsn04.html
EC-Web 2004: 5th Int Conf on Electronic Commerce and Web Technologies | Zaragoza, Spain, Aug. 30 - Sep. 3, 2004 | www.dexa.org/dexa2004/index.php?include=cfp/ec-web.html
EDOC'2004: 8th IEEE International Enterprise Distributed Object Computing Conference | Monterey, California, USA, Sept. 20-24, 2004 | http://www.edocconference.org/
EEE'05: The 2005 IEEE International Conference on e-Technology, e-Commerce and e-Service | Hong Kong, Mar. 29 - Apr. 1, 2005 | http://www.comp.hkbu.edu.hk/~eee05/home/
ETECH04: O'Reilly Emerging Technology Conference | San Diego, CA, Feb. 9-12, 2004 | http://conferences.oreillynet.com/etech/
ETNGRID-2004: Workshop on Emerging Technologies for Next generation GRID | Modena, Italy, Jun. 14-16, 2004 | http://www.diit.unict.it/users/csanto/etngrid04/

EWMDA-2: 2nd European Workshop on Model Driven Architecture (MDA) with an emphasis on Methodologies and Transformations | Canterbury, England, Sep. 7-8, 2004 | http://www.cs.kent.ac.uk/projects/kmf/mdaworkshop/
GCC 2004: The Third International Conference on Grid and Cooperative Computing | Wuhan, China, Oct. 21-24, 2004 | http://grid.hust.edu.cn/gcc2004/gcc2004.htm
GLOBE04: 1st Int Workshop: Grid and Peer-to-Peer Computing Impacts on Large Scale Heterogenous Distributed Database Systems | Zaragoza, Spain, Aug. 30 - Sep. 3, 2004 | http://www.irit.fr/globe2004
GRID 2004: 5th IEEE/ACM Int Workshop on Grid Computing | Pittsburgh, USA, Nov. 8, 2004 | http://www.gridbus.org/grid2004/
Gt04: Grid Today Conference: The First Major Conference and Exhibition to Focus on Business Applications of Grid Computing | Philadelphia, Pennsylvania, USA, May 24-26, 2004 | http://www.gridtoday.com/04/conference/
HICSS-38: 38th Hawaii Int Conf on System Sciences | Hilton Waikoloa Village, Big Island, Hawaii, Jan. 3-6, 2005 | http://www.hicss.hawaii.edu/ and http://cui.unige.ch/OSG/hicss38/
HPDC-13: 13th Int Symposium on High Performance Distributed Computing | Honolulu, Hawaii, USA, Jun. 4-6, 2004 | http://www.hpdc.org/ and http://hpdc13.cs.ucsb.edu/
ICSOC 04: 2nd Int Conf on Service Oriented Computing | New York, Nov. 15-19, 2004 | http://www.icsoc.org/
ICWS 2004: International Conference on Web Services | San Diego, California, USA, July 6-9, 2004 | http://conferences.computer.org/icws/2004/
IPTPS05: The 4th Annual International Workshop on Peer-To-Peer Systems | New York, USA, Feb. 24-25 | http://iptps05.cs.cornell.edu/


MASS 2004: 1st IEEE Int Conf on Mobile Ad-hoc and Sensor Systems | Florida, USA, Oct. 24-27, 2004 | http://www.ececs.uc.edu/~cdmc/mass/
MP2P'05: 2nd Int Workshop on Mobile Peer-to-Peer Computing | Hawaii, Mar. 12, 2005 | http://www.cs.unc.edu/mp2p/
Networking 2004: The 3rd IFIP-TC6 Networking Conference | Athens, Greece, May 9-14, 2004 | http://www.ece.ntua.gr/networking2004/
OTM 2004: On The Move Federated Conferences 2004 | Agia Napa, Cyprus, Oct. 25-29, 2004 | http://www.cs.rmit.edu.au/fedconf/
P2P&DB 2004: International Workshop on Peer-to-Peer Computing & DataBases | Heraklion, Greece, March 14, 2004 | http://pc-erato2.iei.pi.cnr.it/meghini/p2p/
P2P2004: 4th IEEE Int Conf on Peer-to-Peer Computing | Zurich, Aug. 25-27, 2004 | http://femto.org/p2p2004/
PerCom 2005: 3rd Annual IEEE Int Conf on Pervasive Computing and Communications | Hawaii, March 8-12, 2005 | http://www.percom.org/
PPGaMS'04: 1st Int Workshop on Programming Paradigms for Grids and Metacomputing Systems | Kraków, Poland, Jun. 8, 2004 | http://www.mathcs.emory.edu/dcl/meetings/ppgams2004/
SCC 2004: 2004 IEEE International Conference on Services Computing | Shanghai, China, Sep. 15-18, 2004 | http://conferences.computer.org/scc/2004/
VLDB 2004: 30th Int Conf on Very Large Data Bases | Toronto, Canada, Aug. 29-30, 2004 | http://www.vldb04.org/
WI'04: The 2004 IEEE/WIC/ACM International Conference on Web Intelligence | Beijing, China, Sep. 20-24, 2004 | http://www.comp.hkbu.edu.hk/WI04/
WISE 2004: 5th Int Conf on Web Inf Systems Engineering | Brisbane, Australia, Nov. 22-24, 2004 | http://www.itee.uq.edu.au/~wise04/
WWW13: 13th World Wide Web Conference | New York City, May 17-22, 2004 | http://www.www2004.org/

VIII.1.2 Conferences, Workshops and Journals - Agents

Some conferences and workshops that are dedicated to the field of agents are:
- AAMAS - International Joint Conference on Autonomous Agents and Multi-agent Systems
- EUMAS - European Workshop on Multi-agent Systems
- CEEMAS - Central and Eastern European Conference on Multi-agent Systems
- PRIMA - Pacific Rim International Workshop on Multi-agent Systems
- IAT - IEEE/WIC/ACM International Conference on Intelligent Agents Technology

A journal which is dedicated to the area of multi-agent systems is:
- Autonomous Agents and Multi-agent Systems, Eds: Katya Sycara and Michael Wooldridge (http://www.kluweronline.com/issn/1387-2532)

VIII.1.3 Courses - SOA

COURSE NAME | ORGANISATION | Web Page
Seminar "Peer-to-peer Information Systems" | Max-Planck Institute for Informatics (MPI), University of Saarland, Germany | http://www.mpi-sb.mpg.de/units/ag5/teaching/ws04_05/p2p-seminar.htm
Managing Web Services Using Apache Web Server Bundle (WIB300) | Sun Microsystems | http://suned.sun.com/WLC/registration/wlcwebtech_live.html
Managing Your Web Services Deployment | Sun Microsystems | http://suned.sun.com/HQ/courses/WJT-3502-90.html
Sun Grid Engine Technology Overview | Sun Microsystems | http://suned.sun.com/US/catalog/courses/WE-1650-90.html


Administering and Supporting N1 Grid Engine 6 | Sun Microsystems | http://suned.sun.com/US/catalog/courses/WE-1600-90.html
Creating Grid Services using Java and the Globus Toolkit 3 | Stilo Content Engineering | http://www.stilo.com/services/creatinggrids.html
IBM Grid Computing | IBM | http://www-1.ibm.com/grid/grid_education.shtml
Developing Web Services with Java and IBM Web Services Toolkit | Kantega Academy | http://www.kantega.no/kurs/kursliste/kursinfo.asp?thisId=1062666052
Developing Web Services with Java and IBM Web Services Toolkit | InferData | http://www.inferdata.com/training/webservices/webservicesibm.html
Developing XML Web Services Using Microsoft ASP.NET | InferData | http://www.inferdata.com/training/dotnet/2524.html
Developing XML Web Services Using Microsoft ASP.NET Training Course | Inventio Consulting | http://www.inventioconsulting.co.uk/developing_xml_web_services_usin.htm
Developing XML Web Services Using Microsoft ASP.NET | Dow Jones Training Services | http://www.dowjones.com/training/outlines/2524.htm
Web Services Using C# and ASP.NET | Object Innovations | http://www.objectinnovations.com/CourseOutlines/418.html

VIII.1.4 Related Events - SOA

EVENT NAME | PLACE, DATE | Web Page
GGF12: 12th Global Grid Forum | Brussels, Sep. 20-23, 2004 | http://www.gridforum.org/Meetings/GGF11/GGF12.htm
Fall 2004 Internet2 Member Meeting | Austin, Sep. 27-30, 2004 | http://events.internet2.edu/events-future.cfm?type=1
Grid Computing For Your Enterprise | Baltimore, USA, Mar. 6-7, 2004 | http://www.marcusevans.com/events/CFEventinfo.asp?EventID=8328
Grid Computing - Fact or Fantasy? | MIT/Stanford Venture Lab (VLAB), Stanford, USA, May 18, 2004 | http://www.vlab.org/204.cfm?eventID=44

VIII.2 Journals, Books, Reports, Links

Journals - MDD and Components

JOURNAL NAME | PUBLISHER | Web Page
Software and Systems Modelling (SoSyM) | IEEE | www.ieee.org
MDA Journal | BP Trends | http://www.bptrends.com/

VIII.2.1 Journals - SOA

JOURNAL NAME | PUBLISHER | Web Page
e-Service Journal | Indiana University Press | http://www.e-sj.org/
International Journal of Business Process Integration and Management (IJBPIM) | Inderscience Publishers | https://www.inderscience.com/browse/index.php?journalID=115
Journal of Intelligent Information Systems | Kluwer Academic Publishers | http://www.isse.gmu.edu/JIIS/
Journal of the ACM | ACM | http://www.acm.org/jacm/
Online P2PJournal | - | http://www.p2pjournal.com
The International Journal of Web Services Research (JWSR) | Idea Group Publishing/Information Science Publishing | http://www.ideagroup.com/JOURNALS/details.asp?id=4138
Web Services Journal | Sys Con Media | http://www.sys-con.com/webservices/

VIII.2.2 Books - SOA

BOOK TITLE | AUTHORS | PUBLISHER, YEAR
Discovering P2P | Michael, M. | Sybex Inc, 2001
Grid Computing: A Practical Guide | Abbas, A. | Charles River Media, 2003
In Search of Clusters (2nd Edition) | Pfister, G. | Prentice Hall, 1998
JXTA: Java P2P Programming | Daniel Brookshier, Darren Govoni, Navaneeth Krishnan, Juan Carlos Soto | SAMS, 2002
Mastering JXTA: Building Java Peer-to-Peer Applications | Gradecki, J. | Wiley, 2002
P2P: How Peer-to-Peer Technology Is Revolutionizing the Way We Do Business | Fattah, H. | Dearborn Trade, 2002
Peer to Peer: Collaboration and Sharing over the Internet | Leuf, B. | Addison-Wesley Pub Co, 2002
Peer-to-Peer: Harnessing the Power of Disruptive Technologies | Oram, A. | O'Reilly & Associates, 2001
Peer-To-Peer Application Development: Cracking the Code | Dreamtech Software Team | IDG/Hungry Minds, 2001
Peer-to-Peer Computing: Technologies for Sharing and Collaborating on the Net | Barkai, D. | Intel Press, 2002
Peer-to-Peer: Building Secure, Scalable, and Manageable Networks | Moore, D., Hebeler, J. | McGraw-Hill Osborne Media, 2001
The Grid: Blueprint for a New Computing Infrastructure | Foster, I., Kesselman, C. | Morgan Kaufmann, 1998
Web Services: Concepts, Architectures and Applications | Alonso, G., Casati, F., Kuno, H., Machiraju, V. | Springer Verlag, 2004

VIII.2.3 State-of-the-art reports - Agents

Huhns and Singh give an overview of the entire agent field [27]. Michael Wooldridge, in collaboration with other authors (Jennings, Ciancarini, etc.), has written many articles and books in the agent field [72,70]. Ferber wrote an undergraduate book that gives a good introduction to multi-agent systems [18]. Applications of agent systems can be found in [32]. The following articles and books provide a good overview of agents and multi-agent systems:
- H.S. Nwana, Software Agents: an Overview, Knowledge Engineering Review, Vol. 11, No. 3, 1996. This article provides an overview of agents and a taxonomy of agents.
- Bradshaw, J., Software Agents, MIT Press, Cambridge MA, 1997. This book contains a good collection of articles on agents.
- [81]: This article provides a roadmap.
- [62]: A good overview book.

VIII.2.4 Useful links - Agents

http://agents.umbc.edu/ - Information, resources, newsletters and mailing lists relating to intelligent information agents, intentional agents, software agents, softbots, knowbots, infobots, etc. Published and maintained by Tom Finin, UMBC.



http://www.csc.liv.ac.uk/~mjw/links/ - Maintained by Prof. Wooldridge, University of Liverpool, UK. Contains links to a number of agent-related information sources, subject areas, other people's webpages, conferences, etc.

http://www.fipa.org/ - FIPA is a non-profit organization aimed at producing standards for the interoperation of heterogeneous software agents.

http://www.daml.org/ - The DARPA Agent Markup Language (DAML) Program officially began in August 2000. The goal of the DAML effort is to develop a language and tools to facilitate the concept of the Semantic Web. Mark Greaves is the DARPA (http://www.darpa.mil) Program Manager for DAML.

http://auml.org/ - The Agent UML website. Multi-agent systems (MAS) are often characterized as extensions of object-oriented systems. This overly simplified view has often troubled system designers as they try to capture the unique features of MAS using OO tools. In response, an agent-based unified modeling language (AUML) is being developed.

http://fipa.org/ - The FIPA Modeling TC (http://www.fipa.org/activities/modeling.html) aims to be domain independent. Currently, it will examine those areas where it has expertise: service-oriented architecture (SOA), business process management (BPM), simulation, real-time, AOSE, robotics, information systems. Other areas will be examined over time as further expertise becomes available.

http://www.agentlink.org/ - AgentLink III is the new European Co-ordination Action for Agent Based Computing, a network of researchers and developers with a common interest in agent technology. Launched on 1st January 2004, it follows on from AgentLink II, and will continue to provide resources and information on agent-based research across Europe.

http://www.agentcities.org/ - The first 14 nodes in the Agentcities network were launched on October 30th, 2001 by the 5th Framework IST-funded project Agentcities. The network consists of a set of software systems (platforms) connected to the public Internet. Each of these platforms hosts agent systems capable of communicating with the outside world using standard communication mechanisms (interaction protocols, agent languages, standard content expressions, domain ontologies and standard message transport mechanisms such as HTTP).

VIII.3 Interoperability Research Challenges

This discussion describes some challenges related to networked enterprises and software, and continues with related challenges for standardisation.

The Explosive Growth

At a time when the complexity of creating e-Business solutions has never been greater, e-Business has evolved through two distinct phases: "brochure ware" on Web sites, and the development of interactive e-Commerce. Now, global e-Business is evolving again as companies form business-to-business links with their suppliers and customers, calling for new solutions while e-Business traffic becomes heavier, more critical and more complex. Successfully managing the implementation of e-Business



practices and information management techniques in this environment requires an intensive cooperative effort between the private sector and the client base.

Existing software interoperability shows a potential for affecting business performance at the level of individual enterprises as well as networked enterprises. There is still a large gap, though, in collaboration between the individual-group level and the enterprise and inter-organizational network level. Bottlenecks seem to appear in enterprise systems (ERP, CRM, etc.) because the existing software coordinates tasks but does not support the individual or group working with them. Furthermore, most existing software is designed for hierarchical organisations, which creates conflicts in networked organisations and constrains the effectiveness of the desired interoperability. On the other hand, existing collaborative software lacks integration, flexibility and tailorability, and does not yet sufficiently support the management of business processes and transaction environments at the level of the networked enterprise.

The professional challenge

The number one professional challenge now facing e-business professionals is staying current in e-business and Web technologies, according to a recent survey taken by the e-Business Communication Association (eBCA), a professional membership organization for e-business practitioners. Of the professionals surveyed, 66% claimed that staying up to date on technology was the most difficult professional challenge they currently face. Not only were survey respondents concerned about staying current with technology, but they are also struggling with how to implement technology in the most effective way in their organizations: 62% of respondents cited access to best-practice information as the second most significant professional challenge they face.

Rounding out the list of the top five e-business challenges are:
- Acquiring training/skills: 47% of respondents said that obtaining adequate training was a major professional challenge.
- Finding information: 33% of respondents claimed that the difficulty of finding information is one of their biggest e-business challenges.
- Lack of internal support: 30% of respondents said that they don't get enough support from within their e-businesses.

The IT Industry approach

The IT industry worldwide is working on technologies and practices designed to address growing e-Business implementation challenges, aiming to bring customer and vendor e-Business technology priorities closer together. There is a need for the IT industry to advocate the e-Business architectural directions, interoperable building blocks and common procedures that will be the basis for future e-Business Internet development. Key e-Business implementation issues facing today's business decision-makers, IT managers and customers include:

- Using XML and other technologies to transform traditional business practices into e-Business practices.
- Integrating existing business systems with new e-Business solutions and architectures.
- Improving the exchange of e-Business information over multiple devices (mobile phones to PCs, etc.).


- Addressing latency/concurrency on Intranets.
- Establishing secure systems that don't require users to continually log on.

The SME and Very Small Enterprises (VSE) Market

SMEs generate a significant amount of employment and turnover in Europe. Statistics show that although there are more than 17 million SMEs in Europe, 93% of them have fewer than 10 employees. In the context of Enterprise Interoperability (EI), a special effort is to be put into setting up the framework and guidelines for the production of the next generation of enterprise software for SMEs in Europe and the incoming EU member states, where SMEs and VSEs prevail, by addressing major issues such as:

- the information gap (Digital Divide);
- the guidance of VSEs in understanding the legal and regulatory framework for interoperability between enterprises;
- the user requirements for the exploitation of low-threshold technologies (web services, ontologies, intelligent agents) critical in enabling interoperability between enterprises;
- the user requirements for producing light, comprehensive, interoperable and low-cost enterprise software.

Interoperability issues for SME & VSE business software development

There is a persistent need to provide SMEs and VSEs with the benefits of enterprise integration and systems interoperability, by taking into account the specific needs, abilities and potential of smaller enterprises. While defining the basic concepts, processes and final objectives for Enterprise Interoperability (EI) in general, one should foresee how to scale down the effort to address smaller business entities, modifying the overall EI approach to suit them. Standards and means for the in-depth application of integration and interoperability among different software systems and various small-enterprise communities and chains, within the context and abilities of SMEs and VSEs, should also be provided. Furthermore, the SME/VSE-oriented approach to enterprise integration should be applied in a way that enables technology and culture transfer to enterprises in the new EU member states.

Interoperability challenges for Enterprise Architecture (EA)

At a time when current enterprise architectures serve documentary purposes, supporting disjoint IT development, delivery, integration, management, investment and business alignment, developing on-demand, just-in-time solutions seems to be the most common current practice. Architecture models are descriptive and static, and are maintained by experts in each domain concerned. There is a great need for autonomous architectures to support on-demand business, real-time enterprises, and evolutionary innovative projects, and ultimately a highly flexible intelligent infrastructure for enterprise interoperability. EA should support on-demand evolutionary


computing environments with self-adjusting and self-organizing architectural models and platforms at the Business, Knowledge and ICT layers. Enterprise Architecture receives worldwide attention, and new model-driven approaches to EA and model-generating solutions for execution and performance management are being prototyped with the help of agents, Web services, standards and AKM technology, which should be brought together in concert. Standards used or experimented with at the time of writing include CIMOSA, GERAM, PERA and CEN 12204, among others.

At the same level of priority, the issue of Continuous Business Solutions Management should be dealt with in a manner in which processes are available as services and work-process tasks. New services simplifying change, configuration and history management must be developed. The challenge here seems to lie in the re-composition and re-activation of past solutions, depending always on the context of each case. Research effort should be put into new model-generated solutions management and repository services for effective re-composition and re-activation, and into establishing standards for repository services for solutions management.

At a second priority level, issues such as Customer Solutions Delivery and Support; Transition between Inter-Organisational and Intra-Organisational Tasks; Organization-Supporting Network Infrastructures; Simulation, Verification, Performance Monitoring and Forecasting of Networked Organisations and workflow verification; and Building and Operating Registries should be resolved.

The desired situation

Methodology and technology are needed to implement delivery, deployment and management processes as tasks described and managed as part of the Intelligent Infrastructure, together with enterprise systems that allow varying performers of enterprise tasks and processes to be modelled and executed. The extended enterprise needs to be simulated, verified, monitored and forecasted in order to perform. Such operations would identify delays, bottlenecks, etc., and would consequently enable appropriate measures to be taken upfront and during operations to address possible issues. Service Provisioning Networked Organisations (SPNO) should be developed, defining roles with Competence and Skills (C&S) profiles and mutual service provision as the basis for networked organizations supported by Intelligent Infrastructures. New enterprise systems should allow varying performers to model and execute enterprise tasks and processes for the transition between inter-organisational and intra-organisational tasks. For all of this, a new scalable, reliable and unified system of registries should provide access to identification and description schemas and build a basis for ubiquitous eBusiness.

Research & Technology Challenges - Standardization

For the Customer Solutions Delivery and Support issue, research should address some twelve core work processes implemented as tasks modelled by actions and rules. Work management, execution and monitoring methods must be developed as an integral part of the Intelligent


Infrastructure. Most of these work processes and their implementing tasks can be standardized.

For the Transition between Inter-Organisational and Intra-Organisational Tasks issue, the research gap appears to lie in models describing a task as externally accessible; workflow management system extensions should be developed in order to support this model, and a standard way to describe tasks should be established.

For the Simulation, Verification, Performance Monitoring and Forecasting of Networked Organisations and workflow verification issue, the research gap seems to be in the area of distributed enterprise service models and architectures, where standards on enterprise modelling and interfaces are also lacking.

For Service Provisioning Networked Organisations (SPNO), the desired situation is defining roles with Competence and Skills (C&S) profiles and mutual service provision as the basis for networked organizations supported by Intelligent Infrastructures. Research effort is to be put into modelling the SNO as part of the EKA and then adding services to customise, adapt, extend and manage it. New EKA meta-models and a Visual Language are needed, while teams, services and team-roles can be standardised.

For Building and Operating Registries, there is a need for a scalable, reliable and unified system of registries that provides access to identification and description schemas and builds a basis for ubiquitous eBusiness. Linking and unifying existing registries is to be researched, and a scalable and distributed registry system is to be developed. Standards for registry interfaces and for data structures for objects and services are lacking.

VIII.3.1 Challenges for standardization

A B2B standard is defined as guidelines for how communication and information sent between organizations should be structured and managed. A study presented in [SdPe 2003] has revealed four challenges regarding B2B standards that need to be dealt with in the future.

Challenge 1: Facilitate standards usage and implementation. Several factors contribute to making standards usage complex: ambiguity in standards specifications; too much flexibility in the standards, leading to a risk of failing to achieve interoperability; and a lack of knowledge in organizations concerning what standards are, how they work, and how to use them in an optimal way.

Challenge 2: Develop small application packages to support standards usage. Software developers must join forces and focus on developing small application packages to support standards usage in organizations. Existing support is not good enough and needs improvement, particularly for SMEs. If using the application packages is simple, the chance of increasing the organizational in-house knowledge about standards becomes higher.

Challenge 3: Increase the level of knowledge about standards in businesses. Traditionally, organisations and in particular SMEs have been weak with respect to training and education. This weakness covers everything from identifying educational needs to implementing the education and evaluating its outcome. There is thus a clear need to educate organisational staff in how standards work and in how to use them.

Challenge 4: Balancing between the need to modify and the need to standardize. This challenge concerns how, if possible, to make standard specifications less flexible. Today standards are almost always modified, due to the differences in requirements and needs of different organizations. It is therefore a challenge to make specifications that ensure interoperability while still enabling modifications to be made. The balance between the need to modify and the need to standardize clearly requires more research.

VIII.4 Standardization organisations and activities

Standards Development Organizations (SDOs) may exist in several forms. One common definition of a standards body is: an organization "recognised at national, regional or international level, that has as a principal function, by virtue of its statutes, the preparation, approval or adoption of standards that are made available to the public" [ISO/IEC 1996, p.18]. This refers to formal SDOs, such as ISO, CEN/CENELEC, DIN and ANSI. Not all of these organizations develop the standards themselves; rather, they develop procedures for how to create a standard and approve standards submitted to them by other SDOs. In contrast to the formal organisations come industry initiatives, developed by various consortia. A consortium consists of a group of industrial companies. They have no formal standards-setting accreditation, but rather work to achieve de facto standards. Regardless of origin or formal status, what they have in common is the striving for consensus and the recognition of the need for standards in interoperability. This chapter will present SDOs identified as particularly important to SOA, namely the Global Grid Forum (GGF), the World Wide Web Consortium (W3C) and the Organization for the Advancement of Structured Information Standards (OASIS). RosettaNet was added as an example of standardisation efforts on a more semantic level, i.e. where particular types of business interactions are standardised. The following is a list of interoperability-related standards which have some relevance for different parts of this document. In addition we will describe some of the most relevant standardization organizations.

- ISO 16100 Manufacturing software capability profiling for interoperability (ISO TC184/SC5/WG4)
- ISO NWIP Manufacturing Process Interoperability (ISO TC184/SC5/WG1)
- OAGIS - Open Applications Group Integration Specifications (OAG, Open Applications Group)
- ISO 20242 Application service interface (WD:2002) (ISO TC184/SC5/WG6)
- ISO 15745 Open System application integration framework (FDIS:2002) (ISO TC184/SC5/WG5)
- ISO 16668 Basic Semantic Register - Rules, Guidelines and Methodology (ISO TC154/WG1)

Standards concerning application integration for IT-based services are mainly built on the ISO seven-layer model. The main approaches to improve software interoperability are [Chen and Vernadat, 2002]:

- Enterprise Model Execution and Integration Services (EMEIS) [EMEIS, 1995];
- Open Distributed Processing Reference Model [ISO/IEC, 1996];
- Manufacturing Automation Programming Environment (MAPLE) [MAPLE, 1996];
- Open Group Technical Reference Model (TOGAF);
- OMG OMA; and
- OAGIS Open Applications Group Integration Specification [IS, 2001].

Apart from the market of Workflow Management Systems and ongoing projects, different efforts are under way to form bodies and non-profit organisations representing the interests of vendors or customers. Some are initiated by industry (e.g. the Workflow Management Coalition), some are the efforts of non-profit organisations such as CEFACT (ebXML). Among these are:

- WfMC (The Workflow Management Coalition, http://www.wfmc.org/), which aims for interoperability of workflow technologies by developing and publishing relevant standards and maintaining a reference model on workflow management, modeling and enactment systems;
- WARIA (http://www.waria.com/), the Workflow And Reengineering International Association, which aims to identify and clarify issues that are common to users of workflow and electronic commerce and those who are in the process of reengineering their organizations. WARIA works closely with WfMC and BPMI. The e-workflow portal (http://www.e-workflow.org/) is a joint service of WARIA and WfMC;
- BPMI (Business Process Management Initiative);
- AIIM (www.aiim.org);
- ABPMP.org;
- BPTrends;
- eBiz;
- OASIS;
- ebXML (Electronic Business XML) (http://www.ebxml.org/);
- the Business Transaction Protocol BTP [53]; and
- RosettaNet (http://www.rosettanet.org/).

Currently, there are several new developments in the area of process modeling, partly in the context of Web services. The Business Process Management Initiative (www.bpmi.org) proposes standards such as the XML-based Business Process Modelling Language (BPML) and the standardized graphical notation BPMN [BPMN, 2003]. These languages do not yet have a formal metamodel, and their concepts and relations have not been defined in a strict way. The Object Management Group (OMG) is currently collecting proposals for a standardized Business Process Definition Metamodel (BPDM); the proposal by Frank et al. [Frank et al., 2004] includes concepts for collaboration and joint activities. In addition to these languages, which are aimed at "design-time" modeling of business processes, there are business process execution languages such as BPEL4WS [Thatte, 2003] - but also the older workflow languages - aimed at "run-time" modeling. UDDI [UDDI, 2003] and WSDL [WSDL, 2002] can be used for service discovery and assembly. Richer formalisms for self-description of services and behaviour include DAML-S [DAML-S, 2001][Antolenkar et al., 2001].
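To illustrate how a WSDL description exposes the information used for service discovery and assembly, the following Python sketch parses a minimal, hypothetical WSDL 1.1 fragment and lists the operations each portType offers. The service and operation names are invented for the example; a real WSDL also carries message, binding and service sections.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical WSDL 1.1 fragment. Real descriptions also
# carry <message>, <binding> and <service> sections.
WSDL = """\
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="OrderService">
  <portType name="OrderPortType">
    <operation name="submitOrder"/>
    <operation name="queryOrderStatus"/>
  </portType>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def list_operations(wsdl_text):
    """Return {portType name: [operation names]} from a WSDL document."""
    root = ET.fromstring(wsdl_text)
    return {
        pt.get("name"): [op.get("name") for op in pt.findall("wsdl:operation", NS)]
        for pt in root.findall("wsdl:portType", NS)
    }

print(list_operations(WSDL))
# {'OrderPortType': ['submitOrder', 'queryOrderStatus']}
```

A registry such as UDDI would point a client at a description like this; the client then assembles calls against the operations the portType declares.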

Common and published business process and information exchange models are provided by the following:

- The eCO framework [eCO, 2004] provides facilities for businesses to discover and access services regardless of the e-commerce standards and protocols each potential partner adopts. The eCO framework introduces xCBL (XML Common Business Library) to define business documents. Some core documents have been defined, and business partners can use and extend these. In addition, the eCO framework describes Business Interface Definitions (BIDs) as sets of documents accepted and produced. A BID does not prescribe a global business process.
- cXML (Commerce XML) [cXML, 2004] addresses the content and process levels. At the content level, a set of XML DTDs describe procurement documents, providing common elements for product catalogs, suppliers, etc. At the business process level a generic procurement protocol is defined, including product selection, order request, and order routing.

VIII.4.1 OMG - Object Management Group

The Object Management Group (http://www.omg.org/) is a non-profit consortium created in 1989 with the purpose of promoting the theory and practice of object technology in distributed computing systems. In particular, it aims to reduce complexity, lower costs, and hasten the introduction of new software applications. Originally formed by 12 companies (IBM, BNR Europe Ltd., Expersoft Corp., ICL plc, Iona Technologies Ltd., DEC, Hewlett-Packard, HyperDesk Corp., NCR, Novell USG, Object Design Inc., and SunSoft), OMG membership had grown to over 800 members in 2004. The one notably absent vendor is Microsoft, which decided to create its own Object Request Broker system, called Distributed Component Object Model (DCOM), instead of using the distributed solutions developed by the OMG. With this decision, Microsoft made the achievement of a unified and universal standard far more difficult. At present, Microsoft is discouraging the use of DCOM and is pushing its technologies towards SOAP. The OMG goals are: a) to promote the theory and practice of Object Technology (OT) for the development of distributed computing systems; b) to promote standardised object software; c) to specify industry standards for integrating distributed heterogeneous applications and components; and d) to develop a standardized global object-oriented architecture, based on object and component technologies, for distributed application integration, guaranteeing the reusability, portability and interoperability of software components in distributed heterogeneous computing environments. According to the OMG, the advantages of basing information systems development on high-quality standards are several:

- Synergy: to synergize the work being done in creating domain applications and distributed object components into a cooperative industry effort.
- Interoperability: to make independently developed domain objects interoperable with a minimum of effort.
- Federation of systems: to allow diverse research information systems to be integrated.
- Ease of use: to make the information understandable in the terms used by researchers and administrators and easily meet the needs of research organizations.
- Open market: to foster an open market in domain-object-related components, both in pre-built software objects and in tools for using and building domain software objects.

The OMG realizes its goals by creating standards which allow interoperability and portability of distributed object-oriented applications. It does not produce software, devices or implementation guidelines, only specifications, which are put together using the ideas of OMG members who respond to Requests For Information (RFIs) and Requests For Proposals (RFPs). The strength and relevance of this approach come from the fact that most of the major software companies interested in distributed object-oriented development are among the OMG members. The OMG has also worked successfully to create a marketplace for the technologies based on its standardization efforts. The most ambitious product developed by the OMG is the Common Object Request Broker Architecture (CORBA), a good demonstration of its effectiveness in getting vendors to agree on object standards in a short period of time.


VIII.4.2 W3C - The World Wide Web Consortium

Information in this section is gathered from [W3C] unless otherwise stated. The World Wide Web Consortium was founded at the Massachusetts Institute of Technology, Laboratory for Computer Science [MIT/LCS], in collaboration with CERN and with support from DARPA and the European Commission. In 2003, ERCIM (European Research Consortium for Informatics and Mathematics) took over the role of European W3C Host from INRIA, which had hosted it since 1995. In Asia, W3C has been hosted by Keio University of Japan (Shonan Fujisawa Campus) since 1996. W3C has a number of Offices worldwide. W3C's technologies will help make the Web a robust, scalable, and adaptive infrastructure for a world of information. To understand how W3C pursues this mission, it is useful to understand the Consortium's goals and driving principles. By promoting interoperability and encouraging an open forum for discussion, W3C commits to leading the technical evolution of the Web. W3C's long-term goals for the Web are:

- Universal Access: to make the Web accessible to all by promoting technologies that take into account the vast differences in culture, languages, education, ability, material resources, access devices, and physical limitations of users on all continents;
- Semantic Web: to develop a software environment that permits each user to make the best use of the resources available on the Web;
- Web of Trust: to guide the Web's development with careful consideration for the novel legal, commercial, and social issues raised by this technology.


W3C has published more than forty Recommendations. Each Recommendation builds on the previous ones and is designed so that it may be integrated with future specifications. The current Web is an application built on top of the Internet and has inherited the fundamental design principles of interoperability, evolution and decentralization. W3C is in the process of transforming the architecture of the initial Web (essentially HTML, URIs, and HTTP) into the architecture of tomorrow's Web, built atop the solid foundation provided by XML.

Figure 81: Initial Web alongside the Web of tomorrow [http://www.w3.org/Consortium/]

W3C Activities and other work are organised into four domains:

- Architecture domain: develops the underlying technologies of the Web.
- Interaction domain: seeks to improve user interaction with the Web, and to facilitate single Web authoring to benefit users and content providers alike. It also works on formats and languages that will present information to users with accuracy, beauty, and a higher level of control.
- Technology and society domain: seeks to develop Web infrastructure to address social, legal, and public policy concerns.
- Web Accessibility Initiative (WAI): W3C's commitment to lead the Web to its full potential includes promoting a high degree of usability for people with disabilities. The WAI pursues accessibility of the Web through five primary areas of work: technology, guidelines, tools, education and outreach, and research and development.

Through investment and active participation in W3C Activities, the Members ensure the strength and direction of the Consortium. Members include vendors of technology products and services, content providers, corporate users, research laboratories, standards bodies, and governments, all of whom work to reach consensus on a direction for the Web. These organizations typically invest significant resources in the Web: in developing software products, in developing information products, or, most commonly, in its use as an enabling medium for their business or activity. There has been a strong desire that the stability of the Web should be maintained by a competent authority, and many prospective W3C Members have expressed their desire to provide funding in support of that effort. W3C is thus financed primarily by its Members and, to a lesser extent, by public funds. W3C Membership is available to all organizations.

VIII.4.3 Global Grid Forum (GGF)

The Global Grid Forum [GGF] is a community-initiated forum of thousands of individuals from industry and research leading the global standardization effort for grid computing. GGF's primary objectives are to promote and support the development, deployment, and implementation of Grid technologies and applications via the creation and documentation of "best practices" - technical specifications, user experiences, and implementation guidelines. GGF efforts are also aimed at the development of a broadly based Integrated Grid Architecture that can serve to guide the research, development, and deployment activities of the emerging Grid communities. Defining such an architecture will advance the Grid agenda through the broad deployment and adoption of fundamental basic services and by sharing code among different applications with common requirements. GGF participants come from over 400 organizations in over 50 countries, with financial and in-kind support coming from GGF Sponsor Members, including technology producers and consumers as well as academic and federal research institutions. Currently the GGF Sponsor Members list includes, among others: Argonne National Laboratory, NASA, the UK e-Science programme, IBM, Intel, Hewlett-Packard, Microsoft Research, Silicon Graphics, Sun, Cisco, Oracle and Fujitsu.
GGF steers a large number of Working Groups (WGs), active in the following general areas:

- Applications and Programming Environments
- Architecture
- Data
- Information Systems and Performance
- Peer-to-Peer: Desktop Grids
- Scheduling and Resource Management
- Security

VIII.4.4 Peer-to-Peer Working Group (P2Pwg)

The Peer-to-Peer Working Group [P2Pwg] aims to investigate and explore the many aspects of peer-to-peer computing, focusing on best practices, trends and collaboration efforts. The Working Group's mission is:

- To provide a forum for reporting on recent occurrences and future trends within the peer-to-peer and distributed computing space. The forum may occur at the regular member meetings, at Joint Techs meetings or at specific workshops designated for the topic.
- To be a clearinghouse for collaborative opportunities within the higher education community, and between that community and corporate entities, as new peer-to-peer and distributed computing applications and tools are investigated.
- To provide best practices documents for both resource management and innovative uses of peer-to-peer technologies.
- To provide a central repository for resources and documents related to all aspects of peer-to-peer computing.

VIII.4.5 OASIS

Organization for the Advancement of Structured Information Standards (OASIS) [OASIS, 2004] is a not-for-profit, global consortium that drives the development, convergence and adoption of e-business standards. The consortium produces more Web services standards than any other organization, along with standards for security, e-business, and standardization efforts in the public sector and for application-specific markets. OASIS was founded in 1993 under the name SGML Open as a consortium of vendors and users devoted to developing guidelines for interoperability among products that support the Standard Generalized Markup Language (SGML). OASIS changed its name in 1998 to reflect an expanded scope of technical work, including the Extensible Markup Language (XML) and other related standards. OASIS has more than 3,000 participants representing over 600 organizations and individual members in 100 countries. The technical work of OASIS is driven by the members, who form technical committees (TCs) based on proposals from members. The TCs set their own agendas and schedules; OASIS provides the guidance, process, and infrastructure necessary to enable the members to do the work. OASIS has adopted a Technical Committee Process to govern technical work, and provides a vendor-neutral home and an open, democratic process for this work; this gives all interested parties, regardless of their standing in a specific industry, an equal voice in the creation of technical work. OASIS members have formed TCs in a number of areas, including the following: horizontal and e-business framework, Web services, security, public sector, and vertical industry applications. OASIS encourages its TCs to consider how the work they are doing relates to work being done by other organizations, and to establish liaison relationships where practical; OASIS prefers to see interoperable specifications rather than competing specifications.

TCs related to the NoE INTEROP include, among others, Business-Centric Methodology, Business Transactions, ebXML Business Processes, Universal Business Language, and Web Services Business Process. Relevant recommendations include BPEL (Business Process Execution Language) for Web services, an XML-based language (http://www.coverpages.org/wsbpel20031204.pdf), and BCM, the OASIS methodology (http://www.businesscentricmethodology.com/navigation/worspace.html).

VIII.4.6 ebXML

ebXML (Electronic Business using eXtensible Markup Language) is an initiative of the OASIS group and the United Nations agency CEFACT. OASIS is a non-profit consortium with the goal of promoting IT efforts that support the realisation of e-businesses. ebXML is a suite of specifications with the motivation to lower the barriers for establishing e-businesses on an international level, regardless of their geographical distribution. The specifications cover business processes, the notation of data components, a registry and repositories, and furthermore the messaging and collaboration between the involved actors. See also the architectural ebXML presentation in section IV.3.2.

ebXML BPSS (Business Process Specification Schema) is part of the ebXML B2B suite of specifications, which also includes core specifications for reliable and secure messaging over SOAP, collaboration agreements and profiles, and a registry. BPSS is used for describing public processes; the orchestration of the transactions is defined using a control flow expressed as UML activity graphs. Data flow descriptions are not directly supported. There is explicit support for a number of NFA features: authentication, acknowledgments, non-repudiation and timeouts. BPSS defines a number of possible exceptions and prescribes their effect and communication style. BPSS provides no support for internal execution semantics. ebXML aims to provide an open XML-based infrastructure, enabling the global use of electronic business information in an interoperable, secure and consistent manner by all parties. As background information, UN/CEFACT and OASIS have joined forces to standardise XML business specifications. They have established ebXML to develop a technical framework that will enable the consistent use of XML for the exchange of all electronic business data. One objective for ebXML is to lower the barrier of entry to e-business, particularly with respect to small- and medium-sized enterprises (SMEs) and developing countries. ebXML is public, and claims to provide the only globally developed open XML-based standard built on a rich heritage of electronic business experience. The responsible development organisations are the United Nations body for Trade Facilitation and Electronic Business (UN/CEFACT) and the Organisation for the Advancement of Structured Information Standards (OASIS).
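The messaging layer mentioned above transports business documents inside SOAP envelopes. The following Python sketch is a simplified illustration of that idea: it wraps a hypothetical business document in a bare SOAP 1.1 envelope. It is not the actual ebXML Message Service structure, which adds its own elements to the SOAP header; the payload element names are invented for the example.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_ENV)

def build_envelope(payload):
    """Wrap an XML payload element in a bare SOAP 1.1 envelope."""
    envelope = ET.Element("{%s}Envelope" % SOAP_ENV)
    ET.SubElement(envelope, "{%s}Header" % SOAP_ENV)  # ebMS would add header blocks here
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_ENV)
    body.append(payload)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical business document used as the payload
order = ET.Element("OrderRequest")
ET.SubElement(order, "OrderId").text = "42"

print(build_envelope(order))  # prints the envelope as one line of XML
```

In a real ebXML exchange, header blocks carrying party identification, message identifiers and acknowledgment requests would be placed in the SOAP Header, which is where the reliability and non-repudiation features hook in.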
Examples of collaborators are ABB Business Systems, PeopleSoft, Swisscom Ltd, Kraft Foods Inc, and Amazon.com. The target area for ebXML is the exchange of all electronic business data, with an SME focus. ebXML was started in 1999 as an initiative of OASIS and the United Nations/ECE agency CEFACT. The original project envisioned and delivered five layers of substantive data specification, including XML standards for: business processes, core data components, collaboration protocol agreements, messaging, and registries and repositories.

VIII.4.7 UN/CEFACT

The UN/CEFACT (United Nations Centre for Trade Facilitation and Electronic Business) [Johannesson et al., 2000a] BCF (Business Collaboration Framework) captures business and administrative process knowledge to enable the development of low-cost software components for use by SMEs.


VIII.4.8 BPMI

BPMI.org (Business Process Management Initiative) is an independent organization devoted to the development of open specifications for the management of electronic business processes that span multiple applications and business partners over the Internet.

BPMN [BPMN, 2004][BPMN, 2002] is an attempt by BPMI.org [BPMI, 2004] to provide businesses with the capability of defining and understanding their internal and external complex business procedures through a Business Process Diagram that is readily understandable by all business users: the business analysts who create the initial drafts of the processes, the technical developers responsible for implementing the technology that performs those processes, and the business people who manage and monitor those processes. BPMN is also supported by an internal model that enables the generation of executable BPEL4WS from its business-level notation. Thus, BPMN creates a standardized bridge for the gap between business process design and process implementation.

BPML (Business Process Management Language) [BPML, 2002] describes comprehensive control and data flow constructs. It supports transactions with compensating activities, exception handling and timeouts. It does not address B2B requirements such as authentication or non-repudiation. BPML is a metalanguage for the modelling of business processes. It provides an abstract execution model for collaborative transactional business processes. A business process is viewed as composed of a common public interface and private implementations supporting it, provided by the process participants. The public interfaces of BPML processes can be described as ebXML business processes or RosettaNet PIPs. BPML represents business processes as the interleaving of control flow, data flow and event flow.
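The compensating-activity idea that BPML supports can be sketched independently of any BPML syntax: each completed step of a long-running transaction registers an undo action, and if a later step fails the registered actions run in reverse order. The following Python sketch is a conceptual illustration only; the step names are hypothetical and do not reflect BPML's actual XML constructs.

```python
# Conceptual sketch of compensating activities: each completed step
# registers an undo action; on failure the registered actions run in
# reverse order of completion.

def run_with_compensation(steps):
    """steps: list of (do, undo) pairs; undo may be None."""
    done = []
    try:
        for do, undo in steps:
            do()
            if undo is not None:
                done.append(undo)
    except Exception:
        for undo in reversed(done):  # compensate in reverse order
            undo()
        raise

log = []

def reserve():
    log.append("reserve")

def cancel_reserve():
    log.append("cancel-reserve")

def charge():
    raise RuntimeError("payment failed")

try:
    run_with_compensation([(reserve, cancel_reserve), (charge, None)])
except RuntimeError:
    pass

print(log)  # ['reserve', 'cancel-reserve']
```

A workflow engine executing a BPML process performs essentially this bookkeeping, with the compensating activity declared per transactional activity rather than passed in as a function.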
BPQL (Business Process Query Language) is a management interface to a business process management infrastructure that includes a process execution facility and a process deployment facility.

VIII.4.9 RosettaNet

RosettaNet [RosettaNet, 2004b][Leweis, 2000][RosettaNet, 2002][RosettaNet, 2001][Kak and Sotero, 2002] is a non-profit consortium focusing on the IT, electronic components and semiconductor manufacturing industries. The work is directed towards creating and implementing industry-wide e-business standards. RosettaNet is named after the Rosetta Stone, which, carved with the same message in three languages, led to the understanding of hieroglyphics. RosettaNet, like the Stone, is attempting to break language barriers and make history. The RosettaNet consortium works to create and implement industry-wide e-business process standards, and offers a non-proprietary, public solution. The aim of RosettaNet is to establish common standard processes for the electronic sharing of business information. This is done to provide real-time information, efficient e-business processes, dynamic trading-partner relationships and new business opportunities. RosettaNet PIPs are system-to-system XML-based dialogues that define business processes between supply chain partners. Each PIP includes a technical specification based on the RosettaNet Implementation Framework (RNIF), a Message Guideline document with a PIP-specific version of the Business Dictionary, and an XML Message Guideline document. Specifications are downloadable.

RosettaNet characterizes its three focus areas as follows:

- Information Technology (IT) encompasses the creation, production, distribution, purchase and sale of IT components, products, accessories and packaged solutions - in the areas of communication, memory, multimedia, networking, storage and computer hardware, software, systems and peripherals - complemented by a host of supporting and electronic services.
- The Electronic Components (EC) industry is fueled by the growth of Information Technology (IT) and the increasing reliance on electronic components - semiconductors, passive components, electrical and electronic connectors, and interconnect products and systems - in multiple industries, including IT; consumer electronics; automotive and transportation; telecommunications; office automation and data processing; electrical appliances, equipment and power; industrial systems and equipment; medical equipment; entertainment; aerospace and the military.
- A major constituent of the Electronic Components (EC) supply chain, the Semiconductor Manufacturing (SM) industry provides essential solutions, including microprocessors, integrated circuits, chipsets, memory, logic devices, and related products, to the EC and Information Technology (IT) industries as well as those they serve.

RosettaNet collaborators come from all three focus areas (IT, electronic components and semiconductor manufacturing). The number of members today exceeds 500 companies.


IX Bibliography References
IX.1 Bibliography Interoperability Framework

[ID03] IDEAS D1.1 Part C on SoA on Architectures & Platforms. IDEAS Roadmap project - Deliverable D1.1 Part C, Architectures & Platforms State of the Art (June 2003). Available at http://www.ideas-roadmap.net

[CEN02] CEN/ISS report on Architectures, used in IDEAS D1.1

[EC93] ECMA: Reference Model for Frameworks of Software Engineering Environments, 3rd ed. Technical Report NIST 500-211, ECMA TR/55, 1993

IX.2 Bibliography Model Driven Development


[AEH+99] Andries, M.; Engels, G.; Habel, A.; Hoffmann, B.; Kreowski, H.-J.; Kuske, S.; Plump, D.; Schuerr, A.; Taentzer, G.: Graph Transformation for Specification and Programming. In: Science of Computer Programming 34 (1999), August, Nr. 1, p. 1-54

[BKPPT01] Bottoni, P.; Koch, M.; Parisi-Presicce, F.; Taentzer, G.: A Visualization of OCL Using Collaborations. In: Gogolla, M.; Kobryn, C. (eds.): UML 2001 - The Unified Modeling Language. Modeling Languages, Concepts, and Tools. 4th International Conference, Toronto, Canada. Heidelberg: Springer Verlag, October 2001 (LNCS 2185). ISBN 3540426671

[BKPPT02] Bottoni, P.; Koch, M.; Parisi-Presicce, F.; Taentzer, G.: Working on OCL with Graph Transformation. In: APPLIGRAPH Workshop on Applied Graph Transformation (AGT 2002), Grenoble, France, 2002, p. 1-10

[Budinsky] Budinsky, F.; Steinberg, D.; Merks, E.; Ellersick, R.; Grose, T.J.: Eclipse Modeling Framework: A Developer's Guide, Chapter 2. Available at: http://www.awprofessional.com/content/images/0131425420/samplechapter/budinskych02.pdf

[C4ISR] C4ISR Architecture Working Group (1997): C4ISR Architecture Framework Version 2.0. US Department of Defense, Dec. 18, 1997. http://www.c3i.osd.mil/org/cio/i3/AWG_Digital_Library/pdfdocs/fw.pdf

[CH03] Czarnecki, K.; Helsen, S.: Classification of Model Transformation Approaches. The Third OOPSLA Workshop on Domain-Specific Modeling, OOPSLA 2003, Anaheim, CA, USA, October 2003. http://www.softmetaware.com/oopsla2003/czarnecki.pdf

[Charlesworth] Charlesworth, I.: Application Development Environments Technology Audit: Interactive Objects ArcStyler. ArcStyler whitepaper. Available at: http://www.iosoftware.com/as_support/brochures/ArcStyler_TECH_RPS_1098.pdf

[CHM+02] Csertán, G.; Huszerl, G.; Majzik, I.; Pap, Z.; Pataricza, A.; Varró, D.: VIATRA - Visual Automated Transformations for Formal Verification of UML Models. In: International Conference on Automated Software Engineering (ASE 2002), Edinburgh, Scotland, IEEE Computer Society, September 2002. ISBN 0769517366, p. 267-270

[Cook jan04] Cook, S.: Domain Specific Modelling and MDA. MDA Journal, January 2004

[DHO01] Demuth, B.; Hussmann, H.; Obermaier, S.: Experiments With XMI Based Transformations of Software Models. In: Whittle, J. (ed.): WTUML: Workshop on Transformations in UML, ETAPS 2001 Satellite Event, Genova, Italy, April 2001

[Douglass] Douglass, B. P.: Model Driven Architecture and Rhapsody. I-Logix Rhapsody whitepaper. Available at: http://www.ilogix.com/whitepaper_PDFs/whitepapers.cfm?pdffile=MDAandRhapsody.pdf

Eclipse Organization: Eclipse.org. Available at: http://www.eclipse.org/

Ehrig, H.; Engels, G.; Kreowski, H.-J.; Rozenberg, G.: Handbook of Graph Grammars and Computing by Graph Transformation: Applications, Languages and Tools. World Scientific Pub Co, October 1999. ISBN 9810240201

Eclipse Organization: Generating an EMF Model. EMF Document. Available at: http://download.eclipse.org/tools/emf/scripts/docs.php?doc=tutorials/clibmod/clibmod.html

Eclipse Organization: The EMF.Edit Framework Overview. EMF Document. Available at: http://download.eclipse.org/tools/emf/scripts/docs.php?doc=references/overview/EMF.Edit.html

Eclipse Organization: The Eclipse Modeling Framework (EMF) Overview. EMF Document. Available at: http://download.eclipse.org/tools/emf/scripts/docs.php?doc=references/overview/EMF.html

King's College London, University of York: An Evaluation of Compuware OptimalJ Professional Edition as an MDA Tool. Compuware Corporation OptimalJ whitepaper. Available at: http://www.compuware.com/dl/kings_mda.pdf

Fischer, T.; Niere, J.; Torunski, L.; Zündorf, A.: Story Diagrams: A New Graph Rewrite Language Based on the Unified Modeling Language and Java. In: Ehrig, H.; Engels, G.; Kreowski, H.-J.; Rozenberg, G. (eds.): Theory and Application of Graph Transformations, 6th International Workshop (TAGT'98), Paderborn, Germany. Heidelberg: Springer Verlag, November 1998 (LNCS 1764). ISBN 3540672036, p. 296-309

Frankel, D.: Model Driven Architecture: Applying MDA to Enterprise Computing. Wiley, January 2003. ISBN 0471319201

Gardner, T.
Model-Driven Metadata Integration using MOF 2.0 and Eclipse. OMG MDA Implementers Workshop. Available at: http://www.omg.org/news/meetings/workshops/MDA_2003-2_Manual/13_Gardner.pdf IFIP-IFAC Task Force (1999), GERAM: Generalised Enterprise Reference Architecture and Methodology, Version 1.6.3, March 1999 (Published also as Annex to ISO WD15704). http://www.fe.up.pt/~jjpf/isf2000/v1_6_3.html Gery, E., Harel, D., and Palachi, E. Rhapsody: A Complete Life-Cycle Model-Based Development System. ILogix Rhasody whitepaper. Available at: http://www.ilogix.com/whitepaper_PDFs/Rhapsody-ifm.pdf Garey, M. R. ; Johnson, D. S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman Company, November 1990. ISBN 0716710455 J. Greenfield and K. Short. Software Factories: Assembling Applications with Patterns, Frameworks, Models and Tools. John Wiley and Sons. 2004. Heckel, R. ; Engels, G.: Graph Transformation and Visual Modeling Techniques. In: BEATCS 71 (2000), June, p. 186203 Ho, W. M. ; Jzquel, J.-M. ; Guennec, A. ; Pennaneach, F.: UMLAUT: An Extendible UML Transformation Framework. In: 14th International Conference on Automated Software Engineering (ASE 99), Florida, US, IEEE Computer Society, 1999. ISBN 0769504159, p. 275278 Interactive Objects ArcStyler Product Tour. ArcStyler Documentation. Available at: http://www.iosoftware.com/products/arcstyler_product_tour.jsp Interactive Objects Extensible MDA-Cartridges. ArcStyler Documentation. Available at: http://www.io-

[Eclipse] [EEKR99]

[EMF Model]

[EMF.Edit]

[EMF]

[Evaluation OptimalJ]

[FNTZ98]

[Frankel]

[Gardener dec03]

[GERAM]

[Gery, Harel, Palachi] [GJ90]

[Greefield, Short]

[HE00]

[HJGP99]

[IO-ArcStyler]

[IO-Cartridges]

302/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

software.com/products/arcstyler_cartridges.jsp [IO-Software] Interactive Objects ArcStyler: Leading the Way in Model Driven Architecture. ArcStyler Documentation. Available at: http://www.io-software.com/products/arcstyler_overview.jsp Open Distributed Processing - Reference Model - Part 1: Overview, ITU Recommendation X.901 | ISO/IEC 10746-1, International Telecommunication Union, 1998. Open Distributed Processing - Reference Model - Part 2: Foundations, ITU Recommendation X.902 | ISO/IEC 10746-2, International Telecommunication Union, 1996. Open Distributed Processing - Reference Model - Part 3: Architecture, ITU Recommendation X.903 | ISO/IEC 10746-3, International Telecommunication Union, 1996. Open Distributed Processing - Reference Model - Part 4: Architectural Semantics, ITU Recommendation X.904 | ISO/IEC 10746-4, International Telecommunication Union, 1998. Jones, S. P.: Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreignlanguage calls in Haskell. In: Engineering theories of software construction (2001), p. 4796. ISBN 1586 031724 Kovse, J. ; Harder, T.: Generic XMI-Based UML Model Transformations. In: Bruel, J.-M. (Hrsg.) ; Bellahsne, Z. (Hrsg.): Int. Conf. on Object-Oriented Information Systems (OOIS02), Montpellier, France. Heidelberg : Springer Verlag, September 2002 (LNCS 2426), p. 192198 Anneke Kleppe, Jos Warmer, Wim Bast. MDA Explained: The Model Driven Architecture--Practice and Promise. Addison-Wesley Professional; 1st edition. April 2003. ISBN: 032119442X. Kiesner, C. ; Taentzer, G. ; Winkelmann, J.: A Visual Notation of the Object Constraint Language / TU-Berlin. 2002 ( No. 2002/23). technical report Lwe, M. ; Beyer, M.: AGG - An Implementation of Algebraic Graph Rewriting. In: Rewriting Techniques and Applications, 5th International Conference (RTA-93), Montreal, Canada. Heidelberg : Springer Verlag, June 1993 (LNCS 690). ISBN 3540568689, p. 451456 The MDA Journal. 
http://www.davidfrankelconsulting.com/MDAJournal.htm. G. Booch, A. Brown, S. Iyengar, J. Rumbaugh, B. Selic, "An MDA Manifesto," In D. Frankel (ed.), MDA Journal, Business Process Trends, May 2004 Stephen J. Mellor, Kendall Scott, Axel Uhl, Dirk Weise. MDA Distilled (Addison-Wesley Object Technology Series). Addison-Wesley Professional. March 2004. ISBN: 0201788918. Modelbased.Net MDA Tools. Available at: http://www.modelbased.net/mda_tools.html Objects, Interactive. ArcStyler. http://www.io-software.com/. November 2003 OMG, MDA Guide Version 1.0.1, Joaquin Miller and Jishnu Mukerji. (eds.), 2003, www.omg.org. OMG: UML 2.0 Infrastructure Specification / Object Management Group.2003. OMG formal document 0309-15 OMG: XML Metadata Interchange (XMI) Specification / Object Management Group. 2003. OMG formal document 03-05-02 CompuwareCorporation. Compuware OptimalJ standardizeds on Object Management Groups Model Driven Architecture. CompuwareCorportation Documentation. Available at: http://www.compuware.com/products/optimalj/1812_ENG_HTML.htm CompuwareCorporation. OptimalJ Product Preview. CompuwareCorportation Documentation. Available at: http://www.compuware.com/products/optimalj/1821_eng_html.htm

[ISO/IEC 107461] [ISO/IEC 107462] [ISO/IEC 107463] [ISO/IEC 107464] [Jon01]

[KH02]

[Kleppe, Warmer, Bast] [KTW02]

[LB93]

[MDA Journal] [MDA Manifesto]

[Mellor etal]

[Modelbased] [Obj03] [OMG MDA] [OMG03d]

[OMG03i]

[OptimalJ MDA]

[OptimalJ Product Preview]

303/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[PBG01]

Peltier, M. ; Bezivin, J. ; Guillaume, G. ; Whittle, J. (Hrsg.). MTRANS, a general framework based on XSLT, for model transformations. WTUML: NoE INTEROP WP9 Intelligent Networks and Management of Distributed Systems (iVS) Technical University of Berlin 6/7 Workshop on Transformations in UML, ETAPS 2001 Satellite Event, Genova,Italy. April 2001 Python. The Python Homepage. http://www.python.org. November 2003 Peltier, M. ; Ziserman, F. ; Bezivin, J. On levels of model transformation. XML Europe 2000, Paris, France I-Logix. Rhapsody: Model-Driven Development with UML 2.0 and Beyond. I-Logix Rhapsody Documentation. Available at: http://www.ilogix.com/rhapsody/rhapsody.cfm Sarkar, S.: Model-Driven Programming using XSLT. In: XML Journal 3 (2002), August, Nr. 8, p. 4251 Sunye, G. ; Pennaneach, F. ; Ho, W. M. ; Guennec, A. ; Jzquel, J.-M.:Using UML Action Semantics for executable modeling and beyond. In: Dittrich, K. R. (Hrsg.) ; Geppert, A. (Hrsg.) ; Norrie, M. C. (Hrsg.): Advanced Information Systems Engineering (CAiSE 2001), Interlaken,Switzerland. Heidelberg : Springer Verlag, June 2001 (LNCS 2068). ISBN 3540422153, p. 433447 Taentzer, G.: AGG: A Tool Environment for Algebraic Graph Transformation. In: Nagl, M. (Hrsg.) ; Schrr, A. (Hrsg.) ; Mnch, M. (Hrsg.): Applications of Graph Transformations with Industrial Relevance, International Workshop,AGTIVE99, Kerkrade, The Netherlands. Heidelberg : Springer Verlag, September 1999 (LNCS 1779). ISBN 3540676589, p. 481488 The Open Group (2002), The Open Group Architectural Framework (TOGAF) Version 8 Enterprise Edition. The Open Group, Reading, UK. http://www.opengroup.org/togaf/. Triskel Project, IRISA. UMLAUT: Unified Modeling Language All pUrposes Transformer. URL http://www.irisa.fr/UMLAUT. 2001 Oldevik, J. UML Model Transformation Tool: Overview and user guide documentation. UMT documentation. Available at: http://umt-qvt.sourceforge.net/docs/UMT_documentation_v08.pdf Modelbased.Net UMT-QVT Homepage. UMT documentation. 
Available at: http://umt-qvt.sourceforge.net/ Varr, D. ; Gyapay, S. ; Pataricza, A. ; Whittle, J. (Hrsg.). Automatic Transformation of UML Models for System Verification. WTUML: Workshop on Transformations in UML, ETAPS 2001 Satellite Event, Genova, Italy. April 2001 Varr, D. ; Varr, G. ; Pataricza, A.: Designing the Automatic Transformation of Visual Languages. In: Science of Computer Programming 44(2002), August, Nr. 2, p. 205227 JISC National Mirror Service. What is EMF? Availabe at: http://text.mirror.ac.uk/mirror/download.eclipse.org/tools/emf/scripts/home.php Zachman, J.A. (1987), A Framework for Information Systems Architecture, IBM Systems Journal, 26(3):276 292. Sowa J.F., Zachman J.A. (1992), Extending and Formalizing the Framework for Information Systems Architecture, IBM Systems Journal, 31(3): 590616. Zee, H. van der, Laagland, P. and Hafkenscheid, B. (eds.) (2000), Architectuur als Management Instrument Beheersing en Besturing van Complexiteit in het Netwerktijdperk. (in Dutch).

[Pyt03] [PZB00] [Rhapsody]

[Sar02] [SPH+01]

[Tae99]

[TOGAF]

[Tri01]

[UMT]

[UMT-QVT] [VGP01]

[VVP02]

[What is EMF]

[Zachman 87]

[Zachman 92]

[Zee, Laagland, Hafkenscheid]

IX.3 Bibliography Service Oriented Computing

[Aberer 2001] Aberer, K., P-Grid: A self-organizing access structure for P2P information systems, Proc. of the 6th International Conference on Cooperative Information Systems (CoopIS 2001), Trento, Italy, 2001
[ACKM 2003] Alonso, G., Casati, F., Kuno, H., Machiraju, V., Web Services, Springer Verlag, 2003
[Agile] Agile Technologies, http://www.agiletech.com/
[AHM 2003] Arora, G., Hanneghan, M., Merabti, M., CasPaCE: A framework for cascading payments in peer-to-peer digital content exchange, Proc. of 4th Annual Postgraduate Symposium on the Convergence of Telecommunications, Networking and Broadcasting (PGNet2003), Liverpool, UK, pp. 110-116, June 2003
[ATHENA WKD 5.1 2004] ATHENA project: Perspectives in Service-Oriented Architectures and their Application in Environments that Require Solutions to be Planned and Customisable, Working Document WKD 5.1, March 2004
[Avaki] Avaki company, http://www.avaki.com
[Bar 2002] Barkai, D., Internet Distributed Computing: The Intersection of Web Services, Peer-to-Peer and Grid Computing, Proc. of the Second International Conference on Peer-to-Peer Computing, September 2002, http://www.ida.liu.se/conferences/p2p/p2p2002/keynotes/DavidBarkai.pdf
[JIMD] Behr, A., Defining the SOA, April 2004, http://www.sdtimes.com/news/089/special1.htm
[PSOA] Berlind, D., Plotting a course of SOA, March 2003, http://www.zdnet.com.au/news/business/0,39023166,20273261,00.htm, accessed April 2004
[BGG 2003] Brooke, J., Garwood, K., Goble, C. (ESNW, University of Manchester, UK), Interoperability of Grid Resource Descriptions: A Semantic Approach, Proc. of the 1st GGF Semantic Grid Workshop at the Ninth Global Grid Forum, Chicago, USA, Oct. 2003
[BM 2003] Bhusate, A., De Meer, H., Web Services Over Infrastructure-less Networks, Proc. of the London Communications Symposium, 2003
[BPMI 2002] BPMI, BPML: Business Process Modeling Language 1.0 (2002), http://bpmi.org/bpml-spec.esp
[CDKNMW 2002] Curbera, F., Duftler, M., Khalaf, R., Nagy, W., Mukhi, N., Weerawarana, S., Unraveling the Web Services Web, IEEE Internet Computing 6(2):86-93, March-April 2002
[CFS] Cooperative File System (CFS), http://www.pdos.lcs.mit.edu/papers/cfs:sosp01/cfs_sosp.pdf
[CGrids Lab] Community Grids Lab, http://www.communitygrids.iu.edu/
[CKMTW 2003] Curbera, F., Khalaf, R., Mukhi, N., Tai, S., Weerawarana, S., The Next Step in Web Services, Communications of the ACM, vol. 46, no. 10, October 2003
[COM+] Microsoft Corporation, COM+ Technologies, http://www.microsoft.com/com/tech/complus.asp
[ebXML 2003] OASIS and UN/CEFACT, Electronic Business XML (ebXML), http://www.ebxml.org
[Edutella] Edutella project, http://edutella.jxta.org

[EJB2.1 2003] Sun Microsystems, Enterprise JavaBeans Specification, ver. 2.1, November 2003
[EK 2001] Eibach, W., Kuebler, D., Metering and accounting for Web Services: A dynamic e-business solution, http://www-106.ibm.com/developerworks/webservices/library/ws-maws/, July 2001
[Eric 2004] Ericson, M., Interoperability: the Key to Web Service Quality, published 27.05.2004, http://www.mywebservices.org/index.php/article/articleview/1468/1/50/
[Evans 2003] Evans, C., Web Services Reliability (WS-Reliability) Version 1.0, http://www.sonicsoftware.com/docs/ws_reliability.pdf
[FGKR 2002] Florescu, D., Grunhagen, A., Kossmann, D., Rost, S., XL: Platform for Web services, Proc. of SIGMOD Conference, p. 625, Madison, Wis., USA, 2002
[FK 1999] Foster, I., Kesselman, C., The Globus Toolkit, in Foster, I. and Kesselman, C. (eds.), The Grid: Blueprint for a New Computing Infrastructure, Chap. 11, pp. 259-278, Morgan Kaufmann, San Francisco, CA, 1999
[FKNT 2002] Foster, I., Kesselman, C., Nick, J., Tuecke, S., The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration, Globus Project, 2002, http://www.globus.org/research/papers/ogsa.pdf
[FKT 2001] Foster, I., Kesselman, C., Tuecke, S., The Anatomy of the Grid: Enabling Scalable Virtual Organizations, Supercomputer Applications, 2001
[Gao 2004] Gao, R., Project Venezia-Gondola (A Framework for P-Commerce), P2P Journal, July/August 2004 issue
[GARN] Gartner Inc., http://www.gartner.com, accessed April 2004
[GART] SODA and SOA: Complementary Concepts, http://www.ngi.nl/docs/limburg/WebServices/sld014.htm, accessed April 2004
[GGF] Global Grid Forum, http://www.gridforum.org/
[GHMS 2003] Gerke, J., Hausheer, D., Mischke, J., Stiller, B., An Architecture for a Service Oriented Peer-to-Peer System (SOPPS), Praxis der Informationsverarbeitung und Kommunikation (PIK) 2/03, pp. 90-95, April 2003, http://www.mmapps.org/papers/sopps.pdf
[GKS 2002] Gokhale, A., Kumar, B., Sahuguet, A., Reinventing the Wheel? CORBA vs. Web Services, Proc. of WWW2002, May 2002
[Gnutella] Gnutella, www.gnutella.com
[GPA WG] GGF Grid Protocol Architecture Working Group, http://www-itg.lbl.gov/GPA/
[GRIP 2002] Grid Interoperability Project, http://www.grid-interoperability.org
[Groove] Groove, http://www.groove.net
[GWD-I] Open Grid Services Architecture, http://forge.gridforum.org/projects/ogsa-wg
[HK 2002] Hondo, M., Kaler, C., Web Services Policy Framework (WS-Policy), http://www-106.ibm.com/developerworks/library/ws-polfram

[HPWSP] HP Web Services Platform, http://archive.devx.com/javasr/whitepapers/hp/default.asp
[IBM UDDI] IBM UDDI Registry, http://www-3.ibm.com/services/uddi/
[Ideas 2001] Ideas IST-2001-37368, Deliverables D3.4, D3.5, D3.6: A Gap Analysis - Required Activities in Research, Technology and Standardisation to close the RTS Gap
[ISO1984] ISO (1984), Open Systems Interconnection Basic Reference Model, International Standard ISO 7498
[ISO-IEC 1996] ISO-IEC Guide 2:1996(E/F/R), ISO/IEC, Switzerland, 1996
[Jabber 2002] Jabber Technology Overview, Jabber Software Foundation, 2002, http://www.jabber.org/
[JMVS 2004] Jardim-Goncalves, R., Malo, P., Vieira, H., Steiger-Garcao, A., Platform for enhanced management of resources in collaborative networked industrial environments, Proc. of the CE2004 Conference
[Jos 2002] Joseph, S., NeuroGrid: Semantically Routing Queries in Peer-to-Peer Networks, Proc. of the International Workshop on Peer-to-Peer Computing, 2002
[JXTA] Project JXTA, http://www.jxta.org/
[JXTAv2.0 2003] Project JXTA v2.0: Java Programmer's Guide, Sun Microsystems, May 2003, http://www.jxta.org/docs/JxtaProgGuide_v2.pdf
[Kaler 2002] Kaler, C., Web Services Security (WS-Security), Version 1.0, http://www-106.ibm.com/developerworks/library/ws-secure
[Lanworthy 2003] Langworthy, D., Web Services Reliable Messaging Protocol (WS-ReliableMessaging), http://xml.coverpages.org/ws-reliablemessaging20030313.pdf
[LLT 2002] Laoveerakul, S., Laongwaree, K., Tongsima, S., Decentralized UDDI based on P2P Protocol, P2P/NSCD session, APAN Shanghai Meetings, 2002
[Microsoft UDDI] Microsoft UDDI Registry, http://uddi.microsoft.com/
[MBI 2003a] Microsoft, BEA, IBM, Web Services Coordination (WS-Coordination)
[MBI 2003b] Microsoft, BEA and IBM (2003), Web Services AtomicTransaction (WS-AtomicTransaction)
[MBI 2004] Microsoft, BEA and IBM (2004), Web Services BusinessActivity (WS-BusinessActivity)
[MHSRB 2001] Minar, N., Hedlund, M., Shirky, C., O'Reilly, T., Bricklin, D., Anderson, D., Miller, J., Langley, A., Kan, G., Brown, A., Waldman, M., Cranor, L., Rubin, A., Dingledine, R., Freedman, M., Molnar, D., Dornfest, R., Brickley, D., Hong, T., Lethin, R., Udell, J., Asthagiri, N., Tuvell, W., Wiley, B., Peer-to-Peer: Harnessing the Benefits of a Disruptive Technology, edited by Andy Oram, March 2001
[MKLN 2003] Milojicic, D., Kalogeraki, V., Lukose, R., Nagaraja, K., Pruyne, J., Richard, B., Rollins, S., Xu, Z., Peer-to-Peer Computing, HP Laboratories Palo Alto, HPL-2002-57 (R.1), July 2003

[MozoNation 2001] Mojo Nation, http://mojonation.net
[MRK 2003] Milenkovic, M., Robinson, S., Knauerhase, R., Barkai, D., Garg, S., Tewari, V., Anderson, T., Bowman, M. (Intel), Toward Internet Distributed Computing, IEEE Computer, May 2003 (Vol. 36, No. 5), pp. 38-46
[NextPage] NextPage company, http://www.nextpage.com
[OASIS] Homepage for OASIS, available at http://www.oasis-open.org
[OGSA] Open Grid Services Architecture, http://www.globus.org/ogsa/, http://www.gridforum.org/Meetings/GGF11/Documents/draft-ggf-ogsa-spec.pdf
[OGSI] http://www.gridforum.org/Meetings/ggf7/drafts/draft-ggf-ogsi-gridservice-23_2003-0217.pdf
[OMG 2001] Object Management Group, The Common Object Request Broker: Architecture and Specification, 2.5 edition, September 2001
[OWL 2004] OWL Web Ontology Language Overview, W3C Recommendation, 10 February 2004, http://www.w3.org/TR/owl-features/
[P2Pwg] Peer-to-Peer Working Group, http://p2p.internet2.edu/
[Peltz 2003] Peltz, C., Web Service Orchestration, HP white paper, 2003, http://devresource.hp.com/drc/technical_white_papers/WSOrch/WSOrchestration.pdf
[PG 2003] Papazoglou, M., Georgakopoulos, D., Service-Oriented Computing, Communications of the ACM, vol. 46, no. 10, October 2003
[pyGlobus] Python Globus (pyGlobus), http://dsd.lbl.gov/gtg/projects/pyGlobus/, last access: July 3rd, 2004
[pyGridWar] Python OGSI Client and Implementation (pyGridWare) Official Website, http://www-itg.lbl.gov/gtg/projects/pyGridWare/index.html, last access: July 3rd, 2004
[RD 2001] Rowstron, A., Druschel, P., Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems, Proc. of the 18th IFIP/ACM Intl. Conf. on Distributed Systems Platforms, 2001
[RFHKS 2001] Ratnasamy, S., Francis, P., Handley, M., Karp, R., Shenker, S., A scalable content-addressable network, Proc. of ACM SIGCOMM, 2001
[RHH 2001] Roman, G., Huang, Q., Hazemi, A., Consistent Group Membership in Ad Hoc Networks, Proc. of ICSE 2001, Toronto, Canada, 2001
[Rohrs 2002] Rohrs, C., Query Routing for the Gnutella Network, May 16, 2002, http://www.limewire.com/developer/query_routing/keyword%20routing.htm
[RosettaNet] Homepage for RosettaNet, available at http://www.rosettanet.org
[RS 1996] Rivest, R., Shamir, A., PayWord and MicroMint - two simple micropayment schemes, Proc. of 1996 International Workshop on Security Protocols, pp. 69-87, 1996

[RUP] Rational Unified Process (RUP) Resources, http://www-306.ibm.com/software/awdtools/rup/
[SamSad 2002] Samtani, G., Sadhwani, D., Web Services and P2P Computing, Tect Ltd, 2002, http://www.webservicesarchitect.com/content/articles/samtani05.asp
[SBLW 2002] Snelling, D., van den Berghe, S., von Laszewski, G., Wieder, P., Breuer, D., MacLaren, J., Nicole, D., Hoppe, H. C. (2002), A Unicore Globus Interoperability Layer, Computing and Informatics 21:399-411
[Shamir 1979] Shamir, A., How to share a secret, Communications of the ACM, vol. 22, no. 11, pp. 612-613, November 1979
[Shirky 2004] Shirky, C., Interoperability, Not Standards, published on OpenP2P.com, http://www.openp2p.com/pub/a/p2p/2001/03/15/clay_interop.html, last access: 29 June 2004
[SMKKB 2001] Stoica, I., Morris, R., Karger, D., Kaashoek, M., Balakrishnan, H., Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications, Proc. of the 2001 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, 2001
[SOAP] SOAP, http://www.w3.org/TR/SOAP
[SODIUM 2004] SODIUM project, http://www.atc.gr/sodium
[SdPe 2003] Söderström, E. and Petterson, A. (2003), Adoption of B2B standards, in Jardim-Goncalves et al. (eds.), Concurrent Engineering: Enhanced Interoperable Systems, A.A. Balkema Publishers, Lisse, Netherlands, 2003, pp. 343-350
[SS 2002] Samtani, G., Sadhwani, D., Web Services and Peer-to-Peer Computing, Companion Technologies, May 2002
[SSDN 2002] Schlosser, M., Sintek, M., Decker, S., Nejdl, W., HyperCuP - Hypercubes, Ontologies and Efficient Search on P2P Networks, International Workshop on Agents and Peer-to-Peer Computing, Bologna, Italy, July 2002
[Szyp 2003] Szyperski, C., Component Technology - What, Where and How?, Proc. of the 25th International Conference on Software Engineering, May 2003
[TalTru 2003] Talia, D., Trunfio, P. (University of Calabria), Toward a Synergy Between P2P and Grids, IEEE Internet Computing, July/August 2003 issue, http://dsonline.computer.org/0307/d/wp4p2p.htm
[TCFFGK 2002] Tuecke, S., Czajkowski, K., Foster, I., Frey, J., Graham, S., Kesselman, C., Grid Service Specification, http://www.gridforum.org/ogsiwg/drafts/GS_Spec_draft02_2002-06-13.pdf, last access: August 16th, 2004
[Thatte 2003] Thatte, S., Business Process Execution Language for Web Services version 1.1, http://dev2dev.bea.com/techtrack/BPEL4WS.jsp

[TP 2002] Tsalgatidou, A., Pilioura, T., An Overview of Standards and Related Technology in Web Services, International Journal of Distributed and Parallel Databases, Special Issue on E-Services, 12(2):135-162, Sep 2002
[TSN 2003] Thaden, U., Siberski, W., Nejdl, W., A Semantic Web based Peer-to-Peer Service Registry Network, Proc. of the 1st Workshop on Semantics in Peer-to-Peer and Grid Computing (http://www.semanticgrid.org/GGF/) at the Twelfth International World Wide Web Conference, Hungary, May 2003
[UDDI] http://www.uddi.org/
[VR 2004] Vawter, C., Roman, E., J2EE vs. Microsoft .NET: A comparison of building XML-based Web services, http://www.theserverside.com/articles/article.tss?l=J2EE-vs-DOTNET, last access: July 3rd, 2004
[w3c 2004] Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., Orchard, D., Web Services Architecture, W3C Working Group Note, Feb 2004, http://www.w3c.org/ws-arch/
[W3C] World Wide Web Consortium, http://www.w3.org/
[WebCast 2003] Support WebCast: Microsoft .NET: Introduction to Web Services, published under Q324881, 2003, http://support.microsoft.com/default.aspx?scid=kb;en-us;324881&gssnb=1
[Websphere51] IBM Corp., IBM WebSphere SDK for Web Services (WSDK) Version 5.1 Overview, available at: http://www-106.ibm.com/developerworks/webservices/wsdk/, last access: ?
[WR 2002] Wieder, P., Rambadt, M., UNICORE - Globus: Interoperability of Grid Infrastructures, Proc. of Cray User Group Summit 2002, May 2002, Manchester
[WSA 2004] Web Services Architecture, W3C Working Group Note, 11 February 2004
[WS-Addressing] WS-Addressing: an XML serialization and standard SOAP binding for representing network-wide pointers to services, http://www.ibm.com/developerworks/webservices/library/ws-add/
[WSAO] Web Services architecture overview: The next stage of evolution for e-business, published 01.09.2000, http://www-106.ibm.com/developerworks/library/w-ovr/?dwzone=ws
[WSCI 2002] BEA Systems, Intalio, SAP, Sun Microsystems, Web Service Choreography Interface (WSCI) 1.0, http://www.w3.org/TR/wsci
[WSDL 2001] Web Services Description Language (WSDL) 1.1, W3C Note, http://www.w3.org/TR/wsdl
[WS-I] Web Services Interoperability Organization, http://www.ws-i.org/
[WS-Notification] Whitepaper describing the concepts, patterns and terminology used in the WS-Notification family of specifications (WS-BaseNotification, WS-Topics and WS-BrokeredNotification), http://www-106.ibm.com/developerworks/library/ws-pubsub/WSPubSub.pdf

[WS-ResourceProperties] Specification describing how elements of the publicly visible properties of a resource can be described, retrieved, changed and deleted, http://www-106.ibm.com/developerworks/library/ws-resource/ws-resourceproperties.pdf
[WSRF] The WS-Resource Framework, http://www.globus.org/wsrf/
[XL] XML Language, http://xl.informatik.uni-heidelberg.de/
[XMethods] XMethods, http://www.xmethods.com
[XP] eXtreme Programming, http://www.extremeprogramming.org/
[ZBDKS 2003] Zeng, L., Benatallah, B., Dumas, M., Kalagnanam, J., Sheng, Q., Quality Driven Web Services Composition, Proc. of WWW2003, Budapest, Hungary, May 2003

IX.4 Bibliography Component- and Message-oriented Computing

[Ap03] Apperly, H. et al.: Service- and Component-based Development: Using the Select Perspective and UML. Addison-Wesley, Reading, MA, 2003
[BJM04] Boertien, N., Steen, M.W.A., Jonkers, H.: Evaluation of Component-Based Development Methods, in Krogstie, J., Halpin, T., Siau, K. (eds.), Information Modelling Methods and Methodologies, Idea Group Publishing, Hershey, PA, 2004
[BIZ00] BizTalk Framework 2.0: Document and Message Specification. Microsoft, 2000. URL: http://www.microsoft.com/biztalk/techinfo/framwork20.asp
[DW99] D'Souza, D.F.; Wills, A.C.: Objects, Components and Frameworks with UML: The Catalysis Approach. Addison-Wesley, Reading, MA, 1999
[Kr99] Kruchten, P.: Rational Unified Process: An Introduction. Addison-Wesley, Reading, MA, 1999
[BS97] Bellin, D.; Simone, S.S.: The CRC Card Book. Addison-Wesley, Reading, MA, 1997
[HV00] Hubbers, J.W.; Verhoef, D.: Workshop: component-based development. 2000
[DMKL 2003] Milojicic, D.S., Kalogeraki, V., Lukose, R., Nagaraja, K., Pruyne, J., Richard, B., Rollins, S., Xu, Z.: Peer-to-Peer Computing, HP Laboratories Palo Alto, HPL-2002-57 (R.1), July 3rd, 2003
[CS 2004] Shirky, C.: Interoperability, Not Standards, published on OpenP2P.com, http://www.openp2p.com/pub/a/p2p/2001/03/15/clay_interop.html, accessed 29 June 2004
[FKT 2001] Foster, I., Kesselman, C., Tuecke, S.: The Anatomy of the Grid: Enabling Scalable Virtual Organizations, Supercomputer Applications, 2001
[SS 2002] Samtani, G., Sadhwani, D.: Web Services and Peer-to-Peer Computing, Companion Technologies, May 2002


[GHMS 2003] Gerke, J., Hausheer, D., Mischke, J., Stiller, B.: An Architecture for a Service Oriented Peer-to-Peer System (SOPPS), in: Praxis der Informationsverarbeitung und Kommunikation (PIK) 2/03, pp. 90-95, April 2003
[A 2001] Aberer, K.: P-Grid: A self-organizing access structure for P2P information systems, Proc. of the Sixth International Conference on Cooperative Information Systems (CoopIS 2001), Trento, Italy, 2001
[SMKKB 2001] Stoica, I., Morris, R., Karger, D., Kaashoek, M.F., Balakrishnan, H.: Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications, Proc. of the 2001 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, 2001
[RD 2001] Rowstron, A., Druschel, P.: Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems, Proc. of the 18th IFIP/ACM Intl. Conf. on Distributed Systems Platforms, 2001
[RFHKS 2001] Ratnasamy, S., Francis, P., Handley, M., Karp, R., Shenker, S.: A scalable content-addressable network, Proc. of ACM SIGCOMM, 2001
[MHSRB 2001] Minar, N., Hedlund, M., Shirky, C., O'Reilly, T., Bricklin, D., Anderson, D., Miller, J., Langley, A., Kan, G., Brown, A., Waldman, M., Cranor, L., Rubin, A., Dingledine, R., Freedman, M., Molnar, D., Dornfest, R., Brickley, D., Hong, T., Lethin, R., Udell, J., Asthagiri, N., Tuvell, W., Wiley, B.: Peer-to-Peer: Harnessing the Benefits of a Disruptive Technology, edited by Andy Oram, March 2001
[S 1979] Shamir, A.: How to share a secret, Communications of the ACM, vol. 22, no. 11, pp. 612-613, Nov. 1979
[RHH 2001] Roman, G., Huang, Q., Hazemi, A.: Consistent Group Membership in Ad Hoc Networks, Proc. of ICSE 2001, Toronto, Canada, 2001
[AHM 2003] Arora, G., Hanneghan, M., Merabti, M.: CasPaCE: A framework for cascading payments in peer-to-peer digital content exchange, Proc. of 4th Annual Postgraduate Symposium on the Convergence of Telecommunications, Networking and Broadcasting (PGNet2003), Liverpool, UK, pp. 110-116, June 2003
[RS 1996] Rivest, R., Shamir, A.: PayWord and MicroMint - two simple micropayment schemes, Proc. of 1996 International Workshop on Security Protocols, pp. 69-87, 1996
[MozoNation 2001] Mojo Nation, http://mojonation.net
[OASIS 2004] Homepage for OASIS, available at http://www.oasis-open.org
[W3C 2004] Homepage for W3C, available at http://www.w3c.org
[ISO-IEC 1996] ISO-IEC Guide 2:1996(E/F/R), ISO/IEC, Switzerland, 1996
[RosettaNet 2004] Homepage for RosettaNet, available at http://www.rosettanet.org


IX.5 Bibliography Agent-oriented Computing [1] [2] Fipa communicative act library specification., Document number SC00037J. L. Gasser A. Bond, Readings in distributed artificial intelligence, Morgan Kaufmann, 1988.

[3] V. Benjamins A. Gomez-Perez, Overview of knowledge sharing and reuse components: ontologies and problem-solving methods, Proc. IJCAI-99 Workshop on Ontologies and Problemsolving Methods: Lessons Learned, 1999. [4] Grigoris Antoniou and Frank van Harmelen, Web ontology language: Owl, Handbook on Ontologies in Information Systems (S. Staab and R. Studer, eds.), Springer-Verlag, 2003. [5] S. Kraus B. Grosz, Collaborative plans for complex group actions, Artificial Intelligence 86 (1996), 269358. [6] B. Bauer, J. Muller, and J. Odell, Agent UML: A formalism for specifying multiagent interaction. [7] Giovanni Caire, Wim Coulier, Francisco Garijo, Jorge Gomez, Juan Pavon, Francisco Leal, Paulo Chainho, Paul Kearney, Jamie Stark, Richard Evans, and Philippe Massonet, Agent oriented analysis using Message/UML, 2222 (2002), 119?? [8] Brahim Chaib-draa, Agents et systmes multiagents (IFT 64881A) notes de cours dpartement dinformatique facult des sciences et de gnie, universit de Laval, 1999. [9] [10] H. Clark, Using language, Cambridge Univ Press, 1996. Khanh Hoa Dam and Michael Winikoff, Comparing agent-oriented methodologies.

[11] P. Davidsson, Categories of artificial societies, Engineering Societies in the Agents World II (A. Omicini, P. Petta, and R. Tolksdorf, eds.), vol. LNAI 2203, Springer, 2001.
[12] C. Dellarocas, Contractual agent societies: Negotiated shared context and social control in open multi-agent systems, Proc. of Workshop on Norms and Institutions in Multi-Agent Systems, Autonomous Agents, 2000.
[13] F. Dignum, Agents, markets, institutions and protocols, Agent Mediated Electronic Commerce (F. Dignum and C. Sierra, eds.), vol. LNAI 1991, Springer, 2001, pp. 98-114.
[14] E. A. Emerson and J. Srinivasan, Branching time temporal logic, Proceedings of the School/Workshop on Linear Time, Branching Time and Partial Order in Logics and Models of Concurrency (J. W. de Bakker, W.-P. de Roever, and G. Rozenberg, eds.), Lecture Notes in Computer Science, vol. 354, Springer, June 1988, pp. 123-172.
[15] E. Verharen, A language/action perspective on the design of cooperative information agents, Ph.D. thesis, Tilburg University, 1997.
[16] F. Zambonelli, N. Jennings, A. Omicini, and M. Wooldridge, Agent-oriented software engineering for internet applications, Coordination of Internet Agents: Models, Technologies and Applications (A. Omicini, F. Zambonelli, M. Klusch, and R. Tolksdorf, eds.), Springer, 2001, pp. 326-346.


[17] Jacques Ferber, Les systèmes multi-agents : vers une intelligence collective, InterEditions, Paris, 1995.
[18] Jacques Ferber, Multi-agent Systems: An Introduction to Distributed Artificial Intelligence, InterEditions, Paris, 1999.
[19] G. Papadopoulos and F. Arbab, Coordination models and languages, Advances in Computers, vol. 46, Academic Press, 1998, pp. 329-400.
[20] F. Giunchiglia, J. Mylopoulos, and A. Perini, The Tropos software development methodology: Processes, 2001.

[21] T. Gruber, Towards principles for the design of ontologies used for knowledge sharing, Formal Ontology in Conceptual Analysis and Knowledge Representation (N. Guarino and R. Poli, eds.), Kluwer, 1993.
[22] H. Parunak and J. Odell, Representing social structures in UML, Agent-Oriented Software Engineering II (M. Wooldridge, G. Weiss, and P. Ciancarini, eds.), vol. LNCS 2222, Springer, 2002.
[23] H. Weigand and A. de Moor, Argumentation semantics for communicative action, Proceedings of the 9th International Working Conference on the Language-Action Perspective on Communication Modelling (LAP 2004) (M. Aakhus and M. Lind, eds.), 2004.
[24] H. Weigand, S. Hoppenbrouwers, and A. de Moor, The context of conversations - texts and communities, Proc. LAP Workshop, 1999.
[25] H. Weigand, E. Verharen, and F. Dignum, Integrated semantics for information and communication systems, Proc. IFIP WG 2.5 Conf. on Database Application Semantics (R. Meersman and L. Mark, eds.).
[26] J. Hintikka, Knowledge and Belief, Cornell University Press, Ithaca, NY, 1962.

[27] Michael N. Huhns and Munindar P. Singh (eds.), Readings in Agents, Morgan Kaufmann, San Francisco, 1998.
[28] C. Iglesias, M. Garijo, J. González, and J. Velasco, A methodological proposal for multiagent systems development extending CommonKADS, 1996.
[29] Carlos Iglesias, Mercedes Garijo, and José González, A survey of agent-oriented methodologies, Proceedings of the 5th International Workshop on Intelligent Agents V: Agent Theories, Architectures, and Languages (ATAL-98) (Jörg Müller, Munindar P. Singh, and Anand S. Rao, eds.), vol. 1555, Springer-Verlag: Heidelberg, Germany, 1999, pp. 317-330.
[30] Carlos Argel Iglesias, Mercedes Garijo, Jose Centeno-Gonzalez, and Juan R. Velasco, Analysis and design of multiagent systems using MAS-CommonKADS, Agent Theories, Architectures, and Languages, 1997, pp. 313-327.
[31] J. Bentahar, B. Moulin, and B. Chaib-draa, Commitment and argument network: a new formalism for agent communication, AAMAS Workshop on Agent Communication Languages and Conversation Policies, 2003.



[32] Nicholas R. Jennings and Michael J. Wooldridge, Applications of intelligent agents, Agent Technology: Foundations, Applications, and Markets (Nicholas R. Jennings and Michael J. Wooldridge, eds.), Springer-Verlag: Heidelberg, Germany, 1998, pp. 3-28.
[33] K. Sycara, M. Paolucci, M. van Velsen, and J. Giampapa, The RETSINA MAS infrastructure, Journal of Autonomous Agents and Multi-Agent Systems 7 (2003), no. 1-2.
[34] David Kinny, Michael Georgeff, and Anand Rao, A methodology and modelling technique for systems of BDI agents, Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World (Eindhoven, The Netherlands) (Rudy van Hoe, ed.), 1996.
[35] L. Amgoud, N. Maudet, and S. Parsons, An argumentation-based semantics for agent communication languages, 15th European Conference on Artificial Intelligence, 2002.
[36] Y. Lesperance, A formal account of self-knowledge and action, Proc. of the 11th IJCAI (Detroit, MI), 1989, pp. 868-874.
[37] M. Bonifacio, P. Bouquet, and A. Manzardo, A distributed intelligence paradigm for knowledge management, bringing knowledge to the business process, Proc. AAAI Spring Symposium, AAAI Press, 2000.
[38] J. McCarthy, Generality in artificial intelligence, Communications of the ACM 30 (1987), no. 12, 1030-1035.
[39] M. Esteva, J. Padget, and C. Sierra, Formalizing a language for institutions and norms, Intelligent Agents VIII, Springer, 2002, pp. 348-366.
[40] R. C. Moore, A formal theory of knowledge and action, Formal Theories of the Commonsense World (J. R. Hobbs and R. C. Moore, eds.), Ablex, Norwood, NJ, 1985, pp. 319-358.
[41] L. Morgenstern, Knowledge preconditions for actions and plans, Readings in Distributed Artificial Intelligence (A. H. Bond and L. Gasser, eds.), Kaufmann, San Mateo, CA, 1988, pp. 192-199.
[42] N. Gibbins, S. Harris, and N. Shadbolt, Agent-based semantic web services, Proceedings of the Twelfth International World Wide Web Conference (WWW2003), 2003.
[43] L. Padgham and M. Winikoff, Prometheus: A methodology for developing intelligent agents, 2002.
[44] P. Cohen and H. Levesque, Teamwork, Noûs 25 (1991), no. 4, 487-512.

[45] W. Powell, Neither market nor hierarchy: Network forms of organization, Research in Organisational Behavior 12 (1990), 295-336.
[46] Anand S. Rao and Michael P. Georgeff, Modeling rational agents within a BDI-architecture, Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR'91) (James Allen, Richard Fikes, and Erik Sandewall, eds.), Morgan Kaufmann Publishers Inc.: San Mateo, CA, USA, 1991, pp. 473-484.



[47] A. Th. Schreiber, B. Wielinga, R. de Hoog, H. Akkermans, and W. van de Velde, CommonKADS: A comprehensive methodology for KBS development, IEEE Expert, 1994, pp. 28-37.
[48] J. Searle, Speech Acts - An Essay in the Philosophy of Language, Cambridge Univ. Press, 1969.
[49] M. P. Singh, Towards a theory of situated know-how, Proc. of the 9th ECAI (Stockholm, Sweden), 1990, pp. 604-609.
[50] M. P. Singh, Group ability and structure, Decentralized A.I. 2: Proc. of the 2nd European Workshop on Modelling Autonomous Agents in a Multi-Agent World (Y. Demazeau and J.-P. Muller, eds.), North-Holland, Amsterdam, 1991, pp. 127-145.
[51] M. P. Singh, Towards a formal theory of communication for multiagent systems, Proc. of the 12th IJCAI (Sydney, Australia), 1991, pp. 69-74.
[52] M. P. Singh and N. M. Asher, Towards a formal theory of intentions, Logics in AI: Proc. of the European Workshop JELIA'90 (J. van Eijck, ed.), Springer, Berlin, Heidelberg, 1991, pp. 472-486.
[53] R. G. Smith, The contract net protocol: High-level communication and control in a distributed problem solver, IEEE Transactions on Computers 29 (1980), no. 12, 1104-1113.
[54] T. Finin, Y. Labrou, and J. Mayfield, KQML as an agent communication language, Software Agents (J. Bradshaw, ed.), MIT Press, Cambridge, 1997.
[55] T. Malone and K. Crowston, The interdisciplinary study of coordination, ACM Computing Surveys 26 (1994), no. 1.
[56] T. Malone, J. Yates, and R. I. Benjamin, Electronic markets and electronic hierarchies, Communications of the ACM 30 (1987), no. 6.
[57] T. Sandholm and V. Lesser, Issues in automated negotiation and electronic commerce, Proc. 1st Int. Conf. on Multi-Agent Systems (ICMAS), 1995, pp. 328-335.
[58] T. Winograd and F. Flores, Understanding Computers and Cognition - A New Foundation for Design, Ablex, 1986.
[59] M. Tambe, Towards flexible teamwork, Journal of Artificial Intelligence Research 7 (1997), 83-124.
[60] Paolo Giorgini, The Tropos methodology: An overview.

[61] V. Dignum, H. Weigand, and L. Xu, Agent societies - towards framework-based design, Agent-Oriented Software Engineering II (M. Wooldridge, G. Weiss, and P. Ciancarini, eds.), Springer, 2001, pp. 33-49.
[62] G. Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, 1999.



[63] E. Werner, Toward a theory of communication and cooperation for multiagent planning, Proc. of the Second Conference on Theoretical Aspects of Reasoning about Knowledge (Asilomar, CA), 1988, pp. 129-143.
[64] E. Werner, Cooperating agents: A unified theory of communication and social structure, Distributed Artificial Intelligence (Vol. II) (L. Gasser and M. N. Huhns, eds.), Kaufmann, San Mateo, CA, 1989, pp. 3-36.
[65] E. Werner, What can agents do together?: A semantics for reasoning about cooperative ability, Proc. of the 9th ECAI (Stockholm, Sweden), 1990, pp. 694-701.
[66] E. Werner, A unified view of information, intention and ability, Decentralized A.I. 2: Proc. of the 2nd European Workshop on Modelling Autonomous Agents in a Multi-Agent World (Y. Demazeau and J.-P. Muller, eds.), North-Holland, Amsterdam, 1991, pp. 109-125.
[67] O. Williamson, Markets and Hierarchies: Analysis and Antitrust Implications, Free Press, New York, 1975.
[68] Mark F. Wood and Scott DeLoach, An overview of the multiagent systems engineering methodology, AOSE, 2000, pp. 207-222.
[69] M. Wooldridge and M. Fisher, A first-order branching time logic of multi-agent systems, Proc. of the 10th ECAI (Vienna, Austria), 1992, pp. 234-238.
[70] Michael Wooldridge and Nicholas R. Jennings, Intelligent agents: Theory and practice, http://www.doc.mmu.ac.uk/STAFF/mike/ker95/ker95-html.h (hypertext version of Knowledge Engineering Review paper), 1994.
[71] Michael Wooldridge, Nicholas R. Jennings, and David Kinny, The Gaia methodology for agent-oriented analysis and design, Autonomous Agents and Multi-Agent Systems 3 (2000), no. 3, 285-312.
[72] Mike Wooldridge and P. Ciancarini, Agent-Oriented Software Engineering: The State of the Art, First Int. Workshop on Agent-Oriented Software Engineering (P. Ciancarini and M. Wooldridge, eds.), vol. 1957, Springer-Verlag, Berlin, 2000, pp. 1-28.
[73] F. Zambonelli, Abstractions and infrastructures for the design and development of mobile agent organizations, Agent-Oriented Software Engineering II (M. Wooldridge, G. Weiss, and P. Ciancarini, eds.), vol. LNCS 2222, Springer, 2002, pp. 245-262.
[74] Nowostawski, M., Bush, G., Purvis, M., and Cranefield, S., Platforms for Agent-Oriented Software Engineering, Seventh Asia-Pacific Software Engineering Conference (APSEC'00), December 05-08, 2000, Singapore.
[75] Sycara, K. and Paolucci, M., Ontologies in Agent Architectures, in Handbook on Ontologies, Staab, S. and Studer, R. (Eds.), Springer, 2004.
[76] Tamma, V., Wooldridge, M., Blacoe, I., and Dickinson, I., An ontology based approach to automated negotiation, Revised Papers from the Workshop on Agent-Mediated Electronic Commerce IV: Designing Mechanisms and Systems, Lecture Notes in Computer Science, Springer-Verlag.


[77] Uschold, M., Barriers to Effective Agent Communication, Proceedings of the Workshop on Ontologies in Agent Systems, 5th International Conference on Autonomous Agents, Montreal, Canada, May 29, 2001.
[78] Williams, A., Learning to Share Meaning in a Multi-Agent System, Autonomous Agents and Multi-Agent Systems, Vol. 8 (2): 165-193, Kluwer Academic Publishers, 2004.
[79] Williams, A. and Ren, Z., Agents Teaching Agents to Share Meaning, Proceedings of the Fifth International Conference on Autonomous Agents, Montreal, Quebec, Canada, pp. 465-472, 2001.
[80] Wooldridge, M., An Introduction to MultiAgent Systems, John Wiley & Sons Ltd, 2002.

[81] Jennings, N.R., Sycara, K., and Wooldridge, M., A Roadmap of Agent Research and Development, International Journal of Autonomous Agents and Multi-Agent Systems 1 (1): 7-38, 1998.
[82] White, J.E., Mobile Agents, in Bradshaw, J. (ed.), Software Agents, MIT Press, Cambridge, MA, 1997, pp. 437-472.
[83] Jennings, N.R., Controlling Cooperative Problem Solving Using Joint Intentions, Artificial Intelligence Magazine, 14(4): 79-80.
[84] Jennings, N.R., Coordination Techniques for Distributed Artificial Intelligence, in Foundations of Distributed Artificial Intelligence (Eds. G.M.P. O'Hare and N.R. Jennings), Wiley, 1996, pp. 187-210.
[85] Mueller, H.J., Negotiation Principles, in Foundations of Distributed Artificial Intelligence (Eds. G.M.P. O'Hare and N.R. Jennings), John Wiley & Sons, pp. 211-230.
[86] Kendall, E.A., Malkoun, M.T., and Jiang, C.H., A Methodology for Developing Agent-based Systems for Enterprise Integration, in Modelling and Methodologies for Enterprise Integration (Eds. Bernus and Nemes), Chapman and Hall, 1996, pp. 333-344.
[87] Taveter, K. and Wagner, G., Combining AOR Diagrams and Ross Business Rules Diagrams for Enterprise Modelling, in Proceedings of the Second International Workshop on Agent-Oriented Information Systems (AOIS2000) at CAiSE, June 2000, Stockholm, Sweden.
[88] Wagner, G., Agent-oriented Analysis and Design of Organisational Information Systems, Proceedings of the Fourth IEEE International Baltic Workshop on Database and Information Systems, May 2000, Vilnius, Lithuania.
[89] Kendall, E.A., Agent Roles and Role Models: New Abstractions for Intelligent Agent System Analysis and Design, Proceedings of the Workshop on Intelligent Agents in Information and Process Management, KI'98 (Eds. Holsten, A. et al.), 1998, pp. 35-46.
[90] Fischer, K., Mueller, J., Heimig, I., and Scheer, A., Intelligent Agents in Virtual Enterprises, in Proceedings of the First International Conference and Exhibition on the Practical Applications of Intelligent Agents and Multi-Agent Technology, April 1996, London, U.K., pp. 205-204.



[91] Oliveira, E. and Rocha, A.P., Agents' Advanced Features for Negotiation in Electronic Commerce and Virtual Organisation Formation Process, European Perspectives on Agent Mediated Electronic Commerce, Springer Verlag, June 2000.
[92] Rocha, A.P. and Oliveira, E., Electronic Institutions as a Framework for Agents' Negotiation and Mutual Commitment, Progress in Artificial Intelligence (Proceedings of 10th EPIA), LNAI 2258 (Eds. P. Brazdil and A. Jorge), Springer-Verlag, December 2001, pp. 232-245.
[93] Sycara, K., Widoff, S., Klusch, M., and Lu, J., LARKS: Dynamic Matchmaking among Heterogeneous Software Agents in Cyberspace, Autonomous Agents and Multi-Agent Systems, Vol. 5, No. 2, 2002, pp. 173-203.
[94] Camarinha-Matos, L.M. and Afsarmanesh, H., Virtual Enterprise Modelling and Support Infrastructures: Applying Multi-agent System Approaches, in Multi-agent Systems and Applications (Eds. Luck, M., Marik, V., Stepankova, O., and Trappl, R.), Springer-Verlag, Lecture Notes in Artificial Intelligence LNAI 2086.
[95] Avila, P., Putnik, G.D., and Cunha, M.M., Brokerage Function in Agile Virtual Enterprise Integration - A Literature Review, in Collaborative Business Ecosystems and Virtual Enterprises (Ed. Camarinha-Matos, L.M.), Kluwer Academic Publishers, pp. 65-72.
[96] Rabelo, R.J., Camarinha-Matos, L.M., and Vallejos, R.V., Agent-based Brokerage for Virtual Enterprise Creation in the Moulds Industry, in E-Business and Virtual Enterprises - Managing Business-to-Business Cooperation (Eds. Camarinha-Matos, L.M., Afsarmanesh, H., and Rabelo, R.J.), Kluwer Academic Publishers, pp. 281-290.
[97] Shen, W. and Norrie, D.H., Agent-based Systems for Intelligent Manufacturing: A State-of-the-art Survey, Knowledge and Information Systems, 1(2): 129-156.
[98] Genesereth, M.R., An Agent-based Framework for Interoperability, in Software Agents (Ed. Bradshaw, J.M.), AAAI Press/The MIT Press, pp. 317-346.
[99] Barbuceanu, M. and Fox, M.S., The Information Agent: An infrastructure agent supporting collaborative enterprise architectures, in Proceedings of the Third IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET-ICE 1994), Morgantown, West Virginia, U.S.A.
[100] Fox, M.S., Chionglo, J.F., and Barbuceanu, M., The Integrated Supply Chain Management System, Technical Report, Enterprise Integration Laboratory, University of Toronto.

IX.6 Bibliography - Business Process Management and Workflow

[Abdulla et al., 1998] P. A. Abdulla, A. Bouajjani, and B. Jonsson. On-the-fly analysis of systems with unbounded, lossy FIFO channels. In Proc. Computer Aided Verification, pages 305-318, 1998.


[Abiteboul et al., 2000] S. Abiteboul, V. Vianu, B. Fordham, and Y. Yesha. Relational transducers for electronic commerce. Journal of Computer and System Sciences, 61(2):236-269, 2000.
[Afsarmanesh et al., 1997] H. Afsarmanesh, C. Garita, B. Hertzberger, and V. Santos Silva. Management of distributed information in virtual enterprises - the PRODNET approach. In Proceedings of ICE'97 - International Conference on Concurrent Enterprising, Nottingham, UK, 1997. http://www.uninova.pt/prodnet.
[Alfaro and Henzinger, 2001] L. de Alfaro and T. A. Henzinger. Interface automata. In Proc. ACM Symp. on Foundations of Software Engineering, 2001.
[Alonso, 1999] G. Alonso. WISE: Business to business e-commerce. In Proceedings of the 9th Workshop on Research Issues on Data Engineering (RIDE-VE'99), Sydney, 1999.
[Anderson et al., 1998] Anderson, R.C., Cheah, S.C., Knutilla, A., Polyak, S.T., Tate, A., Schlenoff, C.I., and Ray, S. Process Specification Language: An Analysis of Existing Representations. NIST Publication Number NISTIR 6160, NTIS Accession Number PB98-157092, 1998.
[Ankolenkar et al., 2001] A. Ankolenkar et al. DAML-S: Semantic markup for web services. In Proc. of Int. Semantic Web Working Symposium (SWWS), pages 411-430, July 2001.
[Arbab et al., 2002] Farhad Arbab, Marcello Bonsangue, Juan Guillen Scholten, Maria-Eugenia Iacob, Henk Jonkers, Marc Lankhorst, Erik Proper, and Andries Stam. State of the art in architecture frameworks and tools. ArchiMate Phase 0 Deliverable 2. Technical report, May 2002.
[Arkin, 2002] A. Arkin. Business Process Modeling Language (BPML). Technical report, 2002. http://www.bpmi.org/specifications.esp.
[Arkin et al., 2002] A. Arkin, S. Askary, S. Fordin, W. Jekeli, K. Kawaguchi, D. Orchard, S. Pogliani, K. Riemer, S. Struble, P. Takacsi-Nagy, I. Trickovic, and S. Zimek. Web Service Choreography Interface (WSCI) 1.0. http://www.sun.com/software/xml/developers/wsci/wsci-spec-10.pdf.
[Banerji et al., 2002] A. Banerji, C. Bartolini, D. Beringer, V. Chopella, K. Govindarajan, A. Karp, H. Kuno, M. Lemon, G. Pogossiants, S. Sharma, and S. Williams. Web Services Conversation Language (WSCL) 1.0. http://www.w3c.org/TR/wscl10/.
[Benatallah et al., 2004] Boualem Benatallah, Olivier Perrin, Fethi Rabhi, and Claude Godart. Web Service Computing: Overview and Directions, chapter xx. Springer Verlag, 2004.
[Ben-Shaul and Kaiser, 1994] Z. Ben-Shaul and G. E. Kaiser. A Paradigm for Decentralized Process Modeling and its Realization in the Oz Environment. Proc. Sixteenth International Conference on Software Engineering, Sorrento, Italy, pp. 179-188, 1994.
[Bernauer et al., 2003] Martin Bernauer, Gerti Kappel, and Gerhard Kramler. Comparing WSDL-based and ebXML-based approaches for B2B protocol specification. In ISSOC-03, Trento, 2003. http://www.big.tuwien.ac.at/research/publications/2003/1103-slides.pdf.


[Bhiri et al., 2003] Sami Bhiri, Olivier Perrin, Walid Gaaloul, and Claude Godart. An object oriented metamodel for inter-enterprises cooperative processes based on web services. In IDPT'03, Seventh Conference on Integrated Design & Process Technology, Austin, USA, December 3-5, 2003.
[Bichler et al., 1998] M. Bichler, A. Segev, and J. L. Zhao. Component-based e-commerce: assessment of current practices and future directions. ACM SIGMOD Record, 27(4):7-14, 1998.
[Bij et al., 1999] Bij, J.D. v.d., Dijkstra, L., Vries, G. d., and Walburg, J. Improvement and renewal of healthcare processes: results of an empirical research project. Health Policy, 48, pp. 135-152, 1999.
[Bitcheva et al., 2003] Julia Bitcheva, Olivier Perrin, and Claude Godart. Coordination of cooperative processes with the synchronization point concept. In ICWS 2003, pages 76-82, 2003.
[BizTalk, 2004] BizTalk. http://www.BizTalk.org.
[BizzDesign, 2000] BiZZdesign. Handboek Testbed, versie 6.1, Enschede, the Netherlands, June 2000 (in Dutch).
[Boigelot et al., 1996] B. Boigelot, P. Godefroid, B. Williams, and P. Wolper. Symbolic verification of communication protocols with infinite state spaces using QDDs. In Proceedings of the 8th International Conference on Computer Aided Verification, volume 1102 of LNCS, pages 1-12. Springer Verlag, August 1996.
[Boigelot et al., 1997] B. Boigelot, P. Godefroid, B. Williams, and P. Wolper. The power of QDDs. In Proc. 4th Static Analysis Symposium, September 1997.
[Bolcer and Kaiser, 1999] G.A. Bolcer and G. Kaiser. SWAP: Leveraging the web to manage workflow. IEEE Internet Computing, 3(1):55-88, 1999.
[Bonner, 1999] A. J. Bonner. Workflow, transactions and datalog. In Proc. 18th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, Philadelphia, pages 294-305, 1999.
[BPMI, 2004] BPMI. http://www.bpmi.org.
[BPML, 2002] Business Process Modeling Language, December 2002.
[BPMN, 2002] Business Process Modeling Notation, November 2002.
[BPMN, 2003a] Business Process Management Initiative. Business Process Modeling Notation. Working Draft (1.0), Aug. 2003. http://www.bpmi.org.
[BPMN, 2003b] Business Process Modeling Notation, August 2003. Working Draft (1.0). http://www.bpmi.org.
[BPMN, 2004] BPMN. http://www.bpmi.org/specifications.esp.



[Brand and Zafiropulo, 1983] D. Brand and P. Zafiropulo. On communicating finite-state machines. Journal of the ACM, 30(2):323-342, 1983.
[Bultan et al., 2003] T. Bultan, Z. Fu, R. Hull, and J. Su. Conversation specification: A new approach to design and analysis of e-service composition. In Proc. World Wide Web Conf., 2003.
[Bussler, 2001] C. Bussler. B2B protocol standards and their role in semantic B2B integration engines. Bulletin of the Technical Committee on Data Engineering, 24(1), March 2001.
[Camarinha-Matos, 2003] Luis M. Camarinha-Matos. Infrastructure for virtual organizations - where we are. In Proceedings of ETFA'03 - 9th International Conference on Emerging Technologies and Factory Automation, Lisbon, Portugal, September 2003.
[Camarinha-Matos and Afsarmanesh, 2001] L. M. Camarinha-Matos and H. Afsarmanesh. Service federation in virtual organisations. In PROLAMAT'01, Budapest, Hungary, November 2001.
[Cardoso et al., 2004] Jorge Cardoso, Robert P. Bostrom, and Amit Sheth. Workflow management systems vs. ERP systems: Differences, commonalities, and applications. Technical report, 2004.
[Cardoso and Sheth, 2002] Jorge Cardoso and Amit Sheth. Semantic e-workflow composition. Technical report, LSDIS Lab, Computer Science, University of Georgia, 2002.
[Carlsen, 1997] S. Carlsen. Conceptual Modeling and Composition of Flexible Workflow Models. Dissertation, Norwegian University of Science and Technology, 1997.
[Casati et al., 1996] F. Casati, S. Ceri, Barbara Pernici, and G. Pozzi. Deriving active rules for workflow enactment. In R. Wagner and H. Thoma (Eds.), Proceedings of the 7th International Conference on Database and Expert Systems Applications (DEXA'96), pages 94-115, 1996.
[Casati et al., 2001] F. Casati, U. Dayal, and M. C. Shan. E-business applications for supply chain automation: challenges and solutions. In ICDE Conference, pages 71-78, Heidelberg, Germany, 2001.
[Casati et al., 2000a] Fabio Casati, Ski Ilnicki, LiJie Jin, Vasudev Krishnamoorthy, and Ming-Chien Shan. Adaptive and dynamic composition in eFlow.
[Casati et al., 2000b] F. Casati, S. Ilnicki, L. J. Jin, V. Krishnamoorthy, and M. C. Shan. eFlow: a platform for developing and managing composite e-services. Technical report HPL-2000-36, HP Laboratories, Palo Alto, Calif., 2000.
[Chen and Hsu, 2001a/b] Q. Chen and M. Hsu. Inter-enterprise collaborative business process management. IEEE, 2001.

[Chen and Dayal, 1997] Q. Chen and Umesh Dayal. Failure recovery across transaction hierarchies. In Proc. of the 13th International Conference on Data Engineering (ICDE'97), 1997.


[Chen and Vernadat, 2002] David Chen and Francois Vernadat. Enterprise Interoperability: A Standardization View. http://www.cimosa.de/Standards/DCFV02.html.
[Chen et al., 2000] Q. Chen, Meichun Hsu, Umesh Dayal, and Martin Griss. Incorporating multi-agent cooperation, dynamic workflow and XML for e-commerce automation. In Proc. Fourth International Conference on Autonomous Agents, 2000.
[Christophides et al., 2001] V. Christophides, R. Hull, G. Karvounarakis, A. Kumar, G. Tong, and M. Xiong. Beyond discrete e-services: Composing session-oriented services in telecommunications. In Proc. of Workshop on Technologies for E-Services (TES), volume 2193 of Springer LNCS, Rome, Italy, September 2001.
[Clarke et al., 2000] E. Clarke, O. Grumberg, and D. Peled. Model Checking. MIT Press, 2000.
[Cobb, 2001] E. Cobb. The evolution of distributed component architectures. In CoopIS Conference, pages 7-12, Trento, Italy, 2001.
[Combi and Pozzi, 2003] Carlo Combi and Giuseppe Pozzi. Temporal Conceptual Modelling of Workflows. In Conceptual Modeling - ER 2003, 22nd International Conference on Conceptual Modeling, Chicago, IL, USA, October 13-16, 2003, Proceedings. Volume 2813 of Lecture Notes in Computer Science, Springer, 2003.
[COSA, 2004] COSA Workflow. http://www.ley.de/cosa/.
[Curbera et al., 2002a/b] F. Curbera, Y. Goland, J. Klein, F. Leymann, D. Roller, S. Thatte, and S. Weerawarana. Business Process Execution Language for Web Services (BPEL4WS) 1.0, August 2002. http://www.ibm.com/developerworks/library/ws-bpel.
[cXML, 2004] cXML. http://www.cxml.org.
[DAML-S, 2001] DAML-S: Semantic markup for web services, 2001. http://www.daml.org/services/daml-s/2001/10/daml-s.html.
[Dayal et al., 1991] U. Dayal, M. Hsu, and R. Ladin. A transactional model for long-running activities. In Proc. 17th Conference on Very Large Data Bases, September 1991.
[Dayal et al., 2001] Umeshwar Dayal, Meichun Hsu, and Rivka Ladin. Business process coordination: State of the art, trends, and open issues. In Proceedings of the 27th International Conference on Very Large Data Bases, pages 3-13, Roma, Italy, September 2001.
[Deutch, 1992] A. Deutsch. A storeless model for aliasing and its abstractions using finite representation of right-regular equivalence relations. In IEEE Int. Conf. on Computer Languages, pages 2-13, 1992.
[Dickson et al., 2002] Dickson K. W. Chiu, Shing-Chi Cheung, Kamalakar Karlapalem, Qing Li, and Sven Till. Workflow view driven cross-organizational interoperability in a web-services environment. In Ch. Nussler et al., editor, WES 2002, LNCS 2512, pages 41-56, 2002.



[DCOM, 2004] Distributed Component Object Model (DCOM). http://www.microsoft.com/net.
[ebXML, 2001] ebXML Technical Architecture Project Team. ebXML Technical Architecture Specification v1.0.4. Technical report, ebXML, February 2001. http://www.ebxml.org/specs/ebTA.pdf.
[ebXML, 2002] ebXML Business Process Specification Schema (v1.01), May 2002. http://www.ebxml.org/specs/ebBPSS.pdf.
[ebXML, 2004] Homepage for ebXML, available at http://www.ebxml.org.
[eCO, 2004] eCO. http://eco.commercenet.net.
[Eder and Gruber, 2002] J. Eder and W. Gruber. A Meta Model for Structured Workflows Supporting Workflow Transformations. In Proceedings of the 6th East-European Conference on Advances in Databases and Information Systems (ADBIS), Springer Verlag, LNCS 2435, pages 326-330, 2002.
[Eder and Liebhart, 1995] J. Eder and W. Liebhart. The workflow activity model WAMO. In Proceedings of the 3rd International Conference on Cooperative Information Systems (CoopIS), Vienna, Austria, May 1995.
[Eder and Liebhart, 1998] J. Eder and W. Liebhart. Contributions to exception handling in workflow management. In Proceedings of the Sixth International Conference on Extending Database Technology, Valencia, Spain, March 1998.
[Eder et al., 1997a] J. Eder, H. Groiss, and W. Liebhart. The workflow management system Panta Rhei. In Asuman Dogac, Leonid Kalinichenko, M. Tamer Ozsu, and Amit Sheth (Eds.), Advances in Workflow Management Systems and Interoperability, pages 129-144, 1997.
[Eder et al., 1997b] Johann Eder, Euthimios Panagos, Heinz Pozewaunig, and Michael Rabinovich. Time Management in Workflow Systems. In W. Abramowicz and M.E. Orlowska (Eds.), BIS'99: Proceedings of the 3rd International Conference on Business Information Systems, Poznan, Poland, 14-16, 1999. Springer Verlag, 1999.
[Eder et al., 1999] Johann Eder, Euthimios Panagos, and Michael Rabinovich. Time Constraints in Workflow Systems. In Advanced Information Systems Engineering: 11th International Conference on Advanced Information Systems Engineering (CAiSE'99), Heidelberg, Germany, June 14-18, 1999, Proceedings. Volume 1626 of Lecture Notes in Computer Science, pages 286-300, Springer, 1999.
[Edmond and Hofstede, 1999] D. Edmond and A. H. M. ter Hofstede. Achieving workflow adaptability by means of reflection. ACM SIGGROUP Bulletin, 20(3), December 1999.
[Eertink et al., 1999] Eertink, H., Janssen, W., Oude Luttighuis, P., Teeuw, W., and Vissers, C. A business process design language. In Proceedings of the 1st World Congress on Formal Methods, Toulouse, France, 1999.



[Elmagarmid, 1992] A. K. Elmagarmid, editor. Database Transaction Models for Advanced Applications. Morgan-Kaufmann, 1992.
[EMEIS, 1995] ENV 13550 Enterprise Model Execution and Integration Services (EMEIS), 1995.
[Eriksson and Penker, 1998] Hans-Erik Eriksson and Magnus Penker. Business Modeling with UML. John Wiley & Sons, Inc., 2002. ISBN 0-471-29551-5.
[Eshuis, 2002] H. Eshuis. Semantics and Verification of UML Activity Diagrams for Workflow Modeling. PhD thesis, 2002.
[Eshuis and Wieringa, 2001] R. Eshuis and R. Wieringa. A Comparison of Petri Net and Activity Diagram Variants. In Weber, Ehrig, Reisig (Eds.), Proc. of 2nd Int. Coll. on Petri Net Technologies for Modelling Communication Based Systems, pages 93-104. DFG Research Group "Petri Net Technology", September 2001.
[Eshuis et al., 2003a] R. Eshuis, P. Brimont, E. Dubois, B. Gregoire, and S. Ramel. Animating ebXML Transactions with a Workflow Engine. 2003. http://www.cs.rmit.edu.au/fedconf/.
[Eshuis et al., 2003b] R. Eshuis, P. Brimont, E. Dubois, B. Gregoire, and S. Ramel. Efficient: toolset supporting modelling and validation of ebXML transactions. 2003.

[Fischer et al., 2001] K. Fischer, P. Funk, and C. Ruß. Specialized agent applications. In Proc. Advanced Course on Artificial Intelligence (ACAI 2001): Multi-Agent Systems and Their Applications (MASA). Springer Lecture Notes in Artificial Intelligence (LNAI) 2086, Prague, Czech Republic, July 2001. [Frank et al., 2004] J.H. Frank et al. Business Process Definition Metamodel: Concepts and overview, 2004. BPDM Whitepaper. http://www.omg.org/docs/bei/04-05-03.pdf. [Frey et al., 2001] J. Frey, T. Tannenbaum, I. Foster, M. Livny, and S. Tuecke. Condor-G: A computation management agent for multi-institutional grids. In Proc. IEEE Symp. High Performance Distributed Computing (HPDC), 2001.

[Garcia-Molina and Salem, 1987] H. Garcia-Molina and K. Salem. Sagas. In Proc. ACM SIGMOD Int. Conference on Management of Data, May 1987. [Gay and Hole, 2000] S. Gay and M. Hole. Types for correct communication in client-server systems. Technical report, Department of Computer Science, Royal Holloway, University of London, December 2000. CSD-TR-00-07. [Genesereth, 1998] M. R. Genesereth. Knowledge Interchange Format draft proposed American National Standard (dpANS). NCITS.T2/98-004. Online paper: http://logic.stanford.edu/kif/kif.html. [Georkakopoulos and Rusinkiewicz, 1997] Georgakopoulos, D. and Rusinkiewicz, M. (1997). Workflow management tutorial. VLDB conference, Athens, August 1997.


[Geppert and Tombros, 1998] A. Geppert and D. Tombros. Event-based distributed workflow execution with EVE. In Middleware'98 workshop, pages 427-442. [Glance et al., 1996] N.S. Glance, D.S. Pagani and R. Pareschi. Generalized Process Structure Grammars (GPSG) for flexible representations of work. In: M. Ackerman (Ed.): CSCW'96: Proceedings of the Conference on Computer Supported Cooperative Work. Boston, MA, 1996. [Godart et al., 2004] Claude Godart, Pascal Molli, Gérald Oster, Olivier Perrin, Hala Skaf-Molli, Pradeep Ray, and Fethi Rabhi. The ToxicFarm integrated cooperation framework for virtual teams. Distributed and Parallel Databases, 15(1):67-88, 2004. [Godefroid and Long, 1996] P. Godefroid and D. Long. Symbolic protocol verification with queue BDDs. In Proc. IEEE Symposium on Logic in Computer Science, pages 1-12, 1996. [Grefen et al., 1997] Grefen, P., Vonk, J., Boertjes, E., Apers, P., Two-layer Transaction Management for Workflow Management Applications. Proceedings of the Eighth International Conference on Database and Expert Systems Applications. Toulouse, France, 1997. [Grefen et al., 2000] Paul Grefen, Karl Aberer, Yigal Hoffner, and Heiko Ludwig. CrossFlow: cross-organisational workflow management in dynamic virtual enterprises. International Journal of Computer Systems Science and Engineering, 15(5), September 2000. [Gregoire et al., 2004] Bertrand Grégoire, Christophe Incoul, Sophie Ramel, Michael Schmitt, Laurent Gautheron, Pierre Brimont, and Eric Dubois. Efficient: A framework for animating and validating e-business transactions. In ERCIM News 58. [Gruber, 2004] W. Gruber, Modeling and Transformation of Workflows with Temporal Constraints. Akademische Verlagsgesellschaft Aka, Berlin, 2004. ISBN 3-89838-484-5. [Hanson et al., 2002] James E. Hanson, Prabir Nandi, and Santhosh Kumaran. Conversation support for business process integration. In EDOC 2002, 2002. http://www.research.ibm.com/convsupport/papers/edoc02.pdf. [Harmon, 2004] Paul Harmon. BPM tools. 
Business Process Trends, 2(4), April 2004. [Hasselbring, 2000] Wilhelm Hasselbring. Information system integration, June 2000. [Henl et al., 1999] P. Heinl, S. Horn, S. Jablonski, J. Neeb, K. Stein, and M. Teschke. A comprehensive approach to flexibility in workflow management systems. In WACC'99, 1999. [Hoare, 1987] C. A. R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666-677, 1978. [Hollingsworth, 1994] D. Hollingsworth. The workflow reference model. http://www.aiai.ed.ac.uk/WfmC/DOCS/refmodel/rmv1-16.html. [Hollingsworth, 2004] David Hollingsworth. The Workflow Reference Model: 10 Years On. Fujitsu Services, UK; Technical Committee Chair of WfMC, 2004.


[Holt et al., 1983] Holt, A.W., Ramsey, H.R., Grimes, J.D., Coordination system technology as the basis for a programming environment, Electrical Communication, vol. 57, no. 4, 1983. [IDEF, 1993] IDEF, Integration Definition for Function Modeling (IDEF0), Draft, Federal Information Processing Standards Publication FIPS PUB 183, U.S. Department of Commerce, Springfield, VA 22161, Dec. 1993. [Honda et al., 1998] K. Honda, V. Vasconcelos, and M. Kubo. Language primitives and type discipline for structured communication-based programming. In Programming Languages and Systems, 7th European Symposium on Programming, 1998. [Hull et al., 2003] Richard Hull, Michael Benedikt, Vassilis Christophides, and Jianwen Su. E-services: a look behind the curtain. In Proceedings of the 22nd ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS 2003), San Diego, USA, June 2003. ACM Press. [Ibarra, 2000] O. H. Ibarra. Reachability and safety in queue systems. In Implementation and Application of Automata, 5th International Conference (CIAA), pages 145-156, July 2000. [IS, 2001] Integration Specification, 2001. [ISO/DIS, 2004] PSL: Principles and Overview, January 2004. ISO/DIS 18629-1. http://www.tc184-sc4.org/SC4_Open/SC4_Work_Products_Documents/PSL_(18629)/. [ISO/IEC, 1996] ISO/IEC JTC1. Information Technology - Open Systems Interconnection, Data Management and Open Distributed Processing. Reference Model of Open Distributed Processing. Part 3: Architecture, 1996. IS10746-3. [Jablonski, 1994] S. Jablonski. MOBILE: A modular workflow model and architecture. In Proceedings of the 4th International Working Conference on Dynamic Modelling and Information Systems, Noordwijkerhout, Netherlands, 1994. [Jablonski and Bussler, 1996a/b] S. Jablonski and C. Bussler. Workflow Management: Modeling, Concepts, Architecture, and Implementation. International Thompson Computer Press, 1996. [Jablonski et al., 1997] Jablonski, S.; Böhm, M.; Schulze, W. (Eds.): Workflow-Management. 
Entwicklung von Anwendungen und Systemen. Facetten einer neuen Technologie. dpunkt-Verlag, Heidelberg 1997. (in German) [Jajodia and Kerschberg, 1997] S. Jajodia and L. Kerschberg, editors. Advanced Transaction Models and Architectures. Kluwer Academic Publishers, 1997. [Jansseen et al., 2003] W. Janssen, M.W.A. Steen, and H. Franken. Business process engineering versus e-business engineering: A summary of case experiences. In Proc. 36th Hawaii International Conference on System Sciences. IEEE Computer Society Press, 2003. https://doc.telin.nl/dscgi/ds.py/Get/File-26050. [JBoss, 2004] JBoss homepage. http://www.jboss.org (last accessed: 23.10.2004)


[JBPM, 2004] jBPM Homepage. http://www.jbpm.org (last accessed: 23.10.2004) [Jeron, 1991] T. Jeron. Testing for unboundedness of FIFO channels. In Proc. STACS-91: Symposium on Theoretical Aspects of Computer Science, volume 480 of LNCS, pages 322-333, Hamburg, 1991. Springer Verlag. [Johannesson et al., 2000a/b] Johannesson P., Wangler B., and Jayaweera P. Application and process integration - concepts, issues, and research directions. In Brinkkemper S., Lindencrona E., and Solvberg A., editors, Information Systems Engineering: State of the Art and Research Themes. Springer, 2000. [JoinFlow, 1998] JoinFlow: Workflow Management Facility, 1998. Revised submission bom/98-0607. [Jonkers et al., 2004] Jonkers, H., Lankhorst, M.M., Buuren, R. van, Hoppenbrouwers, S., Bonsangue, M., & Torre, L. van der, Concepts for modelling enterprise architectures, International Journal of Cooperative Information Systems, Special Issue on Architecture in IT, vol. 13, no. 3, Sept. 2004, pp. 257-287. [Jung et al., 2004] Jae-yoon Jung, Wonchang Hur, Suk-Ho Kang, and Hoontae Kim. Business process choreography for B2B collaboration. IEEE Internet Computing, January 2004. [Junginger, 2000] Junginger, S.: The Workflow Management Coalition Standard WPDL: First Steps towards Formalization. In: Proceedings of ECEC'2000. Society for Computer Simulation, pp. 163-168. [Junginger et al., 2000] Junginger, S.; Kühn, H.; Heidenfeld, M.; Karagiannis, D.: Building Complex Workflow Applications: How to Overcome the Limitations of the Waterfall Model. In: Fischer, L. (Ed.): Workflow Handbook 2001, Future Strategies, Lighthouse Point, 2000. [Junginger et al., 2004] Junginger, S.; Kuehn, H.; Bayer, F.; Karagiannis, D.: Workflow-based Business Monitoring. In: Fischer, L. (Ed.): Workflow Handbook 2004, Future Strategies, Lighthouse Point, 2004. [Kak and Sotero, 2002] Rajeev Kak and Dave Sotero. Implementing RosettaNet e-business standards for greater supply chain collaboration and efficiency. 
Technical report, August 2002. RosettaNet/i2 White Paper. [Kaplan and Norton, 1996] Kaplan, R. S.; Norton, D. P.: The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press. 1996. [Kaplan and Norton, 2000] Kaplan, R. S.; Norton, D. P.: The Strategy-Focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment. Harvard Business School Press. 2000. [Kappel et al., 1995] G. Kappel and P. Lang and S. Rausch-Schott and W. Retschitzegger. Workflow management based on objects, rules, and roles. In Data Engineering Bulletin, volume 18, pages 11-18, 1995.


[Karagiannis, 1995] Karagiannis, D.: BPMS: Business Process Management Systems. In: ACM SIGOIS Bulletin, Vol. 16, Nr. 1, August 1995, pp. 10-13. [Karagiannis et al., 1996] Karagiannis, D.; Junginger, S.; Strobl, R.: Introduction to Business Process Management Systems Concepts. In: Scholz-Reiter, B.; Stickel, E. (Eds.): Business Process Modelling. Springer, Berlin et al. 1996, pp. 81-106. [Kiepuszewski, 2002] B. Kiepuszewski. Expressiveness and Suitability of Languages for Control Flow Modeling in Workflows. PhD thesis, 2002. [Kiepuszewski et al., 1999] B. Kiepuszewski, A.H.M. ter Hofstede and C. Bussler. On structured workflow modeling. In Benkt Wangler and Lars Bergman, editors, Advanced Information Systems Engineering, 12th International Conference, CAiSE 2000, Stockholm, Sweden, June 5-9, 2000, Proceedings, volume 1789 of Lecture Notes in Computer Science, pages 431-445. Springer, 2000. [Kiepuszewski et al., 2002] B. Kiepuszewski, A.H.M. ter Hofstede, and W.M.P. van der Aalst. Fundamentals of control flow in workflows. Technical report, Queensland University of Technology, Brisbane, Australia, 2002. QUT Technical report FIT-TR-2002-03, http://www.tm.tue.nl/it/research/patterns, also appeared in Acta Informatica. [Kochut et al., 1999] K. J. Kochut, A. P. Sheth, and J. A. Miller. ORBWork: a CORBA-based fully distributed, scalable and dynamic workflow enactment service for METEOR. Technical report, 1999. [Krishnamoorty and Shan, 2000] V. Krishnamoorty and M. C. Shan. Virtual transaction model for workflow applications. In Procs. of SAC'00, Como, Italy, 2000. [Krishnan et al., 2002] S. Krishnan, P. Wagstrom, and G. von Laszewski. GSFL: A workflow framework for grid services. Technical report, Argonne National Laboratory, August 2002. ANL/MCS-P980-0802. [Kupferman, 2001] O. Kupferman and M. Y. Vardi. Synthesizing distributed systems. In Proceedings of IEEE Symposium on Logic In Computer Science, 2001. [Kutvonen, 2002] L. 
Kutvonen, Automated management of inter-organisational applications. The 6th International Enterprise Distributed Object Computing Conference (EDOC 2002), IEEE Computer Society, 2002. [Kutvonen, 2004] L. Kutvonen, B2B middleware for managing process-aware eCommunities. Technical report C-2004-62, Department of Computer Science, University of Helsinki, 2004. [Lankhorst et al., 2004] Lankhorst, M.M., Buuren, R. van, Leeuwen, D. van, Jonkers, H., Doest, H. ter, Enterprise architecture modelling - the issue of integration, to appear in Advanced Engineering Informatics, Special Issue Making Technology Really Work, 2004. [Lazcano et al., 2000] A. Lazcano, G. Alonso, H. Schuldt, and C. Schuler. The WISE approach to electronic commerce. International Journal of Computer Systems Science and Engineering, 2000.


[Lee et al., 1998] J. Lee, M. Gruninger, Y. Jin, T. Malone, A. Tate and G. Yost. The PIF Process Interchange Format and Framework Version 1.2. The Knowledge Engineering Review, volume 13(1), Cambridge University Press, 1998. [Lewis, 2000] Malcolm Lewis. Supply chain optimization: An overview of RosettaNet e-business processes. EAI Journal, June 2000. [Leymann, 2001] Frank Leymann. Web Services Flow Language (WSFL 1.0). Technical report, 2001. http://www-4.ibm.com/software/solutions/webservices/pdf/WSFL.pdf. [Leymann and Altenhuber, 1994] F. Leymann and W. Altenhuber. Managing Business Processes as an Information Resource. IBM Systems Journal, volume 33(2), pages 326-348, 1994. [Lin et al., 2002] Hao Lin, Zhibiao Zhao, Hongchen Li, Zhiguo Chen. A Novel Graph Reduction Algorithm to Identify Structural Conflicts. In Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS-35 2002), 7-10 January 2002, Big Island, HI, USA. IEEE Computer Society, 2002. [Ludwig and Hoffner, 1999] H. Ludwig and Y. Hoffner. Contract-based cross-organisational workflows - the CrossFlow project. In International Joint Conference on Work Activities Coordination and Collaboration (WACC'99), 1999. [Ludwig and Whittingham, 1999] H. Ludwig and K. Whittingham. Virtual enterprise coordinator - agreement-driven gateways for cross-organizational workflow management. In Proceedings of the International Joint Conference on Work Activities Coordination and Collaboration (WACC'99), San Francisco, 1999. [Malone et al., 1993] T. Malone, K. Crowston, J. Lee and B. Pentland. Tools for Inventing Organizations: Toward A Handbook of Organizational Processes. In: Proceedings of the 2nd IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, IEEE Computer Society Press, 1993. [Mamath and Ramamritham, 1998] M. Mamath and K. Ramamritham. Failure handling and coordinated execution of concurrent workflows. 
In Proceedings of the 14th International Conference on Data Engineering, Orlando, Florida, February 1998. [MAPLE, 1996] ISO13281.2 Manufacturing Automation Programming Environment (MAPLE), 1996. [Marjanovic and Orlowska, 2000] Olivera Marjanovic and Maria E. Orlowska. Dynamic Verification of Temporal Constraints in Production Workflows. In Proceedings of the Australasian Database Conference ADC'2000, Canberra, Australia, January 31 - February 03, 2000. IEEE Computer Society, 2000. [Mayer et al., 1995] Mayer, R.J., Menzel, C.P., Painter, M.K., deWitte, P.S., Blinn, T., Perakath, B., Information Integration for Concurrent Engineering (IICE) IDEF3 Process Description Capture Method Report, Interim Technical Report April 1992-Sept. 1995, Knowledge Based Systems Inc.


[McCarthy and Sarin, 1993] D. McCarthy and S. Sarin. Workflow and transactions in InConcert. Data Engineering Bulletin, volume 16(2), pages 53-56, 1993. [McGregor, 2001] McGregor, C.: The Impact of Business Performance Monitoring on WfMC Standards. In: Fischer, L. (Ed.): Workflow Handbook 2002, Future Strategies, Lighthouse Point, 2002. [Mecella and Pernici, 2002] Massimo Mecella and Barbara Pernici. Building flexible and cooperative applications based on e-services. Technical report, Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Roma, Italy, 2002. Technical report 21-2002, available at http://www.dis.uniroma1.it/~mecella/publications/index.htm. [Medina-Mora et al., 1992] R. Medina-Mora, T. Winograd, R. Flores and F. Flores. The Action Workflow Approach to Workflow Management Technology. In: CSCW'92: Proceedings of the Conference on Computer Supported Cooperative Work. ACM Publishers, New York, 1992. [Medjahed et al., 2003] B. Medjahed, B. Benatallah, A. Bouguettaya, A. H. H. Ngu, and A. K. Elmagarmid. Business-to-business interactions: issues and enabling technologies. The VLDB Journal, 12(1):59-85, 2003. [Meng et al., 2002] Jie Meng, Stanley Y. W. Su, Herman Lam, and Abdelsalam Helal. Achieving dynamic inter-organizational workflow management by integrating business processes, events and rules. In Proceedings of the 35th Hawaii International Conference on System Sciences, 2002. [Mentzas et al., 2001] G. Mentzas, C. Halaris and S. Kavadias. Modeling business processes with workflow systems: an evaluation of alternative approaches. In: International Journal of Information Management, volume 21, pages 123-135. Elsevier Science Inc., 2001. [Milner, 1980] R. Milner. A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer Verlag, 1980. [Mohan et al., 1995] C. Mohan et al. Exotica: A research perspective on workflow management systems. Data Engineering Bulletin, 18(1):19-26, 1995. 
[Moss, 1985] E. Moss. Nested Transactions. MIT Press, 1985. [MQSeries, 2004] MQSeries Workflow. http://www-3.ibm.com/software/ts/mqseries/workflow/. [Muller and Rahm, 1999] R. Muller and E. Rahm. Rule-based dynamic modification of workflows in a medical domain. In Proceedings of BTW'99, pages 429-448. Springer, 1999. [Murdoch and McDermid, 2000] Murdoch, J. and J.A. McDermid, Modeling engineering design processes with Role Activity Diagrams, Transactions of the Society for Design and Process Science, vol. 4, no. 2, June 2000. [Muth et al., 1998] P. Muth, D. Wodtke, J. Weissenfels, A. K. Dittrich, and G. Weikum. From centralized workflow specification to distributed workflow execution. Journal of Intelligent Information Systems, 10(2):159-184, 1998.


[Neely, 2002] Neely, A. (Ed.): Business Performance Measurement. Cambridge University Press, 2002. [Nuettgens et al., 1998] M. Nüttgens, T. Feld and V. Zimmermann. Business Process Modeling with EPC and UML: Transformation or Integration? In: Schader, M.; Korthaus, A. (Eds.): The Unified Modeling Language - Technical Aspects and Applications, Proceedings (Mannheim, October 1997), Workshop des Arbeitskreises "Grundlagen objektorientierter Modellierung" (GROOM) der GI-Fachgruppe 2.1.9 ("Objektorientierte Softwareentwicklung"), Heidelberg 1998. [OASIS, 2004] Homepage for OASIS, available at http://www.oasis-open.org [Odeh et al., 2002] M. Odeh, I. Besson, S. Green and J. Sa. Modeling Processes Using RAD and UML Activity Diagrams: an Exploratory Study. In: Proceedings of the International Arab Conference on IT, Doha, Qatar, 2002. [ORiordan, 2002] David O'Riordan. Business process standards for web services. Technical report, 2002. http://www.webservicesarchitect.com. [OS, 2004] Open Source Workflow Engines Written in Java. http://www.manageability.org/blog/stuff/workflow_in_java/view (last accessed: 23.10.2004) [Ould, 1995] Ould, M.A., Business Processes: Modelling and Analysis for Re-Engineering and Improvement. J. Wiley, Chichester, 1995. [Papazoglou and Georgakopoulos, 2003] M. P. Papazoglou and D. Georgakopoulos. Service oriented computing. Communications of the ACM, October 2003. [Perrin and Godart, 2003] Olivier Perrin and Claude Godart. A contract model to deploy and control cooperative processes. In TES 2003, LNCS 2819, pages 78-90, 2003. [Perrin and Godart, 2004a] Olivier Perrin and Claude Godart. A model to support collaborative work in virtual enterprises. Data & Knowledge Engineering, xxx(xxx), 2004. [Perrin and Godart, 2004b] Olivier Perrin and Claude Godart. An approach to implement contracts as trusted intermediaries. 
In IEEE Conference on E-Commerce Technology, International Workshop on Electronic Contracting (WEC), San Diego, California, USA, July 2004. [Perrin et al., 2002] Olivier Perrin, Julia Bitcheva, and Claude Godart. Cooperative process coordination. In SEA 2002, 6th IASTED International Conference on Software Engineering and Applications, Cambridge, USA, November 2002. [Perrin et al., 2003] Olivier Perrin, Franck Wynen, Julia Bitcheva, and Claude Godart. A model to support collaborative work in virtual enterprises. In Business Process Management, pages 104-119, 2003. LNCS 2678. [Pierce and Hosoya, 2001] B. Pierce and H. Hosoya. Regular expression pattern matching for XML. In Proc. ACM Symp. on Principles of Programming Languages, pages 67-80, 2001.


[Pierce and Sangiorgi, 1993] B. Pierce and D. Sangiorgi. Typing and subtyping for mobile processes. In Proc. IEEE Symp. on Logic in Comp. Science, 1993. [Popkin, 2004] http://www.popkin.com/products/product_overview.htm [Pozewaunig et al., 1997] H. Pozewaunig, J. Eder and W. Liebhart. ePERT: Extending PERT for workflow management systems. In: Journal of Intelligent Information Systems, volume 10, pages 93-129, 1998. [Presley and Liles, 1995] Presley, A. and Liles, D., The use of IDEF0 for the design and specification of methodologies, 4th Industrial Engineering Research Conference, Nashville, TN, 1995. [PSL, 2004] http://www.mel.nist.gov/psl/. [Rabelo et al., 2000] R. Rabelo, L. M. Camarinha-Matos, and R. V. Vallejos. Agent-based brokerage for virtual enterprise creation in the moulds industry. In E-business and Virtual Enterprises, 2000. http://gsigma-grucon.ufsc.br/massyve. [Raut and Basavaraja, 2003] Ashutosh Raut, Ashwin Basavaraja, Enterprise Business Process Integration, Wipro Technologies, No-72, Keonics Electronics City, Bangalore, IEEE, 2003. [Reichert and Dadam, 1998] M. Reichert and P. Dadam. ADEPTflex - supporting dynamic changes of workflows without losing control. Journal of Intelligent Information Systems, 10(2):93-129, 1998. Special Issue on Workflow Management. [Reuter, 1992] A. Reuter. Contracts. In Transaction Models for Advanced Database Applications, 1992. [Reuter et al., 1997] A. Reuter, K. Schneider, and F. Schwenkreis. Contracts revisited. In S. Jajodia and L. Kerschberg, editors, Advanced Transaction Models and Architectures. Kluwer Academic Publishers, 1997. [Rittgen, 2000] Rittgen, P., A modelling method for developing web-based applications, in Proc. International Conference IRMA 2000, Anchorage, AK, 2000, pp. 135-140. [Roman et al., 2001] E. Roman, S. W. Ambler, and T. Jewell. Mastering Enterprise JavaBeans. Wiley, 2001. [Rosemann and zur Muehlen, 1998] M. Rosemann and M. zur Muehlen. 
Evaluation of Workflow Management Systems - A Meta Model Approach. In: Australian Journal of Information Systems, volume 6, number 1, pages 103-116, 1998. [RosettaNet, 2000] Clusters, Segments and PIPs Overview, 2000. http://www.rosettanet.org. [RosettaNet, 2001] RosettaNet Implementation Framework, Core Specification, 2001. http://www.rosettanet.org. [RosettaNet, 2002] RosettaNet Overview, 2002. http://www.rosettanet.org.


[RosettaNet, 2004a] RosettaNet Implementation Framework: Core Specification v02.00.00, 2004. http://www.rosettanet.org/. [RosettaNet, 2004b] Homepage for RosettaNet, available at http://www.rosettanet.org [Roxburgh, 2001] Ulrich Roxburgh. BizTalk orchestration: Transactions, exceptions, and debugging. Technical report, February 2001. [Rupietta, 1997] W. Rupietta. Organization and Role Models for Workflow Processes. Workflow Handbook 1997, Ed.: P. Lawrence. John Wiley & Sons Ltd, Chichester et al., pages 165-172, 1997. [Sadiq and Orlowska, 1997] W. Sadiq and M. Orlowska: On correctness issues in conceptual modelling of workflows, in: Galliers, R. et al. (Eds.): Proceedings of the 5th European Conference on Information Systems, Cork, Vol. II, 1997. [Sadiq and Orlowska, 1999] W. Sadiq and M.E. Orlowska. Applying Graph Reduction Techniques for Identifying Structural Conflicts in Process Models. In Proceedings of the 11th Conference on Advanced Information Systems Engineering (CAiSE'99), pages 195-209, Heidelberg, Germany, June 1999. [Scheer, 1992] Scheer, A.-W., (1992), Architektur Integrierter Informationssysteme, 2nd edition. Springer-Verlag, 1992 (in German). [Scheer, 1994a] A.-W. Scheer. Business Process Engineering: Reference Models for Industrial Enterprises. Springer, Berlin, Germany, 1994. [Scheer, 1994b] Scheer, A.-W., (1994), Business Process Engineering: Reference Models for Industrial Enterprises, 2nd ed., Springer, Berlin, 1994. [Scheer, 1998] Scheer, A.-W., 1998. ARIS Business Process Frameworks. Springer. [Schmitt, 2004] Michael Schmitt, Bertrand Grégoire, Christophe Incoul, Sophie Ramel, Pierre Brimont, and Eric Dubois. If business models could speak! Efficient: a framework of appraisal, design and simulation of electronic business transactions. [Schuler et al., 1999] C. Schuler, H. Schuldt, G. Alonso, and H. H. Schek. Workflows over workflows: Practical experiences with the integration of SAP R/3 business workflows in WISE. 
In Enterprise-wide and Cross-enterprise Workflow Management: Concepts, Systems, Applications, Paderborn, Germany, 1999. [Schulz et al., 2003] Karsten Schulz, Klaus-Dieter Platte, Torsten Leidig, Rainer Guggaver, Kim Elams, Arian Zwegers, Frank Lillehagen, Guy Doumeingts, Arne Berre, Maria Anastasiou, Maria Jose Nunez, Ricardo Goncalves, David Chen, and Michele Missikoff. A gap analysis - interoperability development for enterprise application and software - road maps. Technical report, 2003. [Schuster, 2000] H. Schuster, D. Baker, A. Cichocki, D. Georgakopoulos, and M. Rusinkiewicz. The collaboration management infrastructure. In ICDE Conference, pages 677-678, San Diego, California, 2000.


[Schuster et al., 2000] H. Schuster, D. Georgakopoulos, A. Cichocki, and D. Baker. Modeling and composing service-based and reference process-based multi-enterprise processes. In Proceedings of CAiSE 2000: Advanced Information Systems Engineering Conference, Stockholm, June 2000. [Segev et al., 2003] Arie Segev, Ajit Patankar, and J. Leon Zhao. E-business process interleaving: Managerial and technological implications. Technical report. [Seidl, 1990] H. Seidl. Deciding equivalence of finite tree automata. SIAM Journal of Computing, 19(3):424-437, 1990. [Senge, 1990] P. Senge. The Fifth Discipline: The art and practice of the learning organization. Century Business Publishers, London, 1990. [SFS, 1992] SFS 1980:100, 1980. Sekretesslagen, Justitiedepartementet L6 1980. Reprinted in SFS 1992:1474 (in Swedish). [Sheth et al., 1997] A. Sheth, D. Georgakopoulos, S. M. M. Joosten, M. Rusinkiewicz, W. Scacchi, J. Wileden, and A. Wolf. Report from the NSF workshop on workflow and process automation in information systems. ACM SIGSOFT Software Engineering Notes, 22(1), 1997. [Sivashanmugam et al., 2003] Kaarthik Sivashanmugam, John A. Miller, Amit Sheth, and Kunal Verma. Framework for semantic web process composition. Technical report, Department of Computer Science, UGA, June 2003. Technical report 03-008, LSDIS Lab. [Staffware, 2004] Staffware. http://www.staffware.com/. [Steen et al., 2002a] Steen, M.W.A., M.M. Lankhorst and R.G. van de Wetering, Modelling networked enterprises, in Proc. 6th International Enterprise Distributed Object Computing Conference (EDOC'02), Lausanne, Switzerland, Sept. 2002, pp. 109-119. [Steen et al., 2002b] M.W.A. Steen, M.M. Lankhorst, and R.G. van de Wetering. Modelling networked enterprises. In Proc. Sixth International Enterprise Distributed Object Computing Conference (EDOC'02), pages 109-119, Lausanne, Switzerland, 2002. https://doc.telin.nl/dscgi/ds.py/Get/File-22162. 
[Steen and ter Hofte, 2002] Integrating collaborative support and business transaction environments. Technical report. https://doc.telin.nl/dscgi/ds.py/Get/File-25639/CoCoNetWs1SteenTerHofteIntegratingCSandBTS.pdf. [Störrle, 2004] H. Störrle. Semantics and Verification of Data Flow in UML 2.0 Activities. To be published in: Electronic Notes in Theoretical Computer Science. Elsevier Science Inc., 2004. [Tagg, 2001] Roger Tagg. Workflow in different styles of virtual enterprise. In Proceedings of the Workshop on Information Technology for Virtual Enterprises, 2001. [Thatte, 2001] Satish Thatte. XLANG. Technical report, 2001. http://www.gotdotnet.com/team/xml_wsspecs/xlang-c/default.html.


[Thatte, 2003] S. Thatte. Specification: Business Process Execution Language for Web Services Version 1.1, May 2003. http://www.ibm.com/developerworks/library/ws-bpel/. [TIBCO, 2004] TIBCO InConcert. http://www.tibco.com/products/in_concert/. [Turner, 1993] K. J. Turner. Using Formal Description Techniques - An Introduction to Estelle, LOTOS and SDL. Wiley, 1993. [Tölle et al., 2003] Martin Tölle, Arian Zwegers, and Johan Vesterager. Globemen: Virtual enterprise reference architecture and methodology. Technical report, 2003. EU Project Deliverable. [UDDI, 2003] Universal Description, Discovery and Integration of Web Services (UDDI) 3, 2002. http://www.oasis-open.org/committees/uddi-spec/tcspecs.shtmluddiv3. [Ulieru and Unland, 2003] Mihaela Ulieru and Rainer Unland. Emergent holonic enterprises: How to efficiently find and decide on good partners. International Journal of Information Technology and Decision Making, 2(4), December 2003. [UN/EDIFACT, 2003] United Nations Directories for Electronic Data Interchange for Administration, Commerce, and Transport (UN/EDIFACT), 2003. http://www.unece.org/trade/untdid/welcome.html. [Urban et al., 2001] S. D. Urban, S.W. Dietrich, A. Saxena, and A. Sundermier. Interconnection of distributed components: an overview of current middleware solutions. Journal of Computing and Information Science in Engineering, 1:23-31, 2001. [van den Heuvel, 2003] Willem-Jan van den Heuvel and Zakaria Maamar. Intelligent web services moving toward a framework to compose. Communications of the ACM, 46(10):103-109, October 2003. [van den Heuvel and Weigand, 2000] Willem-Jan van den Heuvel and Hans Weigand. Cross-organizational workflow integration using contracts. In Business Object Component Workshop: EAI on OOPSLA 2000, 2000. [van den Heuvel and Weigand, 2003] Willem-Jan van den Heuvel and Hans Weigand. Coordinating web-service enabled business transactions with contracts. [van der Aalst, 1998] W. M. P. van der Aalst. The application of Petri nets to workflow management. 
The Journal of Circuits, Systems and Computers, 8(1):21-66, 1998. [van der Aalst, 2000] W.M.P. van der Aalst. Workflow Verification: Finding Control-Flow Errors using Petri-net-based Techniques. In W.M.P. van der Aalst, J. Desel, and A. Oberweis, editors, Business Process Management: Models, Techniques, and Empirical Studies, volume 1806 of Lecture Notes in Computer Science, pages 161-183. Springer-Verlag, Berlin, 2000. [van der Aalst, 2003] W. M. P. van der Aalst. Don't go with the flow: Web services composition standards exposed. IEEE Intelligent Systems, January 2003.


[van der Aalst and ter Hofstede, 2002] W.M.P. van der Aalst, A.H.M. ter Hofstede, B. Kiepuszewski, and A.P. Barros. Workflow Patterns. Distributed and Parallel Databases, volume 14(3), pages 5-51, July 2003. [van der Aalst and ter Hofstede, 2004] W.M.P. van der Aalst and A.H.M. ter Hofstede. YAWL: Yet Another Workflow Language. Accepted for publication in Information Systems. [van der Aalst et al., 2002a] W. M. P. van der Aalst, A. Hirnschall and H. M. W. Verbeek. An Alternative Way to Analyze Workflow Graphs. In: A. Banks Pidduck, J. Mylopoulos, C.C. Woo, M. Tamer Ozsu (Eds.): Advanced Information Systems Engineering, 14th International Conference, CAiSE 2002, Toronto, Canada, May 27-31, 2002. Springer Verlag, LNCS 2348, May 2002. [van der Aalst et al., 2002b] W. M. P. van der Aalst, M. Dumas, A. H. M. ter Hofstede, and P. Wohed. Pattern based analysis of BPML (and WSCI). Technical report, Queensland University of Technology, 2002. FIT-TR-2002-05, http://tmitwww.tn.tue.nl/staff/wvdaalst/Publications/pl76.pdf. [van der Aalst et al., 2003] W.M.P. van der Aalst, A. H. M. ter Hofstede, and M. Weske. Business process management: A survey. In Conference on Business Process Management: On the Application of Formal Methods to Process Aware Information Systems. LNCS, June 2003. [van der Aalst et al., 2004] W.M.P. van der Aalst, L. Aldred, M. Dumas, and A.H.M. ter Hofstede. Design and implementation of the YAWL system. In The 16th International Conference on Advanced Information Systems Engineering (CAiSE'04), Riga, Latvia, June 2004. Springer Verlag. [Verbeek, 2001] H. M. W. Verbeek, T. Basten, and W. M. P. van der Aalst. Diagnosing workflow processes using Woflan. The Computer Journal, 44:246-279, 2001. [Vissers, 1998] Vissers, J. M. H., 1998. Health care management modelling: a process perspective, Health Care Management Science, 1, pp. 77-85. [Visuera, 2003] Visuera. Visuera Process Manager [online]. 
Available from: http://www.visuera.com [Accessed April 2003]. [VITA Nova, 2002] VITA Nova consortium, 2002. Projektplan VITA Nova I [online]. Available from http://www.ida.his.se/ida/research [Accessed April 2003]. [Wangler et al., 2003a] Wangler B., hlfeldt R-M., and Perjons E. Process oriented information systems architectures in health care, 2003. [Wangler et al., 2003b] Wangler B., Alfeldt R-M., and Perjons E. Process oriented information systems architectures in health care. Health Informatics Journal, 9(4):253265, 2003. [Wangler et al., 2003c] Wangler, B., hlfeldt, R. and Perjons, E. (2003) Process Oriented Information Systems Architectures in Healthcare. Proceedings of the 8th International Symposium for Health Information Management Research (iSHIMR 2003). June 1-3, 2003, Bors, Sweden. [WebV2, 2003] Dynamic and mobile federated business process execution. Technical report, December 2003. A WebV2 Whitepaper.


[Weigand and van den Heuvel, 1998] Hans Weigand and Willem-Jan van den Heuvel. Meta-patterns for electronic commerce transactions based on FLBC.
[Weikum and Schek, 1992] G. Weikum and H. Schek. Concepts and applications of multi-level transactions and open nested transactions. In A. Elmagarmid, editor, Transaction Models for Advanced Database Applications. Morgan Kaufmann, 1992.
[WfMC-I1, 2002] The Workflow Management Coalition. Workflow Process Definition Interface - XML Process Definition Language. A Workflow Management Coalition Specification, Document Number WFMC-TC-1025, 2002.
[Wf-XML, 2003] Wf-XML 2.0: XML Based Protocol for Run-Time Integration of Process Engines, October 2003. Draft.
[Winograd and Flores, 1987] T. Winograd and F. Flores. Understanding Computers and Cognition. Addison-Wesley, 1987. ISBN 0201112973.
[Wodtke and Weikum, 1997] D. Wodtke and G. Weikum. A formal foundation for distributed workflow execution based on state charts. In: ICDT'97, pages 230-246, 1997.
[Wodtke et al., 1996] D. Wodtke et al. The Mentor project: Steps towards enterprise-wide workflow management. In Proceedings of the International Conference on Data Engineering, New Orleans, 1996.
[Wohed et al., 2002] P. Wohed, W.M.P. van der Aalst, M. Dumas, and A.H.M. ter Hofstede. Pattern-based analysis of BPEL4WS. Technical report FIT-TR-2002-04, Queensland University of Technology, 2002. http://tmitwww.tn.tue.nl/staff/wvdaalst/Publications/pl75.pdf.
[Workshop, 2002] Workshop on data derivation and provenance, October 2002. http://www-fp.mcs.anl.gov/~foster/provenance/.
[WfMC, 1998] Workflow Management Coalition: Workflow Management Coalition Audit Data Specification. Document Number WFMC-TC-1015, Document Status - Version 1.1, September 1998. http://www.wfmc.org/standards/docs/TC-1015_v11_1998.pdf (2003-11-23).
[WfMC, 1999] Workflow Management Coalition: Terminology & Glossary. Document Number WFMC-TC-1011, Document Status - Issue 3.0, February 1999. http://www.wfmc.org/standards/docs/TC-1011_term_glossary_v3.pdf (2003-11-23).
[WfMC, 2000] Workflow Management Coalition: Workflow Standard - Interoperability Wf-XML Binding. Document Number WFMC-TC-1023, Document Status - Official Version 1.0, May 2000. http://www.wfmc.org/standards/docs/Wf-XML-1.0.pdf (2003-11-23).
[WfMC, 2002] Workflow Management Coalition: Workflow Process Definition Interface - XML Process Definition Language. Document Number WFMC-TC-1025, Document Status - Version 1.0 Final Draft, October 2002. http://www.wfmc.org/standards/docs/TC-1025_10_xpdl_102502.pdf (2003-11-23).


[WFMC-TC-1011, 1999] Workflow Management Coalition. Terminology & Glossary. WFMC-TC-1011, February 1999.
[WPDL, 2004] WPDL.
[WSDL, 2002] Web Services Conversation Language (WSCL), 2002. http://www.w3.org/TR/2002/NOTE-wscl10-20020314.
[X12, 2003] X12 EDI (Electronic Data Interchange), 2003. http://www.x12.org/.
[YAWL, 2004] YAWL Project Homepage. http://sourceforge.net/projects/yawl (last accessed: 23.10.2004).
[Yang and Papazoglou, 2000] J. Yang and M.P. Papazoglou. Interoperation support for electronic business. Communications of the ACM, 43(6):39-47, 2000.
[Yang et al., 2001] J. Yang, W.J. van den Heuvel, and M.P. Papazoglou. Service deployment for virtual enterprises. In xxx, xxx.
[Zhao and Stohr, 1999] J.L. Zhao and E.A. Stohr. Temporal workflow management in a claim handling system. In Work Activities Coordination and Collaboration, International Joint Conference, WACC'99, San Francisco, Proceedings, pages 187-195, 1999.
[zur Muehlen, 1999] M. zur Muehlen. Evaluation of Workflow Management Systems Using Meta Models. In: Proceedings of the 32nd Hawaii International Conference on System Sciences, IEEE, 1999.
[zur Muehlen, 2001] zur Muehlen, M.: Workflow-based Process Controlling - Or: What You Can Measure You Can Control. In: Fischer, L. (Ed.): Workflow Handbook 2001, Future Strategies, Lighthouse Point, 2001.
[zur Muehlen and Becker, 1999] M. zur Muehlen and J. Becker. WPDL - State-of-the-Art and Development Perspectives of a Meta-Language. In: Proceedings of the 1st KnowTech Forum, September 17th-19th 1999, Potsdam, 1999.
[zur Muehlen and Allen, 2000] zur Muehlen, M.; Allen, R.: Workflow Classification. Embedded & Autonomous Workflow Management Systems. Workflow Management Coalition White Paper, March 10th 2000. http://www.aiim.org/wfmc/standards/docs/MzM_RA_WfMC_WP_Embedded_and_Autonomous_Workflow.pdf (2004-10-01).
[Åhlfeldt, 2002] Åhlfeldt, R.-M., 2002. Information Security in Home Healthcare: A Case Study. In Proceedings of the Third International Conference of the Australian Institute of Computer Ethics (AiCE), September 30th, Sydney, Australia, pp. 1-10.

IX.7 Bibliography Non functional aspects

[1] E. Amoroso et al. Toward an approach to measuring software trust. In IEEE Computer Society Symposium on Research in Security and Privacy, 1991.


[2] R. J. Anderson et al. Secure books: Protecting the distribution of knowledge. In Proceedings of Security Protocols Workshop '97, 1997.
[3] G. D. Abowd, R. Allen, and D. Garlan. Formalizing style to understand descriptions of software architecture. ACM Transactions on Software Engineering and Methodology, 4(4):319-364, October 1995.
[4] F. A. Aagesen. QoS frameworks for open distributed processing systems. Telektronikk, 93(1):26-41, 1997.
[5] J. Ø. Aagedal. Towards an ODP-compliant object definition language with QoS support. In Proceedings of 5th International Workshop on Interactive Distributed Multimedia Systems and Telecommunication Services (IDMS'98), pages 183-194, Oslo, Norway, 1998.
[6] J. Ø. Aagedal. Component quality modelling language (CQML). Technical Report STF40 A99048, SINTEF, Oslo, 1999.
[7] Jan Øyvind Aagedal. Quality of Service Support in Development of Distributed Systems. PhD thesis, Department of Informatics, Faculty of Mathematics and Natural Sciences, University of Oslo, 2001. Unipub forlag 2001, ISSN 1501-7710.
[8] J. Ø. Aagedal and A.-J. Berre. ODP-based QoS-support in UML. In Proceedings of First International Enterprise Distributed Object Computing Workshop (EDOC '97), pages 310-321, Gold Coast, Australia, 1997.
[9] J. Ø. Aagedal, A.-J. Berre, V. Goebel, and T. Plagemann. Object-oriented role-modeling with QoS support for the ODP viewpoints. Technical Report STF40 A97067, SINTEF, 1997.
[10] F. Aquilani, S. Balsamo, and P. Inverardi. Performance analysis at the software architectural design level. Performance Evaluation, 45(2-3), July 2001.
[11] M. Abadi, M. Burrows, B. W. Lampson, and G. Plotkin. A calculus for access control in distributed systems. ACM TOPLAS, 15(4):706, September 1993.
[12] M. D. Abrams. Trusted system concepts. Computers and Security, pages 45-56, 1995.
[13] C. Aurrecoechea, A. Campbell, and L. Hauw. Survey of QoS architectures. Technical Report MPG-95-18, Center for Telecommunication Research, Columbia University, 1997.
[14] A. Carroll, M. Juarez, J. Polk, and T. Leininger. Palladium: A business overview, 2002. Retrieved Oct 2002, from http://www.microsoft.com/PressPass/features/2002/jul02/0724palladiumwp.asp.
[15] J. Ø. Aagedal, F. den Braber, T. Dimitrakos, B. A. Gran, D. Raptis, and K. Stølen. Model-based risk assessment to improve enterprise security. In Proceedings of 6th International Conference on Enterprise Distributed Object Computing (EDOC'02), Lausanne, Switzerland, September 2002.
[16] Bob Atkinson, Giovanni Della-Libera, Satoshi Hada, Maryann Hondo, Phillip Hallam-Baker, Chris Kaler, Johannes Klein, Brian LaMacchia, Paul Leach, John Manferdelli, Hiroshi Maruyama, Anthony Nadalin, Nataraj Nagaratnam, Hemma Prafullchandra, John Shewchuk, and Dan Simon. Web Services Security (WS-Security), April 2002.
[17] C. Adams and S. Farrell. RFC 2510 - Internet X.509 public key infrastructure certificate management protocols, 1999.


[18] D. Agrawal, J. Giles, K.-W. Lee, K. Voruganti, and K. Filali-Adib. Policy-based validation of SAN configuration. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 77-86, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[19] M. D. Abrams and M. V. Joyce. New thinking about information technology security. Computers and Security, 14(1):69-81, 1995.
[20] M. D. Abrams and M. V. Joyce. Trusted computing update. Computers and Security, 14(1):57-68, 1995.
[21] M. Abadi and L. Lamport. An old-fashioned recipe for real-time. ACM Transactions on Programming Languages and Systems, 16(5):1543-1571, September 1994.
[22] M. Alfano. A quality of service management architecture (QoSMA): A preliminary study. Technical Report TR-95-070, Berkeley International Computer Science Institute, 1995.
[23] Jan Øyvind Aagedal and Zoran Milosevic. Enterprise modeling and QoS for command and control systems. In Second International Enterprise Distributed Object Computing Workshop, pages 88-101. IEEE, November 1998.
[24] Gaining real-time business value from web services management: Leveraging the content and context of XML web applications, 2002. White paper.
[25] R. J. Anderson, V. Matyas, Jr., and F. A. P. Petitcolas. The eternal resource locator: An alternative means of establishing trust on the world wide web. In 3rd USENIX Workshop on Electronic Commerce, Boston, Massachusetts, USA, 1998.
[26] J. Ø. Aagedal, Z. Milosevic, and A. Wood. Modelling virtual enterprises and the character of their interactions. In Proceedings of Ninth International Workshop on Research Issues on Data Engineering: Information Technology for Virtual Enterprises (RIDE-VE'99), pages 19-26, Sydney, Australia, 1999.
[27] R. J. Anderson. NHS-wide networking and patient confidentiality: Britain seems headed for a poor solution. British Medical Journal, 311:5-6, 1995.
[28] R. J. Anderson. Clinical system security - interim guidelines. British Medical Journal, 312:109-111, 1996.
[29] R. J. Anderson. Patient confidentiality - at risk from NHS-wide networking. In Proceedings of Healthcare 96, 1996.
[30] R. J. Anderson. A security policy model for clinical information systems. In IEEE Symposium on Security and Privacy, Oakland, California, USA, 1996. IEEE Computer Society Press.
[31] R. J. Anderson. An update on the BMA security policy. In Proceedings of 1996 Cambridge Workshop on Personal Information - Security, Engineering and Ethics, pages 217-234, 1996.
[32] R. J. Anderson. Problems with the NHS cryptography strategy, 1997.
[33] R. J. Anderson. The DeCODE proposal for an Icelandic Health Database, 1998.
[34] R. J. Anderson. Information technology in medical practice: Safety and privacy lessons from the United Kingdom, 1998.
[35] R. J. Anderson. Remarks on the Caldicott Report, 1998.
[36] R. Anderson. Security Engineering. Wiley, 2001.


[37] Anne Anderson. An introduction to the web services policy language (WSPL). In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 189-192, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[38] J. Ø. Aagedal and J. Oldevik. DEM: A data exchange facility for virtual enterprises. In Proceedings of Sixth Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET-ICE '97), pages 17-22, Cambridge, Massachusetts, USA, 1997.
[39] A. Abdul-Rahman and S. Hailes. Security issues in mobile systems, 1995.
[40] A. Abdul-Rahman and S. Hailes. Supporting trust in virtual communities. In Hawaii International Conference on System Sciences 33, Maui, Hawaii, 2000.
[41] J. I. Asensio. Contribución a la Especificación y Gestión Integrada de la Calidad de Servicio en Aplicaciones de Objetos Distribuidos. PhD thesis, University of Valladolid, 2000.
[42] J. I. Asensio and V. A. Villagrá. A UML profile for QoS management information specification in distributed object-based applications. In HPOVUA 2000, Santorini, Greece, 2000.
[43] Khaled Alghathbar and Duminda Wijesekera. FlowUML: A framework to enforce information flow security policies in UML-based requirements engineering. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 193-196, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[44] M. Blaze et al. Managing trust in an information-labeling system. European Transactions on Telecommunications, 8:491-501, 1997.
[45] M. Blaze et al. RFC 2704 - the KeyNote trust management system (version 2), 1999.
[46] M. Blaze et al. The role of trust management in distributed systems security. In Jan Vitek and Christian Jensen, editors, Secure Internet Programming: Security Issues for Mobile and Distributed Objects. Springer-Verlag, 1999.
[47] C. Burt et al. Quality of Service issues related to transforming platform independent models to platform specific models. In Proceedings of Sixth International Enterprise Distributed Object Computing Conference, Lausanne, Switzerland, September 2002. IEEE.
[48] R. Bragg et al. Network Security: The Complete Reference. McGraw-Hill/Osborne, Emeryville, California, 2003.
[49] M. Burrows, M. Abadi, and R. M. Needham. A logic of authentication. ACM Transactions on Computer Systems, 8(1):18-36, 1990.
[50] J. R. Barney. Firm resources and sustained competitive advantage. Journal of Management, 17:99-120, 1991.
[51] J. S. Barnes. The mobile commerce value chain: analysis and future developments. International Journal of Information Management, 22, 2002.
[52] D. Barry. Web Services and Service-Oriented Architectures: The Savvy Manager's Guide. Morgan Kaufmann: Elsevier Science, San Francisco, Calif., 2003.
[53] W. Burakowski and A. Beben, editors. Architectures for Quality of Service in the Internet: International Workshop, Art-QoS, volume 2698 of Lecture Notes in Computer Science, Berlin, 2003. Springer-Verlag.
[54] G. Blair, L. Blair, H. Bowman, J. Bryans, A. Chetwynd, J. Derrick, and D. Hutchison. V-QoS. Lancaster University and University of Kent, 2000.


[55] H. Bowman, E. A. Boiten, J. Derrick, and M. Steen. Viewpoint consistency in ODP, a general interpretation. In First IFIP International Workshop on Formal Methods for Open Object-Based Distributed Systems, pages 189-204. Chapman and Hall, March 1996.
[56] E. Becker, W. Buhse, D. Günnewig, and N. Rump, editors. Digital Rights Management: Technological, Economic, Legal and Political Aspects, volume 2770 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 2003.
[57] E. Becker, W. Buhse, D. Günnewig, and N. Rump. Digital rights management: technological, economic, legal and political aspects, volume 2770 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 2003.
[58] C. Burt, B. Bryant, R. Raje, A. Olson, and M. Auguston. Model driven security: Unification of authorization models for fine-grain access control. In Proceedings of 7th International Conference on Enterprise Distributed Object Computing (EDOC'03), Brisbane, Australia, September 2003.
[59] G. Blair, L. Blair, and J.-B. Stefani. A specification architecture for multimedia systems in Open Distributed Processing. Computer Networks and ISDN Systems, 29:473-500, 1997. Special Issue on Specification Architecture.
[60] G. S. Blair, L. Blair, and J. B. Stefani. A specification architecture for multimedia systems in Open Distributed Processing. Computer Networks and ISDN Systems, 29(4):473-500, 1997.
[61] Don Box, Francisco Curbera, Maryann Hondo, Chris Kaler, Dave Langworthy, Anthony Nadalin, Nataraj Nagaratnam, Mark Nottingham, Claus von Riegen, and John Shewchuk. Web Services Policy Framework (WS-Policy), May 2003. Version 1.1.
[62] Mike Burmester and Yvo G. Desmedt. Is hierarchical public-key certification the next target for hackers? Communications of the ACM, 47(8):68-74, August 2004.
[63] Siddharth Bajaj, Giovanni Della-Libera, Brendan Dixon, Mike Dusche, Maryann Hondo, Matt Hur, Chris Kaler, Hal Lockhart, Hiroshi Maruyama, Anthony Nadalin, Nataraj Nagaratnam, Andrew Nash, Hemma Prafullchandra, and John Shewchuk. Web Services Federation Language (WS-Federation), July 2003. Draft Version 1.0.
[64] R. Boulton, T. Elliott, B. Libert, and S. Samek. A business model for the new economy. Journal of Business Strategy, pages 29-35, July-August 2000.
[65] V. Berget. A survey of extended transaction models in a distributed cooperative environment. Master's thesis, Department of Informatics, University of Oslo, 1998.
[66] S. Berinato. The future of security. CIO Magazine, December 2003.
[67] Matt Blaze, Joan Feigenbaum, John Ioannidis, and Angelos D. Keromytis. RFC 2704, The KeyNote Trust-Management System, Version 2. IETF, September 1999.
[68] M. Blaze, J. Feigenbaum, and A. D. Keromytis. KeyNote: Trust management for public-key infrastructures. In Security Protocols International Workshop, Cambridge, England, 1998.
[69] M. Blaze, J. Feigenbaum, and J. Lacy. Decentralized trust management. In IEEE Conference on Security and Privacy, Oakland, California, USA, 1996.
[70] M. Blaze, J. Feigenbaum, and J. Lacy. Managing trust in medical information systems. Technical report, AT&T, 1996.
[71] M. Blaze, J. Feigenbaum, and M. Strauss. Compliance checking in the PolicyMaker trust management system. In Financial Cryptography: 2nd International Conference, Anguilla, British West Indies, 1998. Springer-Verlag.


[72] C. Becker and K. Geihs. MAQS - management for adaptive QoS-enabled services. In Proceedings of IEEE Workshop on Middleware for Distributed Real-Time Systems and Services, San Francisco, USA, 1997.
[73] C. Becker and K. Geihs. Quality of service aspects of distributed programs. In Proceedings of International Workshop on Aspect-Oriented Programming at ICSE'98, Kyoto, Japan, 1998.
[74] Christian Becker and Kurt Geihs. QoS as a competitive advantage for distributed object systems: From enterprise objects to a global electronic market. In Second International Enterprise Distributed Object Computing Workshop, pages 230-238. IEEE, 1998.
[75] J. Bosch and H. Grahn. Characterising the performance of three architectural styles. In Proceedings of First Int. Workshop on Software and Performance, Santa Fe, NM, USA, October 1998.
[76] C. Becker and K. Geihs. Generic QoS specifications for CORBA. In Proceedings of 11. ITG/VDE Fachtagung Kommunikation in Verteilten Systemen (KiVS'99), Darmstadt, Germany, 1999.
[77] C. Becker and K. Geihs. Quality of service and object-oriented middleware - multiple concerns and their separation. In Proceedings of DDMA Workshop at ICDCS, Phoenix, Arizona, USA, 2001.
[78] C. Boyens and O. Günther. Trust is not enough: Privacy and security in ASP and web service environments. In Y. Manolopoulos and P. Návrat, editors, ADBIS 2002, volume 2435 of Lecture Notes in Computer Science, pages 8-22, Berlin, 2002. Springer-Verlag.
[79] C. Becker, K. Geihs, and J. Gramberg. Representation of Quality of Service preferences by contract hierarchies. In Proceedings of Elektronische Dienstleistungswirtschaft und Financial Engineering (FAN'99), Augsburg, Germany, 1999.
[80] P. Ballon, S. Helmus, and R. Pas. Business models for next generation wireless services. In GigaMobile 2001, 2001.
[81] Gustav Boström, Martin Henkel, and Jaana Wäyrynen. Aspects in the Agile Toolbox, 2004. To be published.
[82] B. Boehm and H. In. Identifying quality-requirements conflicts. IEEE Software, pages 25-35, March 1996.
[83] M. Blaze, J. Ioannidis, and A. D. Keromytis. Trust management and network layer security protocols. In Cambridge Protocols Workshop, Cambridge, 1999.
[84] Matt Blaze, John Ioannidis, and Angelos D. Keromytis. Trust management and network layer security protocols. In Proceedings of the 1999 Security Protocols International Workshop, Cambridge, England, April 1999.
[85] Daryl P. Black. Building Switched Networks: Multilayer Switching, QoS, IP Multicast, Network Policy and Service Level Agreements. Addison Wesley, 1999.
[86] M. Blaze. Using the KeyNote trust management system. Technical report, AT&T Research Labs, 1999.
[87] Arosha K. Bandara, Emil C. Lupu, Jonathan Moffett, and Alessandra Russo. A goal-based approach to policy refinement. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 229-239, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.


[88] W. B. Bradley and D. P. Maher. The NEMO P2P service orchestration framework. In Proceedings of the 37th Hawaii International Conference on System Sciences, January 2004.
[89] Ron Bodkin. Enterprise security aspects. In AOSD Technology for Application-level Security (AOSDSEC), Aspect-Oriented Software Development Workshop, 2004.
[90] H. Bohnenkamp. Compositional Solution of Stochastic Process Algebra Models. PhD thesis, Rheinisch-Westfälische Technische Hochschule, Aachen, Germany, February 2002.
[91] Gustav Boström. Database encryption as an aspect. In AOSD Technology for Application-Level Security, AOSD'04 Workshop, Lancaster, UK, March 2004.
[92] H. Bouwman. Business models, value webs, design and metrics: a state of the art, with special emphasis on business models for the mobile domain. Technical Report B4U/D1.1, Telematica Instituut, Enschede, 2003.
[93] G. Brahnmath, R. R. Raje, A. Olson, B. Bryant, M. Auguston, and C. Burt. A Quality of Service catalog for software components. In Proceedings of the Southeastern Software Engineering Conference, pages 513-520, Huntsville, Alabama, USA, April 2002.
[94] G. Blair and J.-B. Stefani. Open Distributed Processing and Multimedia. Addison-Wesley, 1997.
[95] Moritz Y. Becker and Peter Sewell. Cassandra: Distributed access control policies with tunable expressiveness. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 159-168, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[96] S. Burns. Web services security - an overview, 2001.
[97] M. F. Bertoa and A. Vallecillo. Quality attributes for COTS components. In Proceedings of 6th Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2002), Malaga, Spain, 2002.
[98] H. Bouwman and E. van den Ham. Business models and eMetrics, a state of the art. In B. Preissl, H. Bouwman, and C. Steinfield, editors, E-Life after the Dot Com Bust. Springer Verlag, Berlin, 2003.
[99] H. Bouwman and L. van de Wijngaert. E-commerce B2C research in context: policy capturing, channel choice and customer value. In 16th Bled eCommerce Conference, eTransformation, Bled, Slovenia, June 2003.
[100] Y.-H. Chu et al. REFEREE: Trust management for web applications. AT&T Research Labs, 1997.
[101] K. Crisler, M. Anneroth, A. Aftelak, and P. Pulli. The human perspective of the wireless world. Computer Communications, 26:11-18, 2003.
[102] Andrew T. Campbell, Cristina Aurrecoechea, and Linda Hauw. A review of QoS architectures (invited paper). In Proceedings of the 4th International Workshop on Quality of Service (IWQoS), 1996.
[103] Cape Clear Software Ltd. Creating Web Services From WebSphere Applications Using Cape Clear Software, 2002.
[104] Jorge Cardoso. Quality of Service and Semantic Composition of Workflows. PhD thesis, Department of Computer Science, University of Georgia, Athens, GA, USA, 2002.
[105] M. Castells. The Rise of the Network Society. Blackwell, Oxford, 1996.


[106] G. Coulson, G. S. Blair, J. B. Stefani, F. Horn, and L. Hazard. Supporting the real-time requirements of continuous media in Open Distributed Processing. Technical Report MPG-92-35, University of Lancaster, 1994.
[107] C. Maitland, J. M. Bauer, and J. R. Westerveld. The European market for mobile data: evolving value chains and industry structures. Telecommunications Policy, 26(9/10):485-504, 2002.
[108] A. T. Campbell, G. Coulson, F. Garcia, D. Hutchinson, and H. Leopold. Integrated Quality of Service for multimedia communications. In Twelfth Annual Joint Conference of the IEEE Computer and Communications Societies. Networking: Foundation for the Future (INFOCOM '93), volume 2, pages 732-739. IEEE, 1993.
[109] Z. Chen and A. Dubinsky. A conceptual model of perceived customer value in e-commerce: A preliminary investigation. Psychology and Marketing, 20(4):323-347, 2003.
[110] L. Cysneiros and J. do Prado Leite. Non-functional requirements: From elicitation to modelling languages. In International Conference on Software Engineering, 2002.
[111] D. Cohen, N. Goldman, and K. Narayanaswamy. Adding performance information to ADT interfaces. ACM SIGPLAN Notices, 29(8), 1994. Proceedings of the Interface Definition Languages Workshop.
[112] B. Christianson and W. S. Harbison. Why isn't trust transitive? In Security Protocols International Workshop. University of Cambridge, 1996.
[113] T. H. Clark and G. L. Ho. Electronic intermediaries: Trust building and market differentiation. In 32nd Annual Hawaii International Conference on System Sciences, Hawaii, 1999.
[114] Chiariglione. MPEG-21, 2004. Retrieved Sept. 2004, from http://www.chiariglione.org/mpeg/working_documents.htm.
[115] Y.-H. Chu. Trust management for the World Wide Web. Massachusetts Institute of Technology, 1997.
[116] J. Clabby. Web Services Explained: Solutions and Applications for the Real World. Prentice Hall, Upper Saddle River, NJ, 2002.
[117] Huseyin Cavusoglu, Birendra Mishra, and Srinivasan Raghunathan. A model for evaluating IT security investments. Communications of the ACM, 47(7):87-92, July 2004.
[118] L. Chung, B. A. Nixon, and E. Yu. Using quality requirements to systematically develop quality software. In Proceedings of 4th International Conference on Software Quality, McLean, VA, USA, 1994.
[119] A. Cockburn. Goals and use cases. Journal of Object-Oriented Programming, 10(5):35-40, 1997.
[120] Julie E. Cohen. DRM and privacy. Communications of the ACM, 46(4):47-49, 2003.
[121] ContentGuard. MPEG-REL, 2004. Retrieved Sept 2004, from http://www.contentguard.com/MPEGREL_home.asp.
[122] Coral Consortium, 2004. http://www.coral-interop.org/.
[123] Brad Cox. Superdistribution. Wired Magazine, pages 89-92, September 1994.
[124] Brad Cox. Superdistribution: Objects as Property on the Electronic Frontier. Addison-Wesley, 1996.


[125] G. Camponovo and Y. Pigneur. Analysing the m-business landscape. Annals of Telecommunications, 2002.
[126] G. Camponovo and Y. Pigneur. Business models analysis applied to mobile business. In ICEIS 2003, page 11, 2003.
[127] D. Chalmers and M. Sloman. A survey of Quality of Service in mobile computing environments. IEEE Communications Surveys, 1999.
[128] J. Cross and D. Schmidt. Applying the Quality Connector Pattern to optimize distributed real-time and embedded middleware. In Patterns and Skeletons for Distributed and Parallel Computing. Springer Verlag, 2002.
[129] S. Chatterjee, J. Sydir, and B. Sabata. Modeling applications for adaptive QoS-based resource management. In Proceedings of the 2nd IEEE High Assurance Systems Engineering Workshop, August 1997.
[130] S. Chatterjee, B. Sabata, and J. J. Sydir. ERDoS QoS architecture. Technical Report ITAD-1667-TR-98-075, SRI International, Menlo Park, CA, 1998.
[131] S. Cárdenas and M. V. Zelkowitz. Evaluation criteria for functional specifications. In Proceedings of 12th ICSE, Nice, France, 1990.
[132] N. Damianou et al. The Ponder policy specification language. In Policy 2001: Workshop on Policies for Distributed Systems and Networks, volume 1995 of Lecture Notes in Computer Science, Bristol, UK, 2001. Springer-Verlag.
[133] R. Demkes. Comet: A comprehensive methodology for supporting telematics investment decisions. Technical report, Telematica Instituut, Enschede, 1999.
[134] F. Duclos, J. Estublier, and P. Morat. Describing and using non-functional aspects in component-based applications. In 1st International Conference on Aspect-Oriented Software Development, 2002.
[135] E. X. de Jesus. Security implications of web services - web services need all the security features of any web-based operation, and more, 2001.
[136] Gary Duzan, Joseph Loyall, Richard Schantz, Richard Shapiro, and John Zinky. Building adaptive distributed applications with middleware and aspects. In Third International Conference on Aspect-Oriented Software Development, 2004.
[137] A. Di Marco and P. Inverardi. Compositional generation of software architecture performance QN models. In Proceedings 4th Working IEEE/IFIP Conference on Software Architecture (WICSA 2004), pages 37-46, Oslo, Norway, June 2004.
[138] Trusted computer system evaluation criteria. Department of Defense, 1983.
[139] Y. Ding and H. Petersen. A new approach for delegation using hierarchical delegation tokens. Technical report, University of Technology Chemnitz-Zwickau, Department of Computer Science, 1995.
[140] T. G. Das and B. S. Teng. Resources and risk management in the strategic alliance making process. Journal of Management, 24(1):21-42, 1998.
[141] J. Daniel, B. Traverson, and S. Vignes. Integration of Quality of Service in distributed object systems. In Proceedings of IFIP TC6 WG6.1 Second International Working Conference on Distributed Applications and Interoperable Systems (DAIS'99), pages 31-43, Helsinki, Finland, 1999.


[142] B. Dunn. A manager's guide to web services. EAI Journal, pages 14-17, January 2003.
[143] Séverine Dusollier. Fair use by design in the European Copyright Directive of 2001. Communications of the ACM, 46(4):51-55, 2003.
[144] Bart de Win. Engineering Application-level Security using Aspect-Oriented Software Development. PhD thesis, Department of Computer Science, K.U. Leuven, March 2004.
[145] B. de Win, W. Joosen, and F. Piessens. AOSD and security: a practical assessment. In Workshop on Software engineering Properties of Languages for Aspect Technologies (SPLAT03), pages 1-6, 2003.
[146] B. de Win, F. Piessens, W. Joosen, and T. Verhanneman. On the importance of the Separation of Concerns principle in secure software engineering. In Workshop on the Application of Engineering Principles to System Security Design, 2002.
[147] B. de Win, B. Vanhaute, and B. de Decker. How Aspect-Oriented programming can help to build secure software. Informatica, 26(2):141-149, 2002.
[148] F. Dzubeck. Application service providers: An old idea made new, 1999.
[149] T. Elrad, R. Filman, and A. Bader. Aspect-Oriented programming: an introduction. Communications of the ACM, 44(10), 2001.
[150] C. Evans, C. D. W. Feather, A. Hopmann, M. Presler-Marshall, and P. Resnick. PICSRules 1.1.
[151] E. F. Ecklund, V. H. Goebel, J. Ø. Aagedal, E. Bach-Gansmo, and T. Plagemann. A requirements model for a Quality of Service-aware multimedia lecture on demand system. In Proceedings of ED-MEDIA 2001 World Conference on Educational Multimedia, Hypermedia and Telecommunications, page 2, Tampere, Finland, 2001.
[152] D. Ecklund, V. Goebel, T. Plagemann, E. Ecklund, C. Griwodz, J. Ø. Aagedal, K. Lund, and A.-J. Berre. QoS management middleware - a separable, reusable solution. In Proceedings of IWQoS 2001, Karlsruhe, Germany, 2001.
[153] Ahmed Elfatatry and Paul Layzell. Negotiating in service-oriented environments. Communications of the ACM, 47(8):103-108, August 2004.
[154] D. Elgesem. The modal logic of agency. Journal of Philosophical Logic, 2(2):1-46, 1997.
[155] B. Elvesæter. A replication framework architecture for mobile information systems. Master's thesis, Department of Informatics, University of Oslo, 2000.
[156] Jacky Estublier, Anh-Tuyet Le, and Jorge Villalobos. Using federations for flexible SCM systems. In ICSE workshop SCM 2003, volume 2649 of Lecture Notes in Computer Science, Portland, Oregon, USA, May 2003. Springer-Verlag.
[157] F. Eliassen and S. Mehus. Type checking stream flow endpoints. In Proceedings of IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing (Middleware '98), The Lake District, UK, 1998.
[158] Csaba Egyhazy and Raj Mukherji. Interoperability architecture using RM-ODP. Communications of the ACM, 47(2):93-97, February 2004.
[159] F. Eliassen and H. O. Rafaelsen. A trading model of stream binding selection. In Proceedings of The Fifth IFIP Conference on Intelligence in Networks (SmartNet'99), pages 251-264, Bangkok, Thailand, 1999.

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[160] John S. Erickson. Fair use, DRM, and trusted computing. Communications of the ACM, 46(4):3439, 2003. [161] W. Estrem. An evaluation framework for deploying web services in the next generation manufacturing enterprise. Robotics and Computer Integrated Manufacturing, 19:509519, 2003. [162] N. Evans. Clear 10 hurdles to Web Services Success. Fawcette Technical Publications, 2002. [163] R. Filman et al. Inserting -ilities by controlling communications. Communications of the ACM, 45(1), 2002. [164] Xavier Franch and Pere Botella. Putting non-functional requirements into software architecture. In Proceedings of the 9th International Workshop on Software Specification and Design, pages 6067, Ise-Shima, Japan, 1998. IEEE Computer Society. [165] T. Fitzpatrick, G. Blair, G. Coulson, N. Davies, and P. Robin. Supporting adaptive multimedia applications through open bindings. In Proceedings of 4th International Conference on Configurable Distributed Systems (ICCDS 98), Annapolis, Maryland, USA, 1998. [166] Paolo Falcarin, Mario Baldi, and Daniele Mazzocchi. Software tampering detection using aop and mobile code. In AOSD Technology for Application-level Security (AOSDSEC), Aspect Oriented Software Development Workshop, 2004. [167] J. Feigenbaum. Overview of the AT&T Labs trust management project: Position paper. In Proceedings of 1998 Cambridge University Workshop on Trust and Delegation, Lecture Notes in Computer Science, 1998. [168] Edward W. Felten. A skeptical view of DRM and Fair Use. Communications of the ACM, 46(4):5759, 2003. [169] N. Fenton. Software Metrics: A Rigorous Approach. Chapman-Hall, 1991. [170] E. B. Fernandez. Web services security. In P. Fletcher and M. Waterhouse, editors, Web Services Business Strategies and Architectures. Expert Press Ltd, UK, 2002. [171] Robert E. Filman and Daniel P. Friedman. Aspect-oriented programming is quantification and obliviousness. In Workshop on Advanced Separation of Concerns, OOPSLA 2000, Minneapolis, October 2000. 
[172] J.A. Fitzsimmons and M.J. Fitzsimmons. New Service Development: Creating memorable experiences. Sage Publications, Thousand Oaks, CA, 2000. [173] S. Frlund and J. Koistinen. QML: A language for Quality of Service specification. Technical Report HPL-98-10, Software Technology Laboratory, Hewlett-Packard Company, 1998. [174] S. Frlund and J. Koistinen. Quality of Service aware distributed object systems. Technical Report HPL-98-142, Software Technology Laboratory, Hewlett-Packard Company, 1998. [175] Svend Frlund and Jari Koistinen. Quality of Service specification in distributed object systems (design?). Distributed Systems Engineering Journal, 5(4):179202, December 1998. [176] J. Feigenbaum and P. Lee. Trust management and proof-carrying code in secure mobile code applications: Position paper. In DARPA Workshop on Foundations for Secure Mobile Code, 1997. [177] Barbara L. Fox and Brian A. LaMacchia. Encouraging a recognition of Fair Use in DRM systems. Communications of the ACM, 46(4):6163, 2003. [178] F. Fluckiger. Understanding Networked Multimedia. Prentice Hall, 1995.

349/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[179] A. Fvrier, E. Najm, and J.-B. Stefani. Contracts for ODP. In Proceedings of Fourth AMAST Workshop on Real-Time Systems, Concurrent, and Distributed Software, Towards a mathematical transformation-based development, Ciudad de Mallorca, Mallorca, Spain, 1997. [180] N. M. Frank and L. Peters. Building trust: the importance of both task and social precursors. In International Conference on Engineering and Technology Management: Pioneering New Technologies - Management Issues and Challenges in the Third Millennium, 1998. [181] X. Franch. Systematic formulation of non-functional characteristics of software. In Proceedings of 3rd International Conference on Requirements Engineering (ICRE), Colorado Springs, USA, 1998. [182] B. S. Firozabadi and M. Sergot. Power and permission in security systems. In th International Workshop, Security Protocols, Lecture Notes in Computer Science, Cambridge, UK, 1999. Springer-Verlag. [183] D. Ferraiolo, R. Sandhu, S. Gavrila, D. R. Kuhn, and R. Chandramouli. A proposed standard for role based access control. ACM Transactions on Information and System Security, 4(3), August 2001. [184] P. Fletcher and M. Waterhouse, editors. Web Services: Business Strategies and Architectures. Expert Press Ltd., 2002. [185] P. Galvin. Are you certifiable?, 2000. [186] J. Galbreath. Twenty-first century management rules: the management of relationships as intangible assets. Management Decision, 40(2):116126, 2002. [187] V. Goebel, I. Eini, K. Lund, and T. Plagemann. Design, implementation, and valuation of TOOMM: A temporal object-oriented multimedia data model. In Proceedings of 8th IFIP 2.6 Working Conference on Database Semantics (DS-8), pages 145168, Rotorua, New Zealand, 1999. [188] E. Gerck. Certification: Extrinsic, intrinsic and combined, 1997. [189] E. Gerck and the M.-C. Group. Overview of certification systems: X.509, CA, PGP and SKIP. Meta-Certificate Group, 1997. [190] S.M. Goldstein, R. Johnston, J. Duffy, and J. Rao. 
The service concept: the missing link in service design research? Journal of Operations Management, 20(2):121134, April 2002. [191] Valrie Gay, Peter Leydekkers, and Robert Huis int Veld. Specification of multiparty audio and video interaction based on the Reference Model of Open Distributed Processing. Computer Networks and ISDN Systems, 27(8):12471262, 1995. [192] M. R. Genesereth and J. N. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann Publishers Inc., California, USA, 1987. [193] R. Gulati, N. Nohria, and A. Zaheer. Strategic networks. Strategic Management Journal, 21:203216, 2000. [194] J. Gordijn. Value based requirements engineering: Exploring innovative e-commerce ideas. PhD thesis, Vrije Universiteit Amsterdam, 2002. [195] Seffen Gbel, Cristoph Pohl, Simone Rttger, and Steffen Zschaler. The comquad component model: enabling dynamic selection of implementations by weaving non-functional aspects. In Third International Conference on Aspect Oriented Software Development, 2004. [196] L. Gong and X. Qian. The complexity and composability of secure interoperation. In IEEE Symposium on Security and Privacy, Oakland, California, USA, 1994. 350/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[197] R.M. Grant. The resource based theory of competitive advantage. implications for strategy formulation. California Management Review, 33(3):114135, 1991. [198] M. Granovetter. Business groups. In N. J. Smelser and R. Swedberg, editors, The Handbook of Economic Sociology, pages 453475. Princeton University Press, Princeton, N.J., 1994. [199] C. Grnroos. From marketing mix to relationship marketing: Towards a paradigm shift in marketing. Management Decision, 32(2):420, 1994. [200] Tyrone Grandison and Morris Sloman. A survey of trust in internet applications. IEEE Communications Surveys & Tutorials, 3(4), 2000. [201] A. Gokhale, D. C. Schmidt, B. Natarajan, and N. Wang. Applying model-integrated computing to component middleware and enterprise applications. Communications of the ACM, 45(10), October 2002. [202] D. Gross and E. Yu. From non-functional requirements to design through patterns. In Proceedings of Sixth International Workshop on Requirements Engineering: Foundation for Software Quality, Stockholm, Sweden, 2000. [203] A. Herzberg et al. Access control meets public key infrastructure, or: Assigning roles to strangers. In IEEE Symposium on Security and Privacy, 2000. [204] A. Hafid. Hierarchical negotiation for distributed multimedia applications in a multi-domain environment. In Proceedings of Second Workshop on Protocols for Multimedia Systems, pages 397409, Salzburg, Austria, 1995. [205] J. Hagel. The CEO and IT relationship: Harnessing the value of web services technology. Internetworld, April 2003. [206] R. Hauck. Architecture for an automated management instrumentation of component based applications. In proceedings of the 12th International Workshop on Distributed Systems: Operations and Management (DSOM), Nancy, France, 2001. [207] B. Haverkort. Performance of Computer Communication Systems - A Model-Based Approach. Wiley UK, 1998. [208] A. Hafid and G. v. Bochmann. Quality of Service adaptation in distributed multimedia applications. 
Multimedia Systems, 6(5):299315, 1998. [209] Franz J. Hauck, Ulrich Becker, Martin Geier, Erich Meier, Uwe Rastofer, and Martin Steckermeier. The AspectIX approach to Quality of Service integration into CORBA. Technical Report TR-I4-99-09, Friedrich-Alexander-University Erlangen-Nrnberg, Germany, 1999. [210] A. Hafid, G. v. Bochmann, and B. Kercherve. A Quality of Service negotiation procedure for distributed multimedia presentational applications. In Proceedings of 5th IEEE International Symposium on High Performance Distributed Computing, pages 330339, 1996. [211] B Hartman, J Flinn, D, B. Konstantin, and S. Kawamoto. Mastering Web Services Security. Wiley Technology, Indianapolis, 2002. [212] H. Hermanns, U. Herzog, and J.-P. Katoen. Process algebra for performance evaluation. Theoretical Computer Science, 274(12):4387, March 2002. [213] E. Hindin. Say what? QoS in English. Network World, August 1998. [214] C. P. Holland and A. G. Lockett. Business trust and the formation of virtual organizations. In st Annual Hawaii International Conference on System Sciences, Hawaii, 1998. 351/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[215] T. Hoffman. Roi for web services remains elusive. Computerworld, oct 2002. [216] M. B. Holbrook. Consumer Value: A Framework for Analysis and Research. Routledge, New York, NY, 1999. [217] P. Hoschka. Synchronized Multimedia Integration Language (SMIL) 1.0 Specification. W3C, 1998. [218] S. J. Harrington and C. P. Ruppel. Telecommuting: A test of trust, competing values and relative advantage. IEEE Transactions on Professional Communication, 42(4):223239, 1999. [219] J. L. Heskett, W. E. Sasser, and L. A. Schlesinger Jr. The Service Profit Chain. The Free Press, New York, NY, 1997. [220] G. Huston. RFC 2990: Next Steps for IP QoS Architecture. IETF, November 2000. [221] M. Hitchens and Vijay Varadharajan. Tower: A language for role based access control. In Policies for Distributed Systems and Networks: International Workshop (POLICY01), volume 1995 of Lecture Notes in Computer Science, Bristol, UK, January 2001. Springer-Verlag. [222] Minwell Huang, Chunlei Wang, and Lufeng Zhang. Toward a reusable and generic security aspect library. In AOSD Technology for Application-level Security (AOSDSEC), Aspect Oriented Software Development Workshop, 2004. [223] Federation of identities in a web services world. IBM. [224] IBM. Ibm trust establishment policy language. [225] IBM. Web Service Level Agreements (WSLA) Project: SLA Compliance Monitoring for eBusiness on demand. [226] IEEE Computer Society. Software Quality Metrics Methodology. IEEE Std. 1061-1992, 1992. [227] IEEE Computer Society. IEEE Std 1471-2000: IEEE Recommended Practice for Architectural Description of Software-Intensive Systems, October 2000. [228] IETF. RFC 1510, The Kerberos Network Authentication Service (V5), September 1993 (revised 2001). [229] IETF. RFC 3060 - Policy Core Information Model Version 1 Specification, February 2001. Extended by RFC 3460, January 2003. [230] IETF. RFC 3644: Policy Quality of Service (QoS) Information Model, November 2003. [231] M.-E. Iacob and H. Jonkers. 
Quantitative Analysis of Enterprise Architectures. Telematica Instituut, Enschede, the Netherlands, March 2004. ArchiMate deliverable D3.5.1b, version 2.0. [232] M.-E. Iacob and H. Jonkers. Quantitative analysis of enterprise architectures, 2004. submitted for publication. [233] Implementing data encryption. [234] 2004. http://www.intertrust.com/. [235] IPSJ. MPEG-21, 2004. Retrieved Sept. 2004, from http://www.itscj.ipsj.or.jp/sc29/29w42911.htm#MPEG-21. [236] ISO. ISO 8402: Quality Vocabulary, 1986. [237] ISO. ISO 9000-3: Guidelines for the application of ISO 9001 to the development, supply and maintenance of software, 1991. 352/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[238] ISO/IEC IS 10746-2, Information Technology - Open Distributed Processing - Reference Model: Foundations, 1996. [239] ISO/IEC IS 10746-3, Information Technology - Open Distributed Processing - Reference Model: Architecture, 1996. [240] ISO. ISO/IEC 15504: Information Technology - Software Process Assessment, 1998. [241] ISO/IEC IS 10746-1, Information Technology - Open Distributed Processing - Reference Model: Overview, 1998. [242] ISO/IEC IS 10746-4, Information Technology - Open Distributed Processing - Reference Model: Architectural Semantics, 1998. [243] ISO/IEC 9126 Information Technology - Software product quality - Part 1: Quality Model, 1999. [244] ISO/IEC 9126 Information Technology - Software product quality - Part 2: External Metrics, 1999. [245] ISO/IEC 9126 Information Technology - Software product quality - Part 3: Internal Metrics, 1999. [246] ISO/IEC 9126 Information Technology - Software product quality - Part 4: Quality in Use Metrics, 1999. [247] ISO/IEC CD 15935, Open Distributed Processing - Reference Model: Quality of Service, 1999. [248] ISO/IEC IS 14753, Information Technology - Open Distributed Processing - Interface References and Binding, 1999. [249] ISO/IEC IS 15408, Information technology Security techniques Evaluation criteria for IT security Part 1: Introduction and general model, 1999. [250] ISO/IEC IS 15408, Information technology Security techniques Evaluation criteria for IT security Part 2: Security functional requirements, 1999. [251] ISO/IEC IS 15408, Information technology Security techniques Evaluation criteria for IT security Part 3: Security Assurance Requirements, 1999. [252] ISO/IEC IS 15414, Information Technology - Open Distributed Processing - Enterprise Language, 2003. [253] ISO. ISO 21000-5: Information technology Multimedia framework (MPEG-21) Part 5: Rights Expression Language, 2004. [254] ISO. ISO 21000-6: Information technology Multimedia framework (MPEG-21) Part 5: Rights Data Dictionary, 2004. 
[255] ISO/IEC 19793 Information Technology - Open Distributed Processing - Use of UML for ODP system specifications. Anaheim, February 2004. working draft. [256] ITU-T. E.800: Terms and Definitions related to Quality of Service and Network Performance including Dependability, 1994. [257] ITU-T. Recommendation X.641 Information Technology - Quality of Service: Framework, 1997. [258] ITU-T. Recommendation X.642 Information Technology - Quality of Service: Guide to methods and mechanisms, 1997. 353/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[259] C. S. Iacono and S. Weisband. Developing trust in virtual teams. In th Hawaii International Conference on System Sciences, 1997. [260] S. L. Jarvenpaa et al. Consumer trust in an internet store: A cross-cultural validation. Journal of Computer-Meditated Communication, 5(2), 1999. [261] R. Jain. The Art of Computer Systems Performance Analysis. Wiley-Interscience, New York, NY, USA, 1991. [262] H. Jonkers, P. Boekhoudt, M. Rougoor, and E. Wierstra. Completion time and critical path analysis for the optimization of business process models. In M. S. Obaidat, A. Nisanci, and B. Sadoun, editors, Proceedings of the 1999 Summer Computer Simulation Conference, pages 222 229, Chicago, IL, USA, July 1999. [263] H Jonkers and H M Franken. Quantitative modelling and analysis of business processes. In A Bruzzone and E Kerckhoffs, editors, Simulation in Industry: Proceedings 8th European Simulation Symposium, volume 1, pages 175179, Genoa, Italy, October 1996. [264] Andrew J. I. Jones and Babak Sadighi Firozabadi. On the characterisation of a trusting agent aspects of a formal approach. In Workshop on Deception, Trust and Fraud in Agent Societies, pages 157168, 2000. [265] C. Jones, W. S. Hesterly, et al. A general theory of network governance: Exchange conditions and social mechanisms. The Academy of Management Review, 22(4):911945, 1997. [266] A Jonason and B. Holma. Pricing for profits on the mobile internet. In IEEE IEMC, pages 73 78, 2002. [267] A. Jsang and S. J. Knapskog. A metric for trusted systems. In st National Security Conference, 1998. [268] H. Jonkers, M.M. Lankhorst, R. van Buuren, S. Hoppenbrouwers, M. Bonsangue, and L. van der Torre. Concepts for modelling enterprise architectures. International Journal of Cooperative Information Systems (IJCIS), special issue on Architecture in IT, 13(3):257287, September 2004. [269] Jingwen Jin and Klara Nahrstedt. Classification and comparison of QoS specification languages for distributed multimedia applications. 
Technical report, UIUC CS, 2002. [270] S. Jones. TRUST-EC: Requirements for trust and confidence in e-commerce. Technical report, European Commission, Joint Research Centre, 1999. [271] A. Jsang. The right type of trust for distributed systems. In ACM New Security Paradigms Workshop, 1996. [272] A. Jsang. Artificial reasoning with subjective logic. In nd Australian Workshop on Commonsense Reasoning, 1997. [273] A. Jsang. Prospectives for modeling trust in information security. In Australasian Conference on Information Security and Privacy. Springer-Verlag, 1997. [274] A. Jsang. A subjective metric of authentication. In th European Symposium on Research in Computer Security (ESORICS98). Springer-Verlag, 1998. [275] A. Jsang. Trust-based decision making for electronic transactions. In The 4th Nordic Workshop on Secure IT Systems (NORDSEC99), Stockholm, Sweden, 1999. [276] S. S. Jajodia, P. Subrahmanian, and V. Subrahmanian. A logical language for expressing authorizations. Security and Information Privacy, 1997. 354/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[277] H. Jonkers and M van Swelm. Queueing analysis to support distributed system design. In Proc. 1999 Symposium on Performance Evaluation of Computer and Telecommunication Systems, Chicago, IL, USA, July 1999. [278] T. Krauskopf et al. PICS label distribution label syntax and communication protocols version 1.1. [279] P. Khkipuro. UML-based performance modelling framework for object-oriented distributed systems. In UML99 - The unified modelling language - beyond the standard, pages 356371, 1999. [280] R. Kazman, L. Bass, G. Abowd, and M. Webb. Saam: A method for analyzing the properties of software architectures. In Proceedings of ICSE-16, pages 8190, Sorento, Italy, 1994. [281] A. Kini and J. Choobineh. Trust in electronic commerce: Definition and theoretical considerations. In st Annual Hawaii International Conference on System Sciences, Hawaii, 1998. [282] K. Konrad, G. Fuchs, and J. Bathel. Trust and electronic commerce - more than a technical problem. In The 18th Symposium on Reliable Distributed Systems, Lausanne, Switzerland, 1999. [283] S. P. Ketchpel and H. Garcia-Molina. Making trust explicit in distributed commerce transactions. In th International Conference on Distributed Computing Systems, 1996. [284] L. Kutvonen, J. Haataja, E. Silfver, and M. Vhaho. Pilarcos architecture. Technical Report C-2001-10, Department of Computer Science, University of Helsinki, March 2001. [285] H. L. Kesterson II. Digital signatures - whom do you trust? In Aerospace Conference, 1997. [286] S. King. Threats and solutions to web services security. Network Security, September 2003. [287] D. H. Kitson. An emerging international standard for software process assessment. In Proceedings of Third International Software Engineering Standards Symposium and Forum, Walnut Creek, CA, USA, 1997. [288] Y. Krishnamurthy, V. Kachroo, D. Karr, C. Rodrigues, J. Loyall, R. Schantz, and D. Schmidt. 
Integration of QoS enabled distributed object computing middleware for developing nextgeneration distributed applications. ACM SIGPLAN Notices, 36(8), August 2001. [289] G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. V. Lopes, J.-M. Loingtier, and J. Irwin. Aspect-Oriented Programming. In Proceedings of The European Conference on Object-Oriented Programming (ECOOP97), Finland, 1997. [290] R. Koenen, J. Lacy, M. MacKay, and S. Mitchell. The long march to interoperable digital rights management, January 2004. http://www.intertrust.com/main/research/papers.html. [291] J. Krogstie, O. I. Lindland, and G. Sindre. Defining quality aspects for conceptual models. In Proceedings of IFIP8.1 working conference on Information Systems Concepts (ISCO3); Towards a consolidation of views, pages 216231, Marburg, Germany, 1995. [292] A. Kakas and R. Miller. A simple declarative language for describing narratives with actions. Journal of Logic Programming, (Special Issue on Reasoning About Actions), 1997. [293] Kazunori Kawauchi and Hidehiko Masuhara. Dataflow pointcut for integrity concerns. In AOSD Technology for Application-level Security (AOSDSEC), Aspect Oriented Software Development Workshop, 2004. [294] J. Koistinen. Dimensions for reliability contracts in distributed object systems. Technical Report HPL-97-119, Software Technology Laboratory, Hewlett-Packard Company, 1997.

355/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[295] R. Khare and A. Rifkin. Trust management on the World Wide Web. Peer-reviewed Journal on the Internet, 3(6), 1998. [296] J. Koistinen and A. Seetharaman. Worth-based multi-category Quality of Service negotiation in distributed object infrastructures. In Proceedings of Second International Enterprise Distributed Object Computing Workshop (EDOC 98), pages 239249, San Diego, CA, USA, 1998. [297] Lalana Kagal, Jeffrey Undercoffer, Anupam Joshi, and Tim Finin. Vigil : Providing trust for enhanced security in pervasive systems, October 2001. [298] L. Kutvonen. Automated management of inter-organisational applications. In Proceedings of 6th International Workshop on Enterprise Distributed Object Computing (EDOC02), page 27, Lausanne, Switzerland, September 2002. [299] P. Kothandaraman and D. Wilson. The future of competition. value creating networks. Industrial Marketing Management, 30:379389, 2001. [300] L. Lamport. The temporal logic of actions. ACM Transactions on Programming Languages and Systems, 16(3):872923, May 1994. [301] Craig Larman. The value of web services, 2001. seminar slides. [302] A. Lakas, G. Blair, and A. Chetwynd. A formal approach to the design of QoS parameters in multimedia systems. In Proceedings of Fourth International IFIP Workshop on Quality of Service IWQoS96, Paris, France, 1996. [303] Robin Laney, Janet Van der Linden, and Pete Thomas. Evolution of aspects for legacy system security concerns. In AOSD Technology for Application-level Security (AOSDSEC), Aspect Oriented Software Development Workshop, 2004. [304] S. Leue. Specifying real-time requirements for SDL specifications - a temporal logic-based approach. In Proceedings of Fifteenth International Symposium on Protocol Specification, Testing, and Verification (PSTV95), Warsaw, Poland, 1995. [305] P. Leydekkers and V. Gay. ODP view on Quality of Service for open distributed multimedia environments. 
In Proceedings of 4th International IFIP Workshop on QoS (IWQoS 96), Paris, France, 1996. [306] P. F. Linington. An ODP approach to the development of large middleware systems. In Proceedings of DIAS99, June 1999. [307] P. F. Linington. RISCSIM - A Simulator for Object-based Systems. In David Al-Dabass and Russell Cheng, editors, Proceedings UKSIM99 Conference of the UK Simulation Society, pages 141147. UK Simulation Society, April 1999. [308] P. F. Linington. A policy-based model-driven security framework. In Middleware2003 Companion, Workshop Proceedings, 1st International Workshop on Model-Driven Approaches to Middleware Applications Development, pages 273276, Rio de Janerio, June 2003. Pontificia Universidade Catlica do Rio de Janeiro. [309] Peter F. Linington. Model driven development and non-functional aspects. In Proceedings of Workshop on Model Driven Development (WMDD2004) in ECOOP 2004, Oslo, Norway, June 2004. [310] Peter F. Linington. The role of contracts in establishing interoperability of enterprise systems. In Proceedings of Workshop on Interoperability of Enterprise Systems (INTEREST 2004), in ECOOP 2004, Oslo, Norway, June 2004. 356/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[311] C.-H. Lung, A. Jalnapurkar, and A. El-Rayess. Performance-oriented software architecture analysis: An experience report. In Proceedings of First Int. Workshop on Software and Performance, Santa Fe, NM, USA, October 1998. [312] Marc M. Lankhorst, Henk Jonkers, Maarten W. A. Steen, and Hugo W. L. ter Doest. The model-driven enterprise. In Proceedings of Workshop on Interoperability of Enterprise Systems (INTEREST 2004), in ECOOP 2004, Oslo, Norway, June 2004. [313] L. Lymberopoulos, E. Lupu, and M. Sloman. Ponder policy implementation and validation in a CIM and differentiated services framework. In IFIP/IEEE Network Operations and Management Symposium (NOMS 2004), Seoul, Korea, April 2004. [314] P. F. Linington, Z. Milosevic, J. Cole, S. Gibson, S. Kulkarni, and S. Neal. A unified behavioural model and a contract for extended enterprise. Data Knowledge and Engineering Journal, 51(1):529, October 2004. [315] P. F. Linington, Z. Milosevic, and K. Raymond. Policies in communities: Extending the ODP Enterprise Viewpoint. In Proceedings of 2nd International Workshop on Enterprise Distributed Object Computing (EDOC98), San Diego, USA, November 1998. [316] L. Leboucher and E. Najm. A framework for realtime QoS in distributed systems. In IEEE Workshop on Middleware for Distributed Real-Time Systems and Service, San Francisco, 1997. IEEE Computer Society Press. [317] P. F. Linington and S. Neal. Using policies in the checking of business to business contracts. In Fourth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 207218, Lake Como, Italy, June 2003. IEEE Computer Society. [318] M. Lycett and R. Paul. Component-based development: Dealing with non-functional aspects of architecture. In WCOP workshop, ECOOP 2000, 2000. [319] D. Landes and R. Studer. The treatment of non-functional requirements in MIKE. In Proceedings of 5th ESEC, volume 989 of Lecture Notes in Computer Science, Barcelona, Catalunya, Spain, 1995. Springer-Verlag. 
[320] J. P. Loyall, R. E. Schantz, J. A. Zinky, and D. E. Bakken. Specifying and measuring Quality of service in distributed object systems. In First International Symposium on Object-Oriented RealTime Distributed Computing (ISORC 98), pages 4352, Kyoto, Japan, 1998. [321] L. S. Lundby. Specification and partial implementation of a QoS framework. Masters thesis, Department of Informatics, University of Oslo, 2001. [322] F. Li and J. Whalley. Deconstruction of the telecommunication industry: from value chains to value networks. Telecommunications Policy, 26:451472, 2002. [323] A. Moriera, J. Arajo, and I. Brito. Crosscutting quality attributes for requirements engineering. In st International Conference on Aspect-Oriented Software Development, 2002. [324] D. W. Manchala. Trust metrics, models and protocols for electronic commerce transactions. In th International Conference on Distributed Computing Systems, 1998. [325] Z. Milosevic, D. Arnold, and L. OConnor. Inter-enterprise contract architecture for Open Distributed Systems: Security requirements. In Proceedings of WET ICE96 Workshop on Enterprise Security, Stanford, June 1996. [326] S. P. Marsh. Formalising Trust as a Computational Concept. PhD thesis, Computing Science and Mathematics, University of Stirling, 1994. 357/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[327] W. Marshman. Web Services within the corporation. Hewlett-Packard Company, May 2002. [328] Marco Ajmone Marsan, editor. Quality of Service in Multiservice IP Networks: Second International Workshop, QoS-IP, volume 2601 of Lecture Notes in Computer Science, Berlin, 2003. Springer-Verlag. [329] F. L. Mayer. A brief comparison of two different environmental guidelines for determining levels of trust (computer security). In th Annual Computer Security Applications Conference, 1990. [330] P. Monge and N. Contractor. Theories of Communication Networks. Oxford University Press, Oxford, 2003. [331] J. Mylopoulos, L. Chung, and B. A. Nixon. Representing and using non-functional requirements: A process-oriented approach. IEEE Transactions on Software Engineering, 18(6):483497, 1992. [332] Daniel A. Menasce. QoS issues in web services. Internet Computing, pages 7275, November-December 2002. [333] IBM & Microsoft. Security in a web services world: A proposed architecture and roadmap. version 1.0, April 2002. [334] Microsoft. Improving web application security: Threats and countermeasures, 2004. in MSDN Library. [335] Microsoft. Windows Rights Management Services, 2004. http://www.microsoft.com/rms/. [336] Next generation secure computing base, 2004. http://www.microsoft.com/ngscb/. [337] Z. Milosevic, A. Jsang, T. Dimitrakos, and M. A. Patton. Discretionary enforcement of electronic contracts. In Proceedings of 6th International Workshop on Enterprise Distributed Object Computing (EDOC02), Lausanne, Switzerland, September 2002. [338] R. Mori and M. Kawahara. Superdistribution: The concept and the architecture. Transaction of the IEICE, E 73(7):11331146, July 1990. [339] A. C. Myers and B. Liskov. Protecting privacy using the decentralized label model. ACM Transactions on Software Engineering Methodology, 2000. [340] I. MacInnes, J. Moneta, J. Carbarato, and D. Sami. Business models and the mobile games value chain. In BITA/B4U symposium. 
Business Models for Innovative Mobile Services, Delft, November 2002. [341] J. Morency. Application service providers and e-business, 1999. [342] J. Miller, P. Resnick, and D. Singer. PICS rating services and rating systems (and their machine readable descriptions) version 1.1. [343] D. Miller and J. Shamise. The resource based view of the firm in two environments. the hollywood firm studios from 1836 to 1964. Academy of Management Journal, 39:519543, 1996. [344] ] R. Mori and S. Tashiro. The concept of software service system (sss). Transaction of the IEICE, J70-D.1:7081, January 1987. [345] C Maitland, E. van de Kar, U. When de Montalvo, and H. Bouwman. Mobile information and entertainment services: Business models and service networks. In M-Business 2003, Vienna, June 2003.

358/366

INTEROP State of the Art on Interoperability Architectures Work Package 9 - D9.1

[346] S. Mysore. Securing Web Services - Concepts, Standards, and Requirements. Sun Microsystems Inc., October 2003.
[347] M. Nix. Entering the application service provider market.
[348] R. Neisse, E. Della Vecchia Pereira, L. Zambenedetti Granville, M. Janilce Bosquiroli Almeida, and L. Margarida Rockenbach Tarouco. A hierarchical policy-based architecture for integrated management of grids and networks. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 103-106, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[349] R. Normann and R. Ramirez. Designing Interactive Strategy - From Value Chain to Value Constellation. John Wiley, Chichester, UK, 1994.
[350] Klara Nahrstedt and Jonathan Smith. The QoS broker. IEEE Multimedia Magazine, 2(1):53-67, 1995.
[351] E. Najm and J.-B. Stefani. A formal semantics for the ODP Computational Model. Computer Networks and ISDN Systems, 27:1305-1329, 1995.
[352] E. Najm and J.-B. Stefani. Computational Models for Open Distributed Systems. In Proceedings of Second IFIP Conference on Formal Methods for Open Object-based Distributed Systems - FMOODS'97, Canterbury, UK, 1997.
[353] OASIS. Security Assertion Markup Language (SAML) v1.1.
[354] OASIS. Web Services Security: SOAP Message Security, Working Draft 17.
[355] Object Management Group. Control and Management of Audio/Video Streams, v1.0, 1998.
[356] Object Management Group. Request For Proposal: UML Profile for Modeling Quality of Service and Fault Tolerant Characteristics and Mechanisms, 2002. OMG Document: ad/2002-01-07.
[357] Object Management Group. UML: Profile for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms Request for Proposal, 2002. OMG Document: ad/2002-01-07.
[358] Object Management Group. UML: Profile for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms - Revised submission, August 2003.
[359] Object Management Group. UML Profile for Schedulability, Performance, and Time Specification, September 2003.
[360] P. Olla and N. Patel. A value chain model for mobile data service providers. Telecommunications Policy, 26(9):551-571, 2002.
[361] M. O'Neill. Is SSL enough protection for web services? EAI Journal, December 2002.
[362] M. O'Neill. Web Services Security. McGraw-Hill, US, 2003.
[363] Ralf Oppliger. Certified mail: The next challenge for secure messaging. Communications of the ACM, 47(8):75-80, August 2004.
[364] J. J. Ordille. When agents roam, who can you trust? In 1st Annual Conference on Emerging Technologies and Applications in Communications, 1996.
[365] J. Ostroff. Formal methods for the specification and design of real-time safety critical systems. Journal of Systems and Software, 18(1), 1992.


[366] R. H. Pierce et al. Capturing and verifying performance requirements for hard real-time systems. In Proceedings of International Conference on Reliable Software Technologies, volume 1251 of Lecture Notes in Computer Science, London, England, 1997. Springer-Verlag.
[367] M. C. Paulk. Analyzing the conceptual relationship between ISO/IEC 15504 (software process assessment) and the Capability Maturity Model for software. In Proceedings of The Ninth International Conference on Software Quality, pages 293-303, Cambridge, MA, USA, 1999.
[368] T. C. Paulsen. Runtime representation of QoS specifications and their semantics. Master's thesis, Department of Informatics, University of Oslo, 2001.
[369] M. C. Paulk, B. Curtis, M. B. Chrissis, and C. V. Weber. Capability Maturity Model, version 1.1. IEEE Software, 10(4):18-27, 1993.
[370] T. Plagemann, F. Eliassen, V. Goebel, T. Kristensen, and H. O. Rafaelsen. Adaptive QoS aware binding of persistent multimedia objects. In Proceedings of International Symposium on Distributed Objects and Applications (DOA'99), Edinburgh, Scotland, 1999.
[371] T. Plagemann, F. Eliassen, B. Hafskjold, T. Kristensen, R. H. Macdonald, and H. O. Rafaelsen. Flexible and extensible QoS management for adaptable middleware. In Proceedings of International Workshop on Protocols for Multimedia Systems (PROMS 2000), Cracow, Poland, 2000.
[372] M. A. Patton and A. Jøsang. Technologies for trust in electronic commerce. Electronic Commerce Research Journal, 4(1-2):9-21, January 2004.
[373] M. E. Porter. Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, New York, 1985.
[374] D. Povey. Developing electronic trust policies using a risk management model, 1999.
[375] D. Povey. Trust management, 1999.
[376] 2004. http://www.indicare.org/.
[377] QuA, Quality of Service aware component architecture. An ICT2010-project sponsored by the Norwegian Research Council.
[378] S. Race. What is the value of web services? EACommunity.com, 2003.
[379] J. Ramachandran. Designing Security Architecture Solutions. Wiley, 2002.
[380] P. V. Rangan. An axiomatic basis of trust in distributed systems. In Symposium on Security and Privacy, Washington, DC, 1988. IEEE Computer Society Press.
[381] K. Rothermel, G. Dermler, and W. Fiederer. QoS negotiation and resource reservation for distributed multimedia applications. In Proceedings of IEEE International Conference on Multimedia Computing and Systems '97, pages 319-326, 1997.
[382] T. Renkema. Investeren in de informatie-infrastructuur. Richtlijnen voor besluitvorming in organisaties. Deventer: Kluwer Bedrijfsinformatie, 1996.
[383] RightsCom. Rights Data Dictionary (RDD), 2004. Retrieved Sept 2004, from http://www.rightscom.com/default.aspx?tabid=1172.
[384] R. L. Rivest. Can we eliminate certificate revocation lists? In Financial Cryptography, 1998.
[385] Vincent Rosener, Thibaud Latour, and Eric Dubois. A model-based ontology of the software interoperability problem: Preliminary results. In INTEROP Enterprise Modelling and Ontologies for Interoperability Workshop, CAiSE'04, Riga, Latvia, June 2004.


[386] J. Rajahalme, T. Mota, F. Steegmans, P. F. Hansen, and F. Fonseca. Quality of service negotiation in TINA. In Proceedings of Global Convergence of Telecommunications and Distributed Object Computing, TINA'97, pages 278-286, 1997.
[387] T. Roby. Web services: Universal integration powers seamless, long sought after business services, April 2003.
[388] Bill Rosenblatt, Bill Trippe, and Stephen Mooney. Digital Rights Management: Business and Technology. M&T Books, New York, 2002.
[389] S. Ren, N. Venkatasubramanian, and G. Agha. Formalizing multimedia QoS constraints using actors. In Proceedings of IFIP TC6 WG6.1 International Workshop on Formal Methods for Open Object-based Distributed Systems (FMOODS'97), pages 139-153, Canterbury, Kent, 1997.
[390] D. Schmidt et al. CoSMIC: An MDA generative tool for distributed real-time and embedded component middleware and applications. In Proceedings of OOPSLA 2002 Workshop on Generative Techniques in the Context of Model Driven Architecture, Seattle, WA, November 2002.
[391] Pamela Samuelson. DRM {and, or, vs.} the Law. Communications of the ACM, 46(4):41-45, 2003.
[392] R. S. Sandhu and P. Samarati. Access control: principle and practice. IEEE Communications Magazine, 32(9):40-48, 1994.
[393] B. Sabata, S. Chatterjee, M. Davis, J. J. Sydir, and T. F. Lawrence. Taxonomy for QoS specifications. In Proceedings of IEEE Computer Society 3rd International Workshop on Object-oriented Real-time Dependable Systems (WORDS'97), Newport Beach, CA, USA, 1997.
[394] B. Schneier. SOAP. Crypto-Gram Newsletter, June 2000.
[395] Arno Schmidmeier. Using AspectJ to eliminate tangling code in EAI projects, 2003.
[396] Markus Schumacher. Security Engineering with Patterns: Origins, Theoretical Model, and New Applications, volume 2754 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 2003.
[397] Burkhard Stiller, Georg Carle, Martin Karsten, and Peter Reichl, editors. Group Communications and Charges - Technology and Business Models: Proceedings of the 5th International Workshop on Networked Group Communications and the 3rd International Workshop on Internet Charging and QoS Technologies, volume 2816 of Lecture Notes in Computer Science, Berlin, 2003. Springer-Verlag.
[398] A. Sheth, J. Cardoso, J. Miller, K. Kochut, and M. Kang. QoS for service-oriented middleware. In Proceedings of the Conference on Systemics, Cybernetics and Informatics, 2002.
[399] Amit Sheth, Jorge Cardoso, John Miller, and Krys Kochut. QoS for service-oriented middleware. In Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI'02), volume 8, pages 528-534, Orlando, FL, July 2002.
[400] Richard Staehli and Frank Eliassen. QuA: A QoS-aware component architecture. Technical Report 2002-13, Simula Research Laboratory, 2002.
[401] Richard Staehli, Frank Eliassen, Jan Øyvind Aagedal, and Gordon Blair. Quality of Service semantics for component-based systems. In Middleware 2003 Companion, Workshop Proceedings, 2nd International Workshop on Reflective and Adaptive Middleware Systems, pages 153-157, Rio de Janeiro, June 2003. Pontifícia Universidade Católica do Rio de Janeiro.


[402] D. Selz. Value Webs. Emerging forms of fluid and flexible organisations. Thinking, organising, communicating and delivering value on the Internet. PhD thesis, St. Gallen, 1999.
[403] B. Spitznagel and D. Garlan. Architecture-based performance analysis. In Proceedings of 1998 Conference on Software Engineering and Knowledge Engineering, San Francisco Bay, CA, USA, June 1998.
[404] Viren Shah. Using aspect-oriented programming to address security concerns. In International Symposium on Software Reliability Engineering, November 2002.
[405] M. Sitaraman. On tight performance specification of object-oriented components. In Proceedings of 3rd International Conference on Software Reuse (ICSR). IEEE Computer Society Press, 1994.
[406] D. C. Schmidt, D. L. Levine, and S. Mungee. The design and performance of the TAO real-time object request broker. Computer Communications, 21(14):46, 1998.
[407] Jiawen Su and Daniel W. Manchala. Trust vs. threats: Recovery and survival in electronic commerce. In 19th IEEE International Conference on Distributed Computing Systems, 1999.
[408] C. U. Smith. Performance Engineering of Software Systems. Addison-Wesley, 1990.
[409] R. E. Smith. Authentication: From Passwords to Public Keys. Addison-Wesley, 2002.
[410] R. Sturm, W. Morris, and M. Jander. Foundations of Service Level Management. SAMS, 2000.
[411] R. Sturm, W. Morris, and M. Jander. Sample internal SLA (short form), 2000.
[412] J. Smolnicki. How XML and web services will change your business, 2003.
[413] Eva Söderström. Business value literature summary. University of Skövde, Sweden, 2004.
[414] A. K. Schömig and H. Rau. A Petri net approach for the performance analysis of business processes. Technical Report 116, Lehrstuhl für Informatik III, Universität Würzburg, 1995.
[415] V. Swarup and C. Schmidt. Interoperating between security domains. In ECOOP (European Conference on Object-Oriented Programming) Workshop on Distributed Object Security, Brussels, Belgium, 1998.
[416] Bryan Smith, Kent E. Seamons, and Mike Jones. Responding to policies at runtime in TrustBuilder. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 149-158, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[417] Akhil Sahai, Sharad Singhal, Rajeev Joshi, and Vijay Machiraju. Automated generation of resource configurations through policies. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 107-110, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[418] C. Safran, D. Z. Sands, and D. M. Rind. On-line medical records: A decade of experience. Methods of Information in Medicine, 38:308-312, 1999.
[419] P. Stähler. Geschäftsmodelle in der digitalen Ökonomie. Merkmale, Strategien und Auswirkungen. Josef Eul Verlag, Köln, 2001.
[420] J. B. Stefani. Computational aspects of QoS in an object based distributed architecture. In Proceedings of 3rd Int. Workshop on Responsive Computer Systems, Lincoln, New Hampshire, USA, 1993.


[421] Mark Stefik. DPRL: The Digital Property Rights Language. Technical report, Xerox Palo Alto Research Center, CA, USA, 1996.
[422] M. Stevens. Service-oriented architecture introduction, part 1, April 2002.
[423] C. Sluman, J. Tucker, J. P. LeBlanc, and B. Wood. Quality of Service (QoS). OMG Green Paper, Ver. 0.4a, OMG Doc. ormsc/97-06-04, June 1997.
[424] Java Cryptography Extensions (JCE).
[425] D. Suvée, W. Vanderperren, and V. Jonckers. JAsCo: an aspect-oriented approach tailored for component based software development. In Proceedings of International Conference on Aspect-Oriented Software Development (AOSD), Boston, USA, March 2003.
[426] Mati Shomrat and Amiram Yehudai. Obvious or not?: Regulating architectural decisions using Aspect-Oriented Programming. In AOSD 2002, pages 3-9, 2002.
[427] Deploying web services to integrate the enterprise, 2002. White Paper.
[428] M. R. Thompson, A. Essiari, and S. Mudumbai. Certificate-based authorization policy in PKI. ACM Transactions on Information and System Security (TISSEC), 6(4):566-588, November 2003.
[429] J. Thelin and P. J. Murray. CORBA and web services, 2002.
[430] Anders Toms. Threats, challenges and emerging standards in web services security. Technical Report HS-IKI-TR-04-001, University of Skövde, Department of Computer Science, Sweden, 2004.
[431] Building a foundation of trust in the PC. The Trusted Computing Platform Alliance, 2000.
[432] 2004. Retrieved May 2004, from http://www.trustedcomputing.org/.
[433] D. Tapscott, D. Ticoll, and A. Lowy. Digital Capital: Harnessing the Power of Business Webs. Harvard Business School, Boston (MA), 2000.
[434] Sandeep Uttamchandani, Guillermo A. Alvarez, and Gul Agha. DecisionQoS: an adaptive, self-evolving QoS arbitration module for storage systems. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 67-76, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[435] Peter Utton and Brian Hill. Performance prediction: An industry perspective. In 9th International Conference on Modelling Techniques and Tools, volume 1245 of Lecture Notes in Computer Science, pages 1-5, St Malo, France, 1997. Springer-Verlag.
[436] Andreas Ulrich, Torben Weis, Kurt Geihs, and Christian Becker. DotQoS - QoS extension for .NET remoting. In K. Jeffay, I. Stoica, and K. Wehrle, editors, International Workshop on Quality of Service (IWQoS), pages 363-380, Monterey, CA, June 2003. Springer-Verlag.
[437] V. A. Villagrá, J. I. Asensio, J. E. López-de Vergara, and J. J. Berrocal. An approach to the transparent management instrumentation of distributed applications. In Proceedings of the 8th IEEE/IFIP Network Operations and Management Symposium (NOMS 2002), Florence, Italy, 2002.
[438] J. Vaughan. Q&A: Web services security, 2003.
[439] John Viega, J. T. Bloch, and Pravir Chandra. Applying aspect-oriented programming to security. Cutter IT Journal, 14(2):31-39, February 2001.
[440] B. Verheecke and M. A. Cibrán. AOP for dynamic configuration and management of web services in client-applications. In Proceedings of 2003 International Conference on Web Services Europe (ICWS'03-Europe), Erfurt, Germany, September 2003.


[441] Steven van den Berghe, Filip De Turck, and Piet Demeester. Integrating policy-based management and adaptive traffic engineering for QoS deployment. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 211-214, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[442] W.-Jan van den Heuvel and H. Weigand. Cross-organisational workflow integration using contracts. Decision Support Systems, 33(3):247-265, 2002.
[443] John Viega and David Evans. Separation of concerns for security. In ICSE Workshop on Multidimensional Separation of Concerns in Software Engineering, June 2000.
[444] J. Vesalainen. Making virtual measurable. In NESIS Workshop, 2003.
[445] G. Valetto and G. Kaiser. Using process technology to control and coordinate software adaptation. In ICSE 2003, Portland, Oregon, USA, May 2003.
[446] A. Vogel, B. Kerhervé, G. v. Bochmann, and J. Gecsei. Distributed multimedia and QoS - a survey. IEEE Multimedia, 2(2):10-19, 1995.
[447] John Viega and Gary McGraw. Building Secure Software. Addison-Wesley, 2001.
[448] Lea Viljanen, Sini Ruohomaa, and Lea Kutvonen. The TuBE approach for trust management in collaborative enterprise systems. In Trust conference (to be published), November 2004.
[449] R. Vanegas, J. A. Zinky, J. P. Loyall, R. E. Schantz, and D. E. Bakken. QuO's runtime support for Quality of Service in distributed objects. In Proceedings of IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing (Middleware '98), pages 207-222, Lake District, Lancaster, UK, 1998.
[450] W3C. WS-Trust Web Services Trust Language (WS-Trust), Version 1.0, December 2002.
[451] W3C. Web Services Reliability (WS-Reliability version 1.0), January 2003. http://sunonedev.sun.com/platform/technologies/wsreliability.v1.0.pdf.
[452] Zheng Wang. Internet QoS: Architectures and Mechanisms. Morgan Kaufmann, 2000.
[453] C. Wang, E. F. Ecklund, V. H. Goebel, and T. Plagemann. Design and implementation of a LoD system for multimedia-supported learning at the Medical Faculty. In Proceedings of ED-MEDIA 2001 World Conference on Educational Multimedia, Hypermedia and Telecommunications, Tampere, Finland, 2001.
[454] D. G. Waddington and D. Hutchison. A general model for QoS adaptation. In Proceedings of Sixth International Workshop on Quality of Service (IWQoS '98), pages 275-277, 1998.
[455] H. B. Wang, S. Jha, P. D. McDaniel, and M. Livny. Security policy reconciliation in distributed computing environments. In Fifth IEEE International Workshop on Policies for Distributed Systems and Networks, pages 137-146, Yorktown Heights, New York, USA, June 2004. IEEE Computer Society.
[456] G. Waters, P. Linington, D. Akehurst, P. Utton, and G. Martin. Permabase: predicting the performance of distributed systems at the design stage. IEE Proceedings - Software, 148(4):113-121, August 2001.
[457] A. G. Waters, P. F. Linington, D. Akehurst, and A. Symes. Communications software performance prediction. In Performance Engineering of Computer and Telecommunication Systems, pages 38/1-38/9, Ilkley, UK, 1997.
[458] S. Wong. Success with web services. EAI Journal, pages 27-29, February 2002.


[459] C. M. Woodside. A three view model of performance engineering of concurrent software. IEEE Transactions on Software Engineering, 21(9):754-767, 1995.
[460] R. Wigand, A. Picot, and R. Reichwald. Information, Organisation and Management. John Wiley, New York, 1997.
[461] L. G. Williams and C. U. Smith. Performance evaluation of software architectures. In Proceedings of First Int. Workshop on Software and Performance, pages 164-177, Santa Fe, NM, USA, October 1998.
[462] Ian S. Welch and Robert J. Stroud. Re-engineering security as a crosscutting concern. The Computer Journal, 46(5):578-589, September 2003.
[463] U. G. Wilhelm, S. Staamann, and L. Buttyán. On the problem of trust in mobile agent systems. In IEEE Symposium on Network and Distributed System Security, San Diego, California, 1998.
[464] X.509 certificates and certificate revocation lists (CRLs). Sun Microsystems Inc.
[465] J. Xu and J. Kuusela. Modelling execution architecture of software system using coloured Petri nets. In First International Workshop on Software and Performance, WOSP '98, page 70, Santa Fe, USA, 1998.
[466] J. Yoder and J. Barcalow. Architectural patterns for enabling application security. In The 4th Pattern Languages of Programming Conference, Washington University Technical Report 97-34, 1997.
[467] J. A. Zinky, D. E. Bakken, and R. Schantz. Overview of Quality of Service for distributed objects. In Proceedings of Dual Use Technologies Conference, Utica, NY, USA, 1995.
[468] J. Zinky, D. Bakken, and R. Schantz. Architectural support for Quality of Service for CORBA objects. Theory and Practice of Object Systems, 3(1), 1997.
[469] Z. Zlatev, P. van Eck, R. Wieringa, and J. Gordijn. Goal-oriented RE for e-services. In International Workshop on Service-oriented Requirements Engineering at RE'04, 2004.
[470] X. Zhang, R. Podorozhny, and V. Lesser. Cooperative, multistep negotiation over a multidimensional utility function. Technical Report 2000-02, Multi-Agent Systems Laboratory, Department of Computer Science, University of Massachusetts at Amherst, 2000.
