
Reference Type: Journal Article Record Number: 138 Year: 2001 Title: Designing data marts for data warehouses Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 4 Pages: 452-483 Short Title: Designing data marts for data warehouses ISSN: 1049-331X DOI: 10.1145/384189.384190 Legal Note: 384190 Abstract: Data warehouses are databases devoted to analytical processing. They are used to support decision-making activities in most modern business settings, when complex data sets have to be studied and analyzed. The technology for analytical processing assumes that data are presented in the form of simple data marts, consisting of a well-identified collection of facts and data analysis dimensions (star schema). Despite the wide diffusion of data warehouse technology and concepts, we still miss methods that help and guide the designer in identifying and extracting such data marts out of an enterprisewide information system, covering the upstream, requirement-driven stages of the design process. Many existing methods and tools support the activities related to the efficient implementation of data marts on top of specialized technology (such as the ROLAP or MOLAP data servers). This paper presents a method to support the identification and design of data marts. The method is based on three basic steps. A first top-down step makes it possible to elicit and consolidate user requirements and expectations. This is accomplished by exploiting a goal-oriented process based on the Goal/Question/Metric paradigm developed at the University of Maryland. Ideal data marts are derived from user requirements. The second bottom-up step extracts candidate data marts

Reference Type: Journal Article Record Number: 126 Year: 2002 Title: Temporal abstract classes and virtual temporal specifications for real-time systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 3 Pages: 291-308 Short Title: Temporal abstract classes and virtual temporal specifications for real-time systems ISSN: 1049-331X DOI: 10.1145/567793.567794 Legal Note: 567794 Abstract: The design and development of real-time systems is often a difficult and time-consuming task. System realization has become increasingly difficult due to the proliferation of larger and more complex applications. To offset some of these difficulties, real-time developers have turned to object-oriented methodology. The success of object-oriented concepts in the development of non-real-time programs motivates the relevance of these concepts to achieve similar gains from encapsulation and code reuse in the real-time domain. This article presents an approach of integrating real-time constraint specifications within the constructs of an object-oriented language, affording these constraints a status equivalent to other language elements. This has led to the definition of such novel concepts as temporal abstract classes, virtual temporal constraints, and temporal specification inheritance, which extends inheritance mechanisms to accommodate real-time constraint specifications. These extensions provide real-time developers with the ability to manage and maintain the temporal behavior of a real-time program in a comparable manner to its functional behavior.

Reference Type: Journal Article Record Number: 117 Year: 2003 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 1 Pages: 1-2 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/839268.839269 Legal Note: 839269

Reference Type: Journal Article Record Number: 90 Year: 2005 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 2 Pages: 119-123 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1061254.1061255 Legal Note: 1061255

Reference Type: Journal Article Record Number: 52 Year: 2007

Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 1 Pages: 1-2 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1314493.1314494 Legal Note: 1314494

Reference Type: Journal Article Record Number: 40 Year: 2008 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 3 Pages: 1-2 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1363102.1363103 Legal Note: 1363103

Reference Type: Journal Article Record Number: 155 Year: 2008 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-1 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1348250.1348251 Legal Note: 1348251

Reference Type: Journal Article Record Number: 46 Year: 2008 Title: Introduction to the special section from the ACM international symposium on software testing and analysis (ISSTA 2006) Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-2 Short Title: Introduction to the special section from the ACM international symposium on software testing and analysis (ISSTA 2006) ISSN: 1049-331X DOI: 10.1145/1348250.1348252 Legal Note: 1348252

Reference Type: Journal Article Record Number: 24 Year: 2009 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 3 Pages: 1-2 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1525880.1525881 Legal Note: 1525881

Reference Type: Journal Article Record Number: 11 Year: 2010 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 3 Pages: 1-2 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1656250.1656251 Legal Note: 1656251

Reference Type: Journal Article Record Number: 317 Author: Abhik Roychoudhury, Ankit Goel and B. Sengupta Year: 2012 Title: Symbolic Message Sequence Charts Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 2 Pages: 1-44 Short Title: Symbolic Message Sequence Charts ISSN: 1049-331X

DOI: 10.1145/2089116.2089122 Keywords: Design Design tools and techniques Languages message Sequence charts requirements/specifications Symbolic execution Test generation Unified modeling language verification Abstract: Message sequence charts (MSCs) are a widely used visual formalism for scenario-based specifications of distributed reactive systems. In its conventional usage, an MSC captures an interaction snippet between concrete objects in the system. This leads to voluminous specifications when the system contains several objects that are behaviorally similar. MSCs also play an important role in the model-based testing of reactive systems, where they may be used for specifying (partial) system behaviors, describing test generation criteria, or representing test cases. However, since the number of processes in a MSC specification are fixed, model-based testing of systems consisting of process classes may involve a significant amount of rework: for example, reconstructing system models, or regenerating test cases for systems differing only in the number of processes of various types. In this article we propose a scenario-based notation, called symbolic message sequence charts (SMSCs), for modeling, simulation, and testing of process classes. SMSCs are a lightweight syntactic and semantic extension of MSCs where, unlike MSCs, a SMSC lifeline can denote some/all objects from a collection. Our extensions give us substantially more modeling power. Moreover, we present an abstract execution semantics for (structured collections of) SMSCs. This allows us to validate MSC-based system models capturing interactions between large, or even unbounded, number of objects. Finally, we describe a SMSC-based testing methodology for process classes, which allows generation of test cases for new object configurations with minimal rework. Since our SMSC extensions are only concerned with MSC lifelines, we believe that they can be integrated into existing standards such as UML 2.0. We illustrate our SMSCbased framework for modeling, simulation, and testing of process classes using a weather-update controller case-study from NASA. Reference Type: Journal Article Record Number: 102 Author: T. Akgul and V. J. M. III Year: 2004 Title: Assembly instruction level reverse execution for debugging Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 2 Pages: 149-198 Short Title: Assembly instruction level reverse execution for debugging

ISSN: 1049-331X DOI: 10.1145/1018210.1018211 Legal Note: 1018211 Abstract: Assembly instruction level reverse execution provides a programmer with the ability to return a program to a previous state in its execution history via execution of a "reverse program." The ability to execute a program in reverse is advantageous for shortening software development time. Conventional techniques for recovering a state rely on saving the state into a record before the state is destroyed. However, state-saving causes significant memory and time overheads during forward execution. The proposed method introduces a reverse execution methodology at the assembly instruction level with low memory and time overheads. The methodology generates, from a program, a reverse program by which a destroyed state is almost always regenerated rather than being restored from a record. This significantly reduces state-saving. The methodology has been implemented on a PowerPC processor with a custom-made debugger. As compared to previous work, all of which heavily use state-saving techniques, the experimental results show from 2X to 2206X reduction in runtime memory usage, from 1.5X to 403X reduction in forward execution time overhead and from 1.2X to 2.32X reduction in forward execution time for the tested benchmarks. Furthermore, due to the reduction in memory usage, our method can provide reverse execution in many cases where other methods run out of available memory. However, for cases where there is enough memory available, our method results in 1.16X to 1.89X slow down in reverse execution.

Reference Type: Journal Article Record Number: 321 Author: Alessandro Fantechi, Stefania Gnesi, Alessandro Lapadula, Franco Mazzanti, Rosario Pugliese and F. Tiezzi Year: 2012 Title: A logical verification methodology for service-oriented computing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 3 Pages: 1-46 Short Title: A logical verification methodology for service-oriented computing ISSN: 1049-331X DOI: 10.1145/2211616.2211619 Keywords: Formal methods Model checking Model checking process Semantics Service-oriented computing Syntax Temporal logic Theory verification Web services

Abstract: We introduce a logical verification methodology for checking behavioral properties of service-oriented computing systems. Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed for expressing in an effective way distinctive aspects of services, such as, acceptance of a request, provision of a response, correlation among service requests and responses, etc. Our approach allows service properties to be expressed in such a way that they can be independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulas over service specifications. We demonstrate the feasibility and effectiveness of our methodology by means of the specification and analysis of a case study in the automotive domain. Reference Type: Journal Article Record Number: 160 Author: Ali Ebnenasir and S. S. Kulkarni Year: 2011 Title: Feasibility of Stepwise Design of Multitolerant Programs Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 1 Short Title: Feasibility of Stepwise Design of Multitolerant Programs ISSN: 1049-331X DOI: 10.1145/2063239.2063240 Keywords: Automatic addition of fault tolerance Abstract: The complexity of designing programs that simultaneously tolerate multiple classes of faults, called multitolerant programs, is in part due to the conflicting nature of the fault tolerance requirements that must be met by a multitolerant program when different types of faults occur. To facilitate the design of multitolerant programs, we present sound and (deterministically) complete algorithms for stepwise design of two families of multitolerant programs in a high atomicity program model, where a process can read and write all program variables in an atomic step. We illustrate that if one needs to design failsafe (respectively, nonmasking) fault tolerance for one class of faults and masking fault tolerance for another class of faults, then a multitolerant program can be designed in separate polynomial-time (in the state space of the faultintolerant program) steps regardless of the order of addition. This result has a significant methodological implication in that designers need not be concerned about unknown fault tolerance requirements that may arise due to unanticipated types of faults. Further, we illustrate that if one needs to design failsafe fault tolerance for one class of faults and nonmasking fault tolerance for a different class of faults, then the resulting problem is NP-complete in program state space. This is a counterintuitive result in that designing failsafe and nonmasking fault tolerance for the same class of faults can be done in polynomial time. We also present sufficient conditions for polynomial-time design of failsafe-nonmasking multitolerance. Finally, we demonstrate the stepwise design of multitolerance for a stable disk storage system, a token ring network protocol and a

repetitive agreement protocol that tolerates Byzantine and transient faults. Our automatic approach decreases the design time from days to a few hours for the token ring program that is our largest example with 200 million reachable states and 8 processes.

Reference Type: Journal Article Record Number: 315 Author: Anders Mattsson, Brian Fitzgerald, Björn Lundell and B. Lings Year: 2012 Title: An Approach for Modeling Architectural Design Rules in UML and its Application to Embedded Software Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 2 Pages: 1-29 Short Title: An Approach for Modeling Architectural Design Rules in UML and its Application to Embedded Software ISSN: 1049-331X DOI: 10.1145/2089116.2089120 Keywords: Computer-aided software engineering Design documentation Embedded Software development Human factors Languages Model-driven development Model-driven engineering Object-oriented design Methods Tools Abstract: Current techniques for modeling software architecture do not provide sufficient support for modeling architectural design rules. This is a problem in the context of model-driven development in which it is assumed that major design artifacts are represented as formal or semi-formal models. This article addresses this problem by presenting an approach to modeling architectural design rules in UML at the abstraction level of the meaning of the rules. The high abstraction level and the use of UML make the rules both amenable to automation and easy to understand for both architects and developers, which is crucial to deployment in an organization. To provide a proof-of-concept, a tool was developed that validates a system model against the architectural rules in a separate UML model. To demonstrate the feasibility of the approach, the architectural design rules of an existing live industrial-strength system were modeled according to the approach.

Reference Type: Journal Article Record Number: 318

Author: Anna Queralt and E. Teniente Year: 2012 Title: Verification and Validation of UML Conceptual Schemas with OCL Constraints Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 2 Pages: 1-41 Short Title: Verification and Validation of UML Conceptual Schemas with OCL Constraints ISSN: 1049-331X DOI: 10.1145/2089116.2089123 Keywords: Conceptual modeling Constraints design OCL Requirements/specifications UML verification Abstract: To ensure the quality of an information system, it is essential that the conceptual schema that represents the knowledge about its domain is semantically correct. The semantic correctness of a conceptual schema can be seen from two different perspectives. On the one hand, from the point of view of its definition, a conceptual schema must be right. This is ensured by means of verification techniques that check whether the schema satisfies several correctness properties. On the other hand, from the point of view of the requirements that the information system should satisfy, a schema must also be the right one. This is ensured by means of validation techniques, which help the designer understand the exact meaning of a schema and to see whether it corresponds to the requirements. In this article we propose an approach to verify and validate UML conceptual schemas, with arbitrary constraints formalized in OCL. We have also implemented our approach to show its feasibility.

Reference Type: Journal Article Record Number: 120 Author: Antónia Lopes, M. Wermelinger and José L. Fiadeiro Year: 2003 Title: Higher-order architectural connectors Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 1 Pages: 64-104 Short Title: Higher-order architectural connectors ISSN: 1049-331X DOI: 10.1145/839268.839272 Legal Note: 839272 Abstract: We develop a notion of higher-order connector towards supporting the systematic construction of architectural connectors for software design. A higher-order connector takes connectors as parameters and allows for services such as security protocols and fault-tolerance mechanisms to be superposed over the interactions that

are handled by the connectors passed as actual arguments. The notion is first illustrated over CommUnity, a parallel program design language that we have been using for formalizing aspects of architectural design. A formal, algebraic semantics is then presented which is independent of any Architectural Description Language. Finally, we discuss how our results can impact software design methods and tools. Reference Type: Journal Article Record Number: 162 Author: M. Arnold, M. Vechev and E. Yahav Year: 2011 Title: QVM: An Efficient Runtime for Detecting Defects in Deployed Systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 1 Short Title: QVM: An Efficient Runtime for Detecting Defects in Deployed Systems ISSN: 1049-331X DOI: 10.1145/2063239.2063241 Keywords: Debugging, Diagnosis heap assertions Reliability run-time environments Testing and debugging Typestate virtual machines Abstract: Coping with software defects that occur in the post-deployment stage is a challenging problem: bugs may occur only when the system uses a specific configuration and only under certain usage scenarios. Nevertheless, halting production systems until the bug is tracked and fixed is often impossible. Thus, developers have to try to reproduce the bug in laboratory conditions. Often the reproduction of the bug consists of the lion share of the debugging effort. In this paper we suggest an approach to address the aforementioned problem by using a specialized runtime environment (QVM, for Quality Virtual Machine). QVM efficiently detects defects by continuously monitoring the execution of the application in a production setting. QVM enables the efficient checking of violations of user-specified correctness properties, e.g., typestate safety properties, Java assertions, and heap properties pertaining to ownership. QVM is markedly different from existing techniques for continuous monitoring by using a novel overhead manager which enforces a user-specified overhead budget for quality checks. Existing tools for error detection in the field usually disrupt the operation of the deployed system. QVM, on the other hand, provides a balanced trade off between the cost of the monitoring process and the maintenance of sufficient accuracy for detecting defects. Specifically, the overhead cost of using QVM instead of a standard JVM, is low enough to be acceptable in production environments. We implemented QVM on top of IBMs J9 Java Virtual Machine and used it to detect and fix various errors in realworld applications. Reference Type: Journal Article Record Number: 58

Author: L. Baresi and S. Morasca Year: 2007 Title: Three empirical studies on estimating the design effort of Web applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 4 Pages: 15 Short Title: Three empirical studies on estimating the design effort of Web applications ISSN: 1049-331X DOI: 10.1145/1276933.1276936 Legal Note: 1276936 Abstract: Our research focuses on the effort needed for designing modern Web applications. The design effort is an important part of the total development effort, since the implementation can be partially automated by tools. We carried out three empirical studies with students of advanced university classes enrolled in engineering and communication sciences curricula. The empirical studies are based on the use of W2000, a special-purpose design notation for the design of Web applications, but the hypotheses and results may apply to a wider class of modeling notations (e.g., OOHDM, WebML, or UWE). We started by investigating the relative importance of each design activity. We then assessed the accuracy of a priori design effort predictions and the influence of a few process-related factors on the effort needed for each design activity. We also analyzed the impact of attributes like the size and complexity of W2000 design artifacts on the total effort needed to design the user experience of web applications. In addition, we carried out a finer-grain analysis, by studying which of these attributes impact the effort devoted to the steps of the design phase that are followed when using W2000.

Reference Type: Journal Article Record Number: 96 Author: L. Baresi and M. Pezzè Year: 2005 Title: Formal interpreters for diagram notations Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 1 Pages: 42-84 Short Title: Formal interpreters for diagram notations ISSN: 1049-331X DOI: 10.1145/1044834.1044836 Legal Note: 1044836 Abstract: The article proposes an approach for defining extensible and flexible formal interpreters for diagram notations with significant dynamic semantics. More precisely, it addresses semi-formal diagram notations that have precisely-defined syntax, but informally defined (dynamic) semantics. These notations are often flexible to fit the

different needs and expectations of users. Flexibility comes from the incompleteness or informality of the original definition and results in different interpretations. The approach defines interpreters by means of a mapping onto a semantic domain. Two sets of rules define the correspondences between the elements of the diagram notation and those of the semantic domain, and between events and states of the semantic domain and visual annotations on the elements of the diagram notation. Flexibility also leads to notation families, that is, sets of notations that share core concepts, but present slightly different interpretations. Existing approaches usually interpret these notations in isolation; the approach presented in this article allows the interpretation of a family as a whole. The feasibility of the approach is demonstrated through a prototype generator that allows users to implement special-purpose interpreters by defining relatively small sets of rules.

Reference Type: Journal Article Record Number: 165 Author: Barthélémy Dagenais and M. P. Robillard Year: 2011 Title: Recommending Adaptive Changes for Framework Evolution Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 4 Short Title: Recommending Adaptive Changes for Framework Evolution ISSN: 1049-331X DOI: 10.1145/2000799.2000805 Keywords: Adaptive changes distribution Maintenance Enhancement Abstract: In the course of a framework's evolution, changes ranging from a simple refactoring to a complete rearchitecture can break client programs. Finding suitable replacements for framework elements that were accessed by a client program and deleted as part of the framework's evolution can be a challenging task. We present a recommendation system, SemDiff, that suggests adaptations to client programs by analyzing how a framework was adapted to its own changes. In a study of the evolution of one open source framework and three client programs, our approach recommended relevant adaptive changes with a high level of precision. In a second study of the evolution of two frameworks, we found that related change detection approaches were better at discovering systematic changes and that SemDiff was complementary to these approaches by detecting non-trivial changes such as when a functionality is imported from an external library.

Reference Type: Journal Article Record Number: 81 Author: D. Basin, Jürgen Doser and T. Lodderstedt Year: 2006 Title: Model driven security: From UML models to access control infrastructures Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 1 Pages: 39-91 Short Title: Model driven security: From UML models to access control infrastructures ISSN: 1049-331X DOI: 10.1145/1125808.1125810 Legal Note: 1125810 Abstract: We present a new approach to building secure systems. In our approach, which we call Model Driven Security, designers specify system models along with their security requirements and use tools to automatically generate system architectures from the models, including complete, configured access control infrastructures. Rather than fixing one particular modeling language for this process, we propose a general schema for constructing such languages that combines languages for modeling systems with

languages for modeling security. We present several instances of this schema that combine (both syntactically and semantically) different UML modeling languages with a security modeling language for formalizing access control requirements. From models in the combined languages, we automatically generate access control infrastructures for server-based applications, built from declarative and programmatic access control mechanisms. The modeling languages and generation process are semantically wellfounded and are based on an extension of Role-Based Access Control. We have implemented this approach in a UML-based CASE-tool and report on experiments. Reference Type: Journal Article Record Number: 61 Author: S. Basu and S. A. Smolka Year: 2007 Title: Model checking the Java metalocking algorithm Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 3 Pages: 12 Short Title: Model checking the Java metalocking algorithm ISSN: 1049-331X DOI: 10.1145/1243987.1243990 Legal Note: 1243990 Abstract: We report on our efforts to use the XMC model checker to model and verify the Java metalocking algorithm. XMC [Ramakrishna et al. 1997] is a versatile and efficient model checker for systems specified in XL, a highly expressive value-passing language. Metalocking [Agesen et al. 1999] is a highly-optimized technique for ensuring mutually exclusive access by threads to object monitor queues and, therefore; plays an essential role in allowing Java to offer concurrent access to objects. Metalocking can be viewed as a two-tiered scheme. At the upper level, the metalock level, a thread waits until it can enqueue itself on an object's monitor queue in a mutually exclusive manner. At the lower level, the monitor-lock level, enqueued threads race to obtain exclusive access to the object. Our abstract XL specification of the metalocking algorithm is fully parameterized, both on the number of threads M, and the number of objects N. It also captures a sophisticated optimization of the basic metalocking algorithm known as extra-fast locking and unlocking of uncontended objects. Using XMC, we show that for a variety of values of M and N, the algorithm indeed provides mutual exclusion and freedom from deadlock and lockout at the metalock level. We also show that, while the monitor-lock level of the protocol preserves mutual exclusion and deadlock-freedom, it is not lockout-free because the protocol's designers chose to give equal preference to awaiting threads and newly arrived threads. Reference Type: Journal Article Record Number: 130 Author: D. Batory, C. Johnson, B. MacDonald and D. v. Heeder

Year: 2002 Title: Achieving extensibility through product-lines and domain-specific languages: a case study Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 2 Pages: 191-214 Short Title: Achieving extensibility through product-lines and domain-specific languages: a case study ISSN: 1049-331X DOI: 10.1145/505145.505147 Legal Note: 505147 Abstract: This is a case study in the use of product-line architectures (PLAs) and domain-specific languages (DSLs) to design an extensible command-and-control simulator for Army fire support. The reusable components of our PLA are layers or "aspects" whose addition or removal simultaneously impacts the source code of multiple objects in multiple, distributed programs. The complexity of our component specifications is substantially reduced by using a DSL for defining and refining state machines, abstractions that are fundamental to simulators. We present preliminary results that show how our PLA and DSL synergistically produce a more flexible way of implementing state-machine-based simulators than is possible with a pure Java implementation.

Reference Type: Journal Article Record Number: 169 Author: A. Bauer, M. Leucker and C. Schallhart Year: 2011 Title: Runtime Verification for LTL and TLTL Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 4 Short Title: Runtime Verification for LTL and TLTL ISSN: 1049-331X DOI: 10.1145/2000799.2000800 Keywords: Assertion checkers Monitors Runtime verification Abstract: This paper studies runtime verification of properties expressed either in linear-time temporal logic (LTL) or timed linear-time temporal logic (TLTL). It classifies runtime verification in identifying its distinguishing features to model checking and testing, respectively. It introduces a three-valued semantics (with truth values true, false, inconclusive) as an adequate interpretation as to whether a partial observation of a running system meets an LTL or TLTL property. For LTL, a conceptually simple monitor generation procedure is given, which is optimal in two respects: First, the size of the generated deterministic monitor is minimal, and, second,

the monitor identifies a continuously monitored trace as either satisfying or falsifying a property as early as possible. The feasibility of the developed methodology is demonstrated using a collection of real-world temporal logic specifications. Moreover, the presented approach is related to the properties monitorable in general and is compared to existing concepts in the literature. It is shown that the set of monitorable properties does not only encompass the safety and co-safety properties but is strictly larger. For TLTL, the same road map is followed by first defining a three-valued semantics. The corresponding construction of a timed monitor is more involved, yet, as shown, possible.

Reference Type: Journal Article Record Number: 25 Author: L. Bauer, J. Ligatti and D. Walker Year: 2009 Title: Composing expressive runtime security policies Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 3 Pages: 1-43 Short Title: Composing expressive runtime security policies ISSN: 1049-331X DOI: 10.1145/1525880.1525882 Legal Note: 1525882 Abstract: Program monitors enforce security policies by interposing themselves into the control flow of untrusted software whenever that software attempts to execute security-relevant actions. At the point of interposition, a monitor has authority to permit or deny (perhaps conditionally) the untrusted software's attempted action. Program monitors are common security enforcement mechanisms and integral parts of operating systems, virtual machines, firewalls, network auditors, and antivirus and antispyware tools. Unfortunately, the runtime policies we require program monitors to enforce grow more complex, both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We formalize the central workings of Polymer by defining an unambiguous semantics for our language. Using this formalization, we state and prove an uncircumventability theorem which guarantees that monitors will intercept all security-relevant actions of untrusted software.

Reference Type: Journal Article Record Number: 137

Author: J.-R. Beauvais, E. Rutten, T. Gautier, R. Houdebine, P. L. Guernic and Y.-M. Tang Year: 2001 Title: Modeling statecharts and activitycharts as signal equations Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 4 Pages: 397-451 Short Title: Modeling statecharts and activitycharts as signal equations ISSN: 1049-331X DOI: 10.1145/384189.384191 Legal Note: 384191 Abstract: The languages for modeling reactive systems are of different styles, like the imperative, state-based ones and the declarative, data-flow ones. They are adapted to different application domains. This paper, through the example of the languages Statecharts and Signal, shows a way to give a model of an imperative specification (Statecharts) in a declarative, equational one (Signal). This model constitutes a formal model of the Statemate semantics of Statecharts, upon which formal analysis techniques can be applied. Being a transformation from an imperative to a declarative structure, it involves the definition of generic models for the explicit management of state (in the case of control as well as of data). In order to obtain a structural construction of the model, a hierarchical and modular organization is proposed, including proper management and propagation of control along the hierarchy. The results presented here cover the essential features of Statecharts as well as of another language of Statemate: Activitycharts. As a translation, it makes multiformalism specification possible, and provides support for the integrated operation of the languages. The motivation lies also in the perspective of gaining access to the various formal analysis and implementation tools of the synchronous technology, using the DC1 exchange format, as in the Sacres programming environment. Reference Type: Journal Article Record Number: 123 Author: M. Bernardo, P. Ciancarini and L. Donatiello Year: 2002 Title: Architecting families of software systems with process algebras Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 4 Pages: 386-426 Short Title: Architecting families of software systems with process algebras ISSN: 1049-331X DOI: 10.1145/606612.606614 Legal Note: 606614 Abstract: Software components can give rise to several kinds of architectural mismatches when assembled together in order to form a software system. A formal

description of the architecture of the resulting component-based software system may help to detect such architectural mismatches and to single out the components that cause the mismatches. In this article, we concentrate on deadlock-related architectural mismatches arising from three different causes that we identify: incompatibility between two components due to a single interaction, incompatibility between two components due to the combination of several interactions, and lack of interoperability among a set of components forming a cyclic topology. We develop a process algebra-based architectural description language called PADL, which deals with all three causes through an architectural compatibility check and an architectural interoperability check relying on standard observational equivalences. The adequacy of the architectural compatibility check is assessed on a compressing proxy system, while the adequacy of the architectural interoperability check is assessed on a cruise control system. We then address the issue of scaling the architectural compatibility and interoperability checks to architectural styles through an extension of PADL. The formalization of an architectural style is complicated by the presence of two degrees of freedom within the set of instances of the style: variability of the internal behavior of the components and variability of the topology formed by the components. As a first step towards the solution of the problem, we propose an intermediate abstraction called architectural type, whose instances differ only for the internal behavior of their components. We define an efficient architectural conformity check based on a standard observational equivalence to verify whether an architecture is an instance of an architectural type. We show that all the architectures conforming to the same architectural type possess the same compatibility and interoperability properties. Reference Type: Journal Article Record Number: 91 Author: J. Berstel, S. C. Reghizzi, G. Roussel and P. S. Pietro Year: 2005 Title: A scalable formal method for design and automatic checking of user interfaces Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 2 Pages: 124-167 Short Title: A scalable formal method for design and automatic checking of user interfaces ISSN: 1049-331X DOI: 10.1145/1061254.1061256 Legal Note: 1061256 Abstract: The article addresses the formal specification, design and implementation of the behavioral component of graphical user interfaces. The complex sequences of visual events and actions that constitute dialogs are specified by means of modular, communicating grammars called VEG (Visual Event Grammars), which extend traditional BNF grammars to make them more convenient to model dialogs.A VEG specification is independent of the actual layout of the GUI, but it can easily be integrated with various layout design toolkits. Moreover, a VEG specification may be

verified with the model checker SPIN, in order to test consistency and correctness, to detect deadlocks and unreachable states, and also to generate test cases for validation purposes.Efficient code is automatically generated by the VEG toolkit, based on compiler technology. Realistic applications have been specified, verified and implemented, like a Notepad-style editor, a graph construction library and a large real application to medical software. It is also argued that VEG can be used to specify and test voice interfaces and multimodal dialogs. The major contribution of our work is blending together a set of features coming from GUI design, compilers, software engineering and formal verification. Even though we do not claim novelty in each of the techniques adopted for VEG, they have been united into a toolkit supporting all GUI design phases, that is, specification, design, verification and validation, linking to applications and coding. Reference Type: Journal Article Record Number: 142 Author: J. Bible, G. Rothermel and D. S. Rosenblum Year: 2001 Title: A comparative study of coarse- and fine-grained safe regression test-selection techniques Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 2 Pages: 149-183 Short Title: A comparative study of coarse- and fine-grained safe regression testselection techniques ISSN: 1049-331X DOI: 10.1145/367008.367015 Legal Note: 367015 Abstract: Regression test-selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program. Over the past two decades, numerous regression test-selection techniques have been described in the literature. Initial empirical studies of some of these techniques have suggested that they can indeed benefit testers, but so far, few studies have empirically compared different techniques. In this paper, we present the results of a comparative empirical study of two safe regression test-selection techniques. The techniques we studied have been implemented as the tools DejaVu and TestTube; we compared these tools in terms of a cost model incorporating precision (ability to eliminate unnecessary test cases), analysis cost, and test execution cost. Our results indicate, that in many instances, despite its relative lack of precision, TestTube can reduce the time required for regression testing as much as the more precise DejaVu. In other instances, particularly where the time required to execute test cases is long, DejaVu's superior precision gives it a clear advantage over TestTube. Such variations in relative performance can complicate a tester's choice of which tool to use. Our experimental results suggest that a hybrid regression test-selection tool that combines features of TestTube and DejaVu may be an answer to these complications; we present an initial

case study that demonstrates the potential benefit of such a tool. Reference Type: Journal Article Record Number: 64 Author: D. Binkley, N. Gold and M. Harman Year: 2007 Title: An empirical study of static program slice size Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 2 Pages: 8 Short Title: An empirical study of static program slice size ISSN: 1049-331X DOI: 10.1145/1217295.1217297 Legal Note: 1217297 Abstract: This article presents results from a study of all slices from 43 programs, ranging up to 136,000 lines of code in size. The study investigates the effect of five aspects that affect slice size. Three slicing algorithms are used to study two algorithmic aspects: calling-context treatment and slice granularity. The remaining three aspects affect the upstream dependencies considered by the slicer. These include collapsing structure fields, removal of dead code, and the influence of points-to analysis. The results show that for the most precise slicer, the average slice contains just under one-third of the program. Furthermore, ignoring calling context causes a 50% increase in slice size, and while (coarse-grained) function-level slices are 33% larger than corresponding statement-level slices, they may be useful predictors of the (finergrained) statement-level slice size. Finally, upstream analyses have an order of magnitude less influence on slice size. Reference Type: Journal Article Record Number: 72 Author: M. Brambilla, S. Ceri, P. Fraternali and I. Manolescu Year: 2006 Title: Process modeling in Web applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 4 Pages: 360-409 Short Title: Process modeling in Web applications ISSN: 1049-331X DOI: 10.1145/1178625.1178627 Legal Note: 1178627 Abstract: While Web applications evolve towards ubiquitous, enterprise-wide or multienterprise information systems, they face new requirements, such as the capability

of managing complex processes spanning multiple users and organizations, by interconnecting software provided by different organizations. Significant efforts are currently being invested in application integration, to support the composition of business processes of different companies, so as to create complex, multiparty business scenarios. In this setting, Web applications, which were originally conceived to allow the user-to-system dialogue, are extended with Web services, which enable system-to-system interaction, and with process control primitives, which permit the implementation of the required business constraints. This article presents new Web engineering methods for the high-level specification of applications featuring business processes and remote services invocation. Process- and service-enabled Web applications benefit from the high-level modeling and automatic code generation techniques that have been fruitfully applied to conventional Web applications, broadening the class of Web applications that take advantage of these powerful software engineering techniques. All the concepts presented in this article are fully implemented within a CASE tool.

Reference Type: Journal Article Record Number: 114 Author: M. G. J. v. d. Brand, P. Klint and J. J. Vinju Year: 2003 Title: Term rewriting with traversal functions Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 2 Pages: 152-190 Short Title: Term rewriting with traversal functions ISSN: 1049-331X DOI: 10.1145/941566.941568 Legal Note: 941568 Abstract: Term rewriting is an appealing technique for performing program analysis and program transformation. Tree (term) traversal is frequently used but is not supported by standard term rewriting. We extend many-sorted, first-order term rewriting with traversal functions that automate tree traversal in a simple and type-safe way. Traversal functions can be bottom-up or top-down traversals and can either traverse all nodes in a tree or can stop the traversal at a certain depth as soon as a matching node is found. They can either define sort-preserving transformations or mappings to a fixed sort. We give small and somewhat larger examples of traversal functions and describe their operational semantics and implementation. An assessment of various applications and a discussion conclude the article.

Reference Type: Journal Article Record Number: 30 Author: T. D. Breaux, A. I. Antón and J. Doyle Year: 2008

Title: Semantic parameterization: A process for modeling domain descriptions Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 2 Pages: 1-27 Short Title: Semantic parameterization: A process for modeling domain descriptions ISSN: 1049-331X DOI: 10.1145/1416563.1416565 Legal Note: 1416565 Abstract: Software engineers must systematically account for the broad scope of environmental behavior, including nonfunctional requirements, intended to coordinate the actions of stakeholders and software systems. The Inquiry Cycle Model (ICM) provides engineers with a strategy to acquire and refine these requirements by having domain experts answer six questions: who, what, where, when, how, and why. Goalbased requirements engineering has led to the formalization of requirements to answer the ICM questions about when, how, and why goals are achieved, maintained, or avoided. In this article, we present a systematic process called Semantic Parameterization for expressing natural language domain descriptions of goals as specifications in description logic. The formalization of goals in description logic allows engineers to automate inquiries using who, what, and where questions, completing the formalization of the ICM questions. The contributions of this approach include new theory to conceptually compare and disambiguate goal specifications that enables querying goals and organizing goals into specialization hierarchies. The artifacts in the process include a dictionary that aligns the domain lexicon with unique concepts, distinguishing between synonyms and polysemes, and several natural language patterns that aid engineers in mapping common domain descriptions to formal specifications. Semantic Parameterization has been empirically validated in three case studies on policy and regulatory descriptions that govern information systems in the finance and health-care domains. Reference Type: Journal Article Record Number: 14 Author: A. Brogi, R. Popescu and M. Tanca Year: 2010 Title: Design and implementation of Sator: A web service aggregator Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 3 Pages: 1-21 Short Title: Design and implementation of Sator: A web service aggregator ISSN: 1049-331X DOI: 10.1145/1656250.1656254 Legal Note: 1656254 Abstract: Our long-term objective is to develop a general methodology for deploying (Web) service aggregation and adaptation middleware, capable of suitably overcoming

syntactic and behavioral mismatches in view of application integration within and across organizational boundaries. This article focuses on describing the core aggregation process, which generates the workflow of a composite service from a set of service workflows to be aggregated and a data-flow mapping linking service parameters. Notes: Software Construction Tools > Program Editors

Reference Type: Journal Article Record Number: 70 Author: M. Broy, I. H. Krüger and M. Meisinger Year: 2007 Title: A formal model of services Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 1 Pages: 5 Short Title: A formal model of services ISSN: 1049-331X DOI: 10.1145/1189748.1189753 Legal Note: 1189753 Abstract: Service-oriented software systems rapidly gain importance across application domains: They emphasize functionality (services), rather than structural entities (components), as the basic building block for system composition. More specifically, services coordinate the interplay of components to accomplish specific tasks. In this article, we establish a foundation of service orientation: Based on the Focus theory of distributed systems (see Broy and Stølen [2001]), we introduce a theory and formal model of services. In Focus, systems are composed of interacting components. A component is a total behavior. We introduce a formal model of services where, in contrast, a service is a partial behavior. For services and components, we work out foundational specification techniques and outline methodological development steps. We show how services can be structured and how software architectures can be composed of services and components. Although our emphasis is on a theoretical foundation of the notion of services, we demonstrate utility of the concepts we introduce by means of a running example from the automotive domain.

Reference Type: Journal Article Record Number: 181 Author: Changhai Nie and H. Leung Year: 2011 Title: The Minimal Failure-Causing Schema of Combinatorial Testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 4

Short Title: The Minimal Failure-Causing Schema of Combinatorial Testing ISSN: 1049-331X DOI: 10.1145/2000799.2000801 Keywords: Combinatorial testing Failure diagnosis Abstract: Combinatorial Testing (CT) involves the design of a small test suite to cover the parameter value combinations so as to detect failures triggered by the interactions among these parameters. To make full use of CT and to extend its advantages, this article first gives a model of CT and then presents a theory of the Minimal Failurecausing Schema (MFS), including the concept of the MFS, proof of its existence, some of its properties, and a method of finding the MFS. Then we propose a methodology for CT based on this MFS theory and the existing research. Our MFS-based methodology emphasizes that CT should work on accurate testing requirements, and has the following advantages: 1) Detect failure to the greatest degree with the least cost. 2) Effectiveness is improved by emphasizing mining of the information in software and making full use of the information gained from test design and execution. 3) Determine the root causes of failures and reveal related faults near the exposed ones. 4) Provide a foundation and model for regression testing and software quality evaluation of CT. A case study is presented to illustrate the MFS-based CT methodology, and an empirical study on a real software developed by us shows that the MFS really exists and the methodology based on MFS can considerably improve CT. Reference Type: Journal Article Record Number: 108 Author: M. Chechik, B. Devereux, S. Easterbrook and A. Gurfinkel Year: 2003 Title: Multi-valued symbolic model-checking Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 4 Pages: 371-408 Short Title: Multi-valued symbolic model-checking ISSN: 1049-331X DOI: 10.1145/990010.990011 Legal Note: 990011 Abstract: This article introduces the concept of multi-valued model-checking and describes a multi-valued symbolic model-checker, Chek. Multi-valued model-checking is a generalization of classical model-checking, useful for analyzing models that contain uncertainty (lack of essential information) or inconsistency (contradictory information, often occurring when information is gathered from multiple sources). Multi-valued logics support the explicit modeling of uncertainty and disagreement by providing additional truth values in the logic.This article provides a theoretical basis for multi-valued modelchecking and discusses some of its applications. A companion article [Chechik et al. 2002b] describes implementation issues in detail. The model-checker works for any member of a large class of multi-valued logics. Our modeling language is based on a

generalization of Kripke structures, where both atomic propositions and transitions between states may take any of the truth values of a given multi-valued logic. Properties are expressed in χCTL, our multi-valued extension of the temporal logic CTL. We define the class of logics, present the theory of multi-valued sets and multi-valued relations used in our model-checking algorithm, and define the multi-valued extensions of CTL and Kripke structures. We explore the relationship between χCTL and CTL, and provide a symbolic model-checking algorithm for χCTL. We also address the use of fairness in multi-valued model-checking. Finally, we discuss some applications of the multi-valued model-checking approach.

Reference Type: Journal Article Record Number: 10 Author: C. Chen, J. S. Dong, J. Sun and A. Martin Year: 2010 Title: A verification system for interval-based specification languages Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 4 Pages: 1-36 Short Title: A verification system for interval-based specification languages ISSN: 1049-331X DOI: 10.1145/1734229.1734232 Legal Note: 1734232 Abstract: Interval-based specification languages have been used to formally model and rigorously reason about real-time computing systems. This usually involves logical reasoning and mathematical computation with respect to continuous or discrete time. When these systems are complex, analyzing their models by hand becomes error-prone and difficult. In this article, we develop a verification system to facilitate the formal analysis of interval-based specification languages with machine-assisted proof support. The verification system is developed using a generic theorem prover, Prototype Verification System (PVS). Our system elaborately encodes a highly expressive set-based notation, Timed Interval Calculus (TIC), and can rigorously carry out the verification of TIC models at an interval level. We validated all TIC reasoning rules and discovered subtle flaws in the original rules. We also apply TIC to model Duration Calculus (DC), which is a popular interval-based specification language, and thus expand the capacity of the verification system. We can check the correctness of DC axioms, and execute DC proofs in a manner similar to the corresponding pencil-and-paper DC arguments. Notes: Software Requirements Tools > Requirements Modeling Tools

Reference Type: Journal Article Record Number: 146 Author: H. Y. Chen, T. H. Tse and T. Y. Chen Year: 2001

Title: TACCLE: a methodology for object-oriented software testing at the class and cluster levels Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 1 Pages: 56-109 Short Title: TACCLE: a methodology for object-oriented software testing at the class and cluster levels ISSN: 1049-331X DOI: 10.1145/366378.366380 Legal Note: 366380 Abstract: Object-oriented programming consists of several different levels of abstraction, namely, the algorithmic level, class level, cluster level, and system level. The testing of object-oriented software at the algorithmic and system levels is similar to conventional program testing. Testing at the class and cluster levels poses new challenges. Since methods and objects may interact with one another with unforeseen combinations and invocations, they are much more complex to simulate and test than the hierarchy of functional calls in conventional programs. In this paper, we propose a methodology for object-oriented software testing at the class and cluster levels. In class-level testing, it is essential to determine whether objects produced from the execution of implemented systems would preserve the properties defined by the specification, such as behavioral equivalence and nonequivalence. Our class-level testing methodology addresses both of these aspects. For the testing of behavioral equivalence, we propose to select fundamental pairs of equivalent ground terms as test cases using a black-box technique based on algebraic specifications, and then determine by means of a white-box technique whether the objects resulting from executing such test cases are observationally equivalent. To address the testing of behavioral nonequivalence, we have identified and analyzed several nontrivial problems in the current literature. We propose to classify term equivalence into four types, thereby setting up new concepts and deriving important properties. Based on these results, we propose an approach to deal with the problems in the generation of nonequivalent ground terms as test cases. Relatively little research has contributed to cluster-level testing. In this paper, we also discuss black-box testing at the cluster level. We illustrate the feasibility of using Contract, a formal specification language for the behavioral dependencies and interactions among cooperating objects of different classes in a given cluster. We propose an approach to test the interactions among different classes using every individual message-passing rule in the given Contract specification. We also present an approach to examine the interactions among composite message-passing sequences. We have developed four testing tools to support our methodology. Reference Type: Journal Article Record Number: 44 Author: T. Y. Chen and R. Merkel Year: 2008 Title: An upper bound on software testing effectiveness

Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 3 Pages: 1-27 Short Title: An upper bound on software testing effectiveness ISSN: 1049-331X DOI: 10.1145/1363102.1363107 Legal Note: 1363107 Abstract: Failure patterns describe typical ways in which inputs revealing program failure are distributed across the input domain; in many cases, they are clustered together in contiguous regions. Based on these observations, several debug testing methods have been developed. We examine the upper bound of debug testing effectiveness improvements possible through making assumptions about the shape, size and orientation of failure patterns. We consider the bounds for testing strategies with respect to minimizing the F-measure, maximizing the P-measure, and maximizing the E-measure. Surprisingly, we find that the empirically measured effectiveness of some existing methods that are not based on these assumptions is close to the theoretical upper bound of these strategies. The assumptions made to obtain the upper bound, and its further implications, are also examined. Reference Type: Journal Article Record Number: 319 Author: Christian Kästner, Sven Apel, Thomas Thüm and G. Saake Year: 2012 Title: Type checking annotation-based product lines Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 3 Pages: 1-39 Short Title: Type checking annotation-based product lines ISSN: 1049-331X DOI: 10.1145/2211616.2211617 Keywords: Conditional compilation Design Featherweight Java Languages Preprocessors Program editors Software product lines Abstract: Software product line engineering is an efficient means of generating a family of program variants for a domain from a single code base. However, because of the potentially high number of possible program variants, it is difficult to test them all and ensure properties like type safety for the entire product line. We present a product-line-aware type system that can type check an entire software product line without generating each variant in isolation. Specifically, we extend the Featherweight Java

calculus with feature annotations for product-line development and prove formally that all program variants generated from a well-typed product line are well typed. Furthermore, we present a solution to the problem of typing mutually exclusive features. We discuss how results from our formalization helped implement our own product-line tool CIDE for full Java and report on our experience with detecting type errors in four existing software product line implementations. Reference Type: Journal Article Record Number: 47 Author: J. M. Cobleigh, G. S. Avrunin and L. A. Clarke Year: 2008 Title: Breaking up is hard to do: An evaluation of automated assume-guarantee reasoning Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-52 Short Title: Breaking up is hard to do: An evaluation of automated assume-guarantee reasoning ISSN: 1049-331X DOI: 10.1145/1348250.1348253 Legal Note: 1348253 Abstract: Finite-state verification techniques are often hampered by the state-explosion problem. One proposed approach for addressing this problem is assume-guarantee reasoning, where a system under analysis is partitioned into subsystems and these subsystems are analyzed individually. By composing the results of these analyses, it can be determined whether or not the system satisfies a property. Because each subsystem is smaller than the whole system, analyzing each subsystem individually may reduce the overall cost of verification. Often the behavior of a subsystem is dependent on the subsystems with which it interacts, and thus it is usually necessary to provide assumptions about the environment in which a subsystem executes. Because developing assumptions has been a difficult manual task, the evaluation of assume-guarantee reasoning has been limited. Using recent advances for automatically generating assumptions, we undertook a study to determine if assume-guarantee reasoning provides an advantage over monolithic verification. In this study, we considered all two-way decompositions for a set of systems and properties, using two different verifiers, FLAVERS and LTSA. By increasing the number of repeated tasks in these systems, we evaluated the decompositions as they were scaled. We found that in only a few cases can assume-guarantee reasoning verify properties on larger systems than monolithic verification can, and in these cases the systems that can be analyzed are only a few sizes larger. Although these results are discouraging, they provide insight about research directions that should be pursued and highlight the importance of experimental evaluation in this area.

Reference Type: Journal Article Record Number: 113 Author: A. Coen-Porisini, M. Pradella, M. Rossi and D. Mandrioli Year: 2003 Title: A formal approach for designing CORBA-based applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 2 Pages: 107-151 Short Title: A formal approach for designing CORBA-based applications ISSN: 1049-331X DOI: 10.1145/941566.941567 Legal Note: 941567 Abstract: The design of distributed applications in a CORBA-based environment can be carried out by means of an incremental approach, which starts from the specification and leads to the high-level architectural design. This article discusses a methodology to transform a formal specification written in TRIO into a high-level design document written in an extension of TRIO, named TRIO/CORBA (TC). The TC language is suited to formally describe the high-level architecture of a CORBA-based application. As a result, designers are offered high-level concepts that precisely define the architectural elements of an application. Furthermore, TC offers mechanisms to extend its base semantics, and can be adapted to future developments and enhancements in the CORBA standard. The methodology and the associated language are presented through a case study derived from a real Supervision and Control System. Reference Type: Journal Article Record Number: 111 Author: Y. Cohen and Y. A. Feldman Year: 2003 Title: Automatic high-quality reengineering of database programs by abstraction, transformation and reimplementation Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 3 Pages: 285-316 Short Title: Automatic high-quality reengineering of database programs by abstraction, transformation and reimplementation ISSN: 1049-331X DOI: 10.1145/958961.958962 Legal Note: 958962 Abstract: Old-generation database models, such as the indexed-sequential, hierarchical, or network models, provide record-level access to their data, with all application logic residing in the hosting program. In contrast, relational databases can perform complex operations, such as filter, aggregation, and join, on multiple records without an external specification of the record-access logic. Programs written for

relational databases attempt to move as much of the application logic as possible into the database, in order to make the most of the optimizations performed internally by the database. This conceptual gap between the programming styles makes automatic high-quality translation of programs written for the older database models to the relational model difficult. It is not enough to convert just the database-access operations, since this would result in unacceptably inefficient programs. It is necessary to convert parts of the application logic from the procedural style of the hosting program (which is almost always Cobol) to the declarative style of SQL. This article describes an automatic system, called MIDAS, that performs high-quality reengineering of legacy database programs in this way. MIDAS is based on the paradigm of translation by abstraction, transformation, and reimplementation. The abstract representation is based on the Plan Calculus, with the addition of Query Graphs, introduced in this article in order to abstract the temporal behavior of database access patterns. The results of MIDAS's translation were found to be superior to those of the naive translation that only converts database-access operations in terms of readability, size of code, speed, and network data traffic. Initial industrial experience with MIDAS also demonstrates the high quality of its translations on large-scale programs. Reference Type: Journal Article Record Number: 6 Author: K. Conboy and B. Fitzgerald Year: 2010 Title: Method and developer characteristics for effective agile method tailoring: A study of XP expert opinion Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 1 Pages: 1-30 Short Title: Method and developer characteristics for effective agile method tailoring: A study of XP expert opinion ISSN: 1049-331X DOI: 10.1145/1767751.1767753 Legal Note: 1767753 Abstract: It has long been acknowledged that software methods should be tailored if they are to achieve optimum effect. However, comparatively little research has been carried out to date on this topic in general and, more notably, on agile methods in particular. This dearth of evidence in the case of agile methods is especially significant in that it is reasonable to expect that such methods would particularly lend themselves to tailoring. In this research, we present a framework based on interviews with 20 senior software development researchers and a review of the extant literature. The framework comprises two sets of factors, characteristics of the method and developer practices, that can improve method tailoring effectiveness. Drawing on the framework, we then interviewed 16 expert XP practitioners to examine the current state and effectiveness of XP tailoring efforts, and to shed light on issues the framework identified as being important. The article concludes with a set of recommendations for research

and practice that would advance our understanding of the method tailoring area. Reference Type: Journal Article Record Number: 99 Author: G. Costagliola, V. Deufemia and G. Polese Year: 2004 Title: A framework for modeling and implementing visual notations with applications to software engineering Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 4 Pages: 431-487 Short Title: A framework for modeling and implementing visual notations with applications to software engineering ISSN: 1049-331X DOI: 10.1145/1040291.1040293 Legal Note: 1040293 Abstract: We present a framework for modeling visual notations and for generating the corresponding visual programming environments. The framework can be used for modeling the diagrammatic notations of software development methodologies, and to generate visual programming environments with CASE tools functionalities. This is accomplished through an underlying modeling process based on the visual notation syntactic model of eXtended Positional Grammars (XPG, for short), and the associated parsing methodology, XpLR. In particular, the process requires the modeling of the basic elements (visual symbols) of a visual notation, their syntactic properties, the relations between them, the syntactic rules to formally define the set of feasible visual sentences, and a set of semantic routines performing additional checks and translation tasks. Such a process is completely supported by the VLDesk system, which enables the automatic generation of an editor for drawing visual sentences, as well as a processor for their recognition, parsing, and translation into other notations. The proposed framework also provides the basis for the definition of a meta-CASE technology. In fact, we can customize the generated visual programming environment in terms of the supported visual notation, its syntactic properties, and the translation rules. We have used this framework to model several diagrammatic notations used in software development methodologies, including those of the Unified Modeling Language. Reference Type: Journal Article Record Number: 77 Author: S. Counsell, S. Swift and J. Crampton Year: 2006 Title: The interpretation and utility of three cohesion metrics for object-oriented design Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 2

Pages: 123-149 Short Title: The interpretation and utility of three cohesion metrics for object-oriented design ISSN: 1049-331X DOI: 10.1145/1131421.1131422 Legal Note: 1131422 Abstract: The concept of cohesion in a class has been the subject of various recent empirical studies and has been measured using many different metrics. In the structured programming paradigm, the software engineering community has adopted an informal yet meaningful and understandable definition of cohesion based on the work of Yourdon and Constantine. The object-oriented (OO) paradigm has formalised various cohesion measures, but the argument over which of those metrics is most meaningful continues to be debated. Yet achieving highly cohesive software is fundamental to its comprehension and thus its maintainability. In this article we subject two object-oriented cohesion metrics, CAMC and NHD, to a rigorous mathematical analysis in order to better understand and interpret them. This analysis enables us to offer substantial arguments for preferring the NHD metric to CAMC as a measure of cohesion. Furthermore, we provide a complete understanding of the behaviour of these metrics, enabling us to attach a meaning to the values calculated by the CAMC and NHD metrics. In addition, we introduce a variant of the NHD metric and demonstrate that it has several advantages over CAMC and NHD. While it may be true that a generally accepted formal and informal definition of cohesion continues to elude the OO software engineering community, there seems to be considerable value in being able to compare, contrast, and interpret metrics which attempt to measure the same features of software. Reference Type: Journal Article Record Number: 48 Author: C. Csallner, Y. Smaragdakis and T. Xie Year: 2008 Title: DSD-Crasher: A hybrid analysis tool for bug finding Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-37 Short Title: DSD-Crasher: A hybrid analysis tool for bug finding ISSN: 1049-331X DOI: 10.1145/1348250.1348254 Legal Note: 1348254 Abstract: DSD-Crasher is a bug finding tool that follows a three-step approach to program analysis: D. Capture the program's intended execution behavior with dynamic invariant detection. The derived invariants exclude many unwanted values from the program's input domain. S. Statically analyze the program within the restricted input domain to explore many

paths. D. Automatically generate test cases that focus on reproducing the predictions of the static analysis. Thereby, confirmed results are feasible. This three-step approach yields benefits compared to past two-step combinations in the literature. In our evaluation with third-party applications, we demonstrate higher precision over tools that lack a dynamic step and higher efficiency over tools that lack a static step. Notes: Software Testing Tools > Test Generators Reference Type: Journal Article Record Number: 314 Author: Dario Fischbein, Nicolas D'Ippolito, Greg Brunet, Marsha Chechik and S. Uchitel Year: 2012 Title: Weak Alphabet Merging of Partial Behavior Models Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 2 Pages: 1-47 Short Title: Weak Alphabet Merging of Partial Behavior Models ISSN: 1049-331X DOI: 10.1145/2089116.2089119 Keywords: Algorithms Design Merge MTS Partial behavior models Requirements/specifications Temporal logic Theory Abstract: Constructing comprehensive operational models of intended system behavior is a complex and costly task, which can be mitigated by the construction of partial behavior models, providing early feedback and subsequently elaborating them iteratively. However, how should partial behavior models with different viewpoints covering different aspects of behavior be composed? How should partial models of component instances of the same type be put together? In this article, we propose model merging of modal transition systems (MTSs) as a solution to these questions. MTS models are a natural extension of labelled transition systems that support explicit modeling of what is currently unknown about system behavior. We formally define model merging based on weak alphabet refinement, which guarantees property preservation, and show that merging consistent models is a process that should result

in a minimal common weak alphabet refinement (MCR). In this article, we provide theoretical results and algorithms that support such a process. Finally, because in practice MTS merging is likely to be combined with other operations over MTSs such as parallel composition, we also study the algebraic properties of merging and apply these, together with the algorithms that support MTS merging, in a case study. Reference Type: Journal Article Record Number: 93 Author: E. M. Dashofy, A. van der Hoek and R. N. Taylor Year: 2005 Title: A comprehensive approach for the development of modular software architecture description languages Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 2 Pages: 199-245 Short Title: A comprehensive approach for the development of modular software architecture description languages ISSN: 1049-331X DOI: 10.1145/1061254.1061258 Legal Note: 1061258 Abstract: Research over the past decade has revealed that modeling software architecture at the level of components and connectors is useful in a growing variety of contexts. This has led to the development of a plethora of notations for representing software architectures, each focusing on different aspects of the systems being modeled. In general, these notations have been developed without regard to reuse or extension. This makes the effort in adapting an existing notation to a new purpose commensurate with developing a new notation from scratch. To address this problem, we have developed an approach that allows for the rapid construction of new architecture description languages (ADLs). Our approach is unique because it encapsulates ADL features in modules that are composed to form ADLs. We achieve this by leveraging the extension mechanisms provided by XML and XML schemas. We have defined a set of generic, reusable ADL modules called xADL 2.0, useful as an ADL by itself, but also extensible to support new applications and domains. To support this extensibility, we have developed a set of reflective syntax-based tools that adapt to language changes automatically, as well as several semantically-aware tools that provide support for advanced features of xADL 2.0. We demonstrate the effectiveness, scalability, and flexibility of our approach through a diverse set of experiences. First, our approach has been applied in industrial contexts, modeling software architectures for aircraft software and spacecraft systems. Second, we show how xADL 2.0 can be extended to support the modeling features found in two different representations for modeling product-line architectures. Finally, we show how our infrastructure has been used to support its own development. The technical contribution of our infrastructure is augmented by several research contributions: the first decomposition of an architecture description language into modules, insights about how to develop new language

modules and a process for integrating them, and insights about the roles of different kinds of tools in a modular ADL-based infrastructure. Reference Type: Journal Article Record Number: 194 Author: David W. Binkley, Mark Harman and K. Lakhotia Year: 2011 Title: FlagRemover: A testability transformation for transforming loop-assigned flags Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 3 Short Title: FlagRemover: A testability transformation for transforming loop-assigned flags ISSN: 1049-331X DOI: 10.1145/2000791.2000796 Keywords: Empirical evaluation Evolutionary testing Testability transformation Testing and debugging Abstract: Search-Based Testing is a widely studied technique for automatically generating test inputs, with the aim of reducing the cost of software engineering activities that rely upon testing. However, search-based approaches degenerate to random testing in the presence of flag variables, because flags create spikes and plateaux in the fitness landscape. Both these features are known to denote hard optimization problems for all search-based optimization techniques. Several authors have studied flag removal transformations and fitness function refinements to address the issue of flags, but the problem of loop-assigned flags remains unsolved. This paper introduces a testability transformation along with a tool that transforms programs with loop-assigned flags into flag-free equivalents, so that existing search-based test data generation approaches can successfully be applied. The paper presents the results of an empirical study that demonstrates the effectiveness and efficiency of the testability transformation on programs including those made up of open source and industrial production code, as well as test data generation problems specifically created to denote hard optimization problems. Notes: Software Testing Tools > Test Evaluation Tools Reference Type: Journal Article Record Number: 324 Author: Dawei Qi, Abhik Roychoudhury, Zhenkai Liang and K. Vaswani Year: 2012 Title: An approach to debugging evolving programs Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 3

Pages: 1-29 Short Title: An approach to debugging evolving programs ISSN: 1049-331X DOI: 10.1145/2211616.2211622 Keywords: Debuggers Debugging Aids Experimentation Reliability Software debugging Software evolution Symbolic execution Abstract: Bugs in programs are often introduced when programs evolve from a stable version to a new version. In this article, we propose a new approach called DARWIN for automatically finding potential root causes of such bugs. Given two programs, a reference program and a modified program, and an input that fails on the modified program, our approach uses symbolic execution to automatically synthesize a new input that (a) is very similar to the failing input and (b) does not fail. We find the potential cause(s) of failure by comparing control-flow behavior of the passing and failing inputs and identifying code fragments where the control flows diverge. A notable feature of our approach is that it handles hard-to-explain bugs, like code missing errors, by pointing to code in the reference program. We have implemented this approach and conducted experiments using several real-world applications, such as the Apache Web server, libPNG (a library for manipulating PNG images), and TCPflow (a program for displaying data sent through TCP connections). In each of these applications, DARWIN was able to localize bugs with high accuracy. Even though these applications contain several thousands of lines of code, DARWIN could usually narrow down the potential root cause(s) to less than ten lines. In addition, we find that the inputs synthesized by DARWIN provide additional value by revealing other undiscovered errors. Reference Type: Journal Article Record Number: 325 Author: Dawei Qi, Abhik Roychoudhury, Zhenkai Liang and K. Vaswani Year: 2012 Title: DARWIN: An Approach for Debugging Evolving Programs Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 3 Short Title: DARWIN: An Approach for Debugging Evolving Programs ISSN: 1049-331X DOI: 10.1145/2211616.2211622 Keywords: debuggers debugging aids

experimentation reliability software debugging software evolution symbolic execution Abstract: Bugs in programs are often introduced when programs evolve from a stable version to a new version. In this article, we propose a new approach called DARWIN for automatically finding potential root causes of such bugs. Given two programs, a reference program and a modified program, and an input that fails on the modified program, our approach uses symbolic execution to automatically synthesize a new input that (a) is very similar to the failing input and (b) does not fail. We find the potential cause(s) of failure by comparing control-flow behavior of the passing and failing inputs and identifying code fragments where the control flows diverge. A notable feature of our approach is that it handles hard-to-explain bugs, like code missing errors, by pointing to code in the reference program. We have implemented this approach and conducted experiments using several real-world applications, such as the Apache Web server, libPNG (a library for manipulating PNG images), and TCPflow (a program for displaying data sent through TCP connections). In each of these applications, DARWIN was able to localize bugs with high accuracy. Even though these applications contain several thousands of lines of code, DARWIN could usually narrow down the potential root cause(s) to less than ten lines. In addition, we find that the inputs synthesized by DARWIN provide additional value by revealing other undiscovered errors. Reference Type: Journal Article Record Number: 17 Author: N. Desai, A. K. Chopra and M. P. Singh Year: 2009 Title: Amoeba: A methodology for modeling and evolving cross-organizational business processes Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 2 Pages: 1-45 Short Title: Amoeba: A methodology for modeling and evolving cross-organizational business processes ISSN: 1049-331X DOI: 10.1145/1571629.1571632 Legal Note: 1571632 Abstract: Business service engagements involve processes that extend across two or more autonomous organizations. Because of regulatory and competitive reasons, requirements for cross-organizational business processes often evolve in subtle ways. The changes may concern the business transactions supported by a process, the organizational structure of the parties participating in the process, or the contextual policies that apply to the process. Current business process modeling approaches

handle such changes in an ad hoc manner, and lack a principled means for determining what needs to be changed and where. Cross-organizational settings exacerbate the shortcomings of traditional approaches because changes in one organization can potentially affect the workings of another. This article describes Amoeba, a methodology for business processes that is based on business protocols. Protocols capture the business meaning of interactions among autonomous parties via commitments. Amoeba includes guidelines for (1) specifying cross-organizational processes using business protocols, and (2) handling the evolution of requirements via a novel application of protocol composition. This article evaluates Amoeba using enhancements of a real-life business scenario of auto-insurance claim processing, and an aerospace case study. Reference Type: Journal Article Record Number: 316 Author: Devdatta Kulkarni, Tanvir Ahmed and A. Tripathi Year: 2012 Title: A Generative Programming Framework for Context-Aware CSCW Applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 2 Pages: 1-35 Short Title: A Generative Programming Framework for Context-Aware CSCW Applications ISSN: 1049-331X DOI: 10.1145/2089116.2089121 Keywords: Access controls Context-aware computing design Abstract: We present a programming framework based on the paradigm of generative application development for building context-aware collaborative applications. In this approach, context-aware applications are implemented using a domain-specific design model, and their execution environment is generated and maintained by the middleware. The key features of this design model include support for context-based service discovery and binding, context-based access control, context-based multiuser coordination, and context-triggered automated task executions. The middleware uses the technique of policy-based specialization for generating application-specific middleware components from the generic middleware components. Through a case-study example, we demonstrate this approach and present the evaluations of the design model and the middleware. Reference Type: Journal Article Record Number: 7 Author: E. Duala-Ekoko and M. P. Robillard Year: 2010

Title: Clone region descriptors: Representing and tracking duplication in source code Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 1 Pages: 1-31 Short Title: Clone region descriptors: Representing and tracking duplication in source code ISSN: 1049-331X DOI: 10.1145/1767751.1767754 Legal Note: 1767754 Abstract: Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent of the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical information. We present our definition of CRDs, and describe a clone tracking system capable of producing CRDs from the output of different clone detection tools, notifying developers of modifications to clone regions, and supporting updates to the documented clone relationships. We evaluated the performance and usefulness of our approach across three clone detection tools and five subject systems, and the results indicate that CRDs are a practical and robust representation for tracking code clones in evolving software. Notes: Software Maintenance Tools > Comprehension Tools Reference Type: Journal Article Record Number: 116 Author: L. Durante, R. Sisto and A. Valenzano Year: 2003 Title: Automatic testing equivalence verification of spi calculus specifications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 2 Pages: 222-284 Short Title: Automatic testing equivalence verification of spi calculus specifications ISSN: 1049-331X DOI: 10.1145/941566.941570 Legal Note: 941570 Abstract: Testing equivalence is a powerful means for expressing the security properties of cryptographic protocols, but its formal verification is a difficult task because of the quantification over contexts on which it is based. Previous articles have provided insights into using theorem-proving for the verification of testing equivalence of spi calculus specifications. This article addresses the same verification problem, but uses a

state exploration approach. The verification technique is based on the definition of an environment-sensitive, labeled transition system representing a spi calculus specification. Trace equivalence defined on such a transition system coincides with testing equivalence. Symbolic techniques are used to keep the set of traces finite. If a difference in the traces of two spi descriptions (typically a specification and the corresponding implementation of a protocol) is found, it can be used to automatically build the spi calculus description of an intruder process that can exploit the difference. Reference Type: Journal Article Record Number: 98 Author: M. B. Dwyer, L. A. Clarke, J. M. Cobleigh and G. Naumovich Year: 2004 Title: Flow analysis for verifying properties of concurrent software systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 4 Pages: 359-430 Short Title: Flow analysis for verifying properties of concurrent software systems ISSN: 1049-331X DOI: 10.1145/1040291.1040292 Legal Note: 1040292 Abstract: This article describes FLAVERS, a finite-state verification approach that analyzes whether concurrent systems satisfy user-defined, behavioral properties. FLAVERS automatically creates a compact, event-based model of the system that supports efficient dataflow analysis. FLAVERS achieves this efficiency at the cost of precision. Analysts, however, can improve the precision of analysis results by selectively and judiciously incorporating additional semantic information into an analysis. We report on an empirical study of the performance of the FLAVERS/Ada toolset applied to a collection of multitasking Ada systems. This study indicates that sufficient precision for proving system properties can usually be achieved and that the cost for such analysis typically grows as a low-order polynomial in the size of the system. Reference Type: Journal Article Record Number: 4 Author: R. Dyer and H. Rajan Year: 2010 Title: Supporting dynamic aspect-oriented features Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 2 Pages: 1-34 Short Title: Supporting dynamic aspect-oriented features ISSN: 1049-331X DOI: 10.1145/1824760.1824764

Legal Note: 1824764 Abstract: Dynamic aspect-oriented (AO) features have important software engineering benefits such as allowing unanticipated software evolution and maintenance. It is thus important to efficiently support these features in language implementations. Current implementations incur unnecessary design-time and runtime overhead due to the lack of support in underlying intermediate language (IL) models. To address this problem, we present a flexible and dynamic IL model that we call Nu. The Nu model provides a higher level of abstraction compared to traditional object-oriented ILs, making it easier to efficiently support dynamic AO features. We demonstrate these benefits by providing an industrial-strength VM implementation for Nu, by showing translation strategies from dynamic source-level constructs to Nu and by analyzing the performance of the resulting IL code. Notes: Software Construction Tools > Compilers and Code Generators Reference Type: Journal Article Record Number: 125 Author: A. Egyed Year: 2002 Title: Automated abstraction of class diagrams Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 4 Pages: 449-491 Short Title: Automated abstraction of class diagrams ISSN: 1049-331X DOI: 10.1145/606612.606616 Legal Note: 606616 Abstract: Designers can easily become overwhelmed with details when dealing with large class diagrams. This article presents an approach for automated abstraction that allows designers to "zoom out" on class diagrams to investigate and reason about their bigger picture. The approach is based on a large number of abstraction rules that individually are not very powerful, but when used together, can abstract complex class structures quickly. This article presents those abstraction rules and an algorithm for applying them. The technique was validated on over a dozen models where it was shown to be well suited for model understanding, consistency checking, and reverse engineering. Reference Type: Journal Article Record Number: 38 Author: W. Emmerich, M. Aoyama and J. Sventek Year: 2008 Title: The impact of research on the development of middleware technology Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17

Issue: 4 Pages: 1-48 Short Title: The impact of research on the development of middleware technology ISSN: 1049-331X DOI: 10.1145/13487689.13487692 Legal Note: 13487692 Abstract: The middleware market represents a sizable segment of the overall Information and Communication Technology market. In 2005, the annual middleware license revenue was reported by Gartner to be in the region of $8.5 billion. In this article we address the question of whether research had any involvement in the creation of the technology that is being sold in this market. We attempt a scholarly discourse. We present the research method that we have applied to answer this question. We then present a brief introduction to the key middleware concepts that provide the foundation for this market. It would not be feasible to investigate any possible impact that research might have had. Instead we select a few very successful technologies that are representative of the middleware market as a whole and show the existence of impact of research results in the creation of these technologies. We investigate the origins of Web services middleware, distributed transaction processing middleware, message-oriented middleware, distributed object middleware and remote procedure call systems. For each of these technologies we are able to show ample influence of research and conclude that without the research conducted by PhD students and researchers in university computer science labs at Brown, CMU, Cambridge, Newcastle, MIT, Vrije, and University of Washington as well as research in industrial labs at APM, AT&T Bell Labs, DEC Systems Research, HP Labs, IBM Research, and Xerox PARC we would not have middleware technology in its current form. We summarise the article by distilling lessons that can be learnt from this evidenced impact for future technology transfer undertakings. Reference Type: Journal Article Record Number: 122 Author: ACM Transactions on Software Engineering and Methodology staff Year: 2002 Title: Obituary Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 4 Pages: 385-385 Short Title: Obituary ISSN: 1049-331X DOI: 10.1145/606612.606613 Legal Note: 606613 Reference Type: Journal Article Record Number: 121

Author: ACM Transactions on Software Engineering and Methodology staff Year: 2003 Title: Reviewers 2002 Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 1 Pages: 105-105 Short Title: Reviewers 2002 ISSN: 1049-331X DOI: 10.1145/839268.839273 Legal Note: 839273 Reference Type: Journal Article Record Number: 94 Author: ACM Transactions on Software Engineering and Methodology staff Year: 2005 Title: Acknowledgement of referees 2004 Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 2 Pages: 246-246 Short Title: Acknowledgement of referees 2004 ISSN: 1049-331X DOI: 10.1145/1061254.1061259 Legal Note: 1061259 Reference Type: Journal Article Record Number: 92 Author: M. Erwig and Z. Fu Year: 2005 Title: Software reuse for scientific computing through program generation Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 2 Pages: 168-198 Short Title: Software reuse for scientific computing through program generation ISSN: 1049-331X DOI: 10.1145/1061254.1061257 Legal Note: 1061257 Abstract: We present a program-generation approach to address a software-reuse challenge in the area of scientific computing. More specifically, we describe the design of a program generator for the specification of subroutines that can be generic in the dimensions of arrays, parameter lists, and called subroutines. We describe the application of that approach to a real-world problem in scientific computing which

requires the generic description of inverse ocean modeling tools. In addition to a compiler that can transform generic specifications into efficient Fortran code for models, we have also developed a type system that can identify possible errors already in the specifications. This type system is important for the acceptance of the program generator among scientists because it prevents a large class of errors in the generated code. Reference Type: Journal Article Record Number: 80 Author: R. Eshuis Year: 2006 Title: Symbolic model checking of UML activity diagrams Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 1 Pages: 1-38 Short Title: Symbolic model checking of UML activity diagrams ISSN: 1049-331X DOI: 10.1145/1125808.1125809 Legal Note: 1125809 Abstract: Two translations from activity diagrams to the input language of NuSMV, a symbolic model verifier, are presented. Both translations map an activity diagram into a finite state machine and are inspired by existing statechart semantics. The requirements-level translation defines state machines that can be efficiently verified, but are a bit unrealistic since they assume the perfect synchrony hypothesis. The implementation-level translation defines state machines that cannot be verified so efficiently, but that are more realistic since they do not use the perfect synchrony hypothesis. To justify the use of the requirements-level translation, we show that for a large class of activity diagrams and certain properties, both translations are equivalent: regardless of which translation is used, the outcome of model checking is the same. Moreover, for linear stuttering-closed properties, the implementation-level translation is equivalent to a slightly modified version of the requirements-level translation. We use the two translations to model check data integrity constraints for an activity diagram and a set of class diagrams that specify the data manipulated in the activities. Both translations have been implemented in two tools. We discuss our experiences in applying both translations to model check some large example activity diagrams. Reference Type: Journal Article Record Number: 84 Author: J. Estublier, D. Leblang, A. van der Hoek, R. Conradi, G. Clemm, W. Tichy and D. Wiborg-Weber Year: 2005 Title: Impact of software engineering research on the practice of software configuration management

Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 4 Pages: 383-430 Short Title: Impact of software engineering research on the practice of software configuration management ISSN: 1049-331X DOI: 10.1145/1101815.1101817 Legal Note: 1101817 Abstract: Software Configuration Management (SCM) is an important discipline in professional software development and maintenance. The importance of SCM has increased as programs have become larger, longer-lasting, and more mission- and life-critical. This article discusses the evolution of SCM technology from the early days of software development to the present, with a particular emphasis on the impact that university and industrial research has had along the way. Based on an analysis of the publication history and evolution in functionality of the available SCM systems, we trace the critical ideas in the field from their early inception to their eventual maturation in commercially and freely available SCM systems. In doing so, this article creates a detailed record of the critical value of SCM research and illustrates how research results have shaped the functionality of today's SCM systems. Reference Type: Journal Article Record Number: 129 Author: M. Felder and M. Pezzè Year: 2002 Title: A formal design notation for real-time systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 2 Pages: 149-190 Short Title: A formal design notation for real-time systems ISSN: 1049-331X DOI: 10.1145/505145.505146 Legal Note: 505146 Abstract: The development of real-time systems is based on a variety of different methods and notations. Despite the purported benefits of formal methods, informal techniques still play a predominant role in current industrial practice. Formal and informal methods have been combined in various ways to smoothly introduce formal methods in industrial practice. The combination of real-time structured analysis (SA-RT) with Petri nets is among the most popular approaches, but has been applied only to requirements specifications. This paper extends SA-RT to specifications of the detailed design of embedded real-time systems, and combines the proposed notation with Petri nets.

Reference Type: Journal Article Record Number: 118 Author: A. P. Felty and K. S. Namjoshi Year: 2003 Title: Feature specification and automated conflict detection Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 1 Pages: 3-27 Short Title: Feature specification and automated conflict detection ISSN: 1049-331X DOI: 10.1145/839268.839270 Legal Note: 839270 Abstract: Large software systems, especially in the telecommunications field, are often specified as a collection of features. We present a formal specification language for describing features, and a method of automatically detecting conflicts ("undesirable interactions") amongst features at the specification stage. Conflict detection at this early stage can help prevent costly and time-consuming problem fixes during implementation. Features are specified using temporal logic; two features conflict essentially if their specifications are mutually inconsistent under axioms about the underlying system behavior. We show how this inconsistency check may be performed automatically with existing model checking tools. In addition, the model checking tools can be used to provide witness scenarios, both when two features conflict as well as when the features are mutually consistent. Both types of witnesses are useful for refining the specifications. We have implemented a conflict detection tool, FIX (Feature Interaction eXtractor), which uses the model checker COSPAN for the inconsistency check. We describe our experience in applying this tool to a collection of telecommunications feature specifications obtained from the Telcordia (Bellcore) standards. Using FIX, we were able to detect most known interactions and some new ones, fully automatically, in a few hours' processing time. Reference Type: Journal Article Record Number: 110 Author: G.-L. Ferrari, S. Gnesi, U. Montanari and M. Pistore Year: 2003 Title: A model-checking verification environment for mobile processes Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 4 Pages: 440-473 Short Title: A model-checking verification environment for mobile processes ISSN: 1049-331X DOI: 10.1145/990010.990013 Legal Note: 990013 Abstract: This article presents a semantic-based environment for reasoning about the

behavior of mobile systems. The verification environment, called HAL, exploits a novel automata-like model that allows finite-state verification of systems specified in the π-calculus. The HAL system is able to interface with several efficient toolkits (e.g. model-checkers) to determine whether or not certain properties hold for a given specification. We report experimental results on some case studies. Reference Type: Journal Article Record Number: 49 Author: S. J. Fink, E. Yahav, N. Dor, G. Ramalingam and E. Geay Year: 2008 Title: Effective typestate verification in the presence of aliasing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-34 Short Title: Effective typestate verification in the presence of aliasing ISSN: 1049-331X DOI: 10.1145/1348250.1348255 Legal Note: 1348255 Abstract: This article addresses the challenge of sound typestate verification, with acceptable precision, for real-world Java programs. We present a novel framework for verification of typestate properties, including several new techniques to precisely treat aliases without undue performance costs. In particular, we present a flow-sensitive, context-sensitive, integrated verifier that utilizes a parametric abstract domain combining typestate and aliasing information. To scale to real programs without compromising precision, we present a staged verification system in which faster verifiers run as early stages which reduce the workload for later, more precise, stages. We have evaluated our framework on a number of real Java programs, checking correct API usage for various Java standard libraries. The results show that our approach scales to hundreds of thousands of lines of code, and verifies correctness for 93% of the potential points of failure. Reference Type: Journal Article Record Number: 86 Author: M. F. Frias, C. G. López Pombo, G. A. Baum, N. M. Aguirre and T. S. E. Maibaum Year: 2005 Title: Reasoning about static and dynamic properties in alloy: A purely relational approach Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 4 Pages: 478-526

Short Title: Reasoning about static and dynamic properties in alloy: A purely relational approach ISSN: 1049-331X DOI: 10.1145/1101815.1101819 Legal Note: 1101819 Abstract: We study a number of restrictions associated with the first-order relational specification language Alloy. The main shortcomings we address are: (1) the lack of a complete calculus for deduction in Alloy's underlying formalism, the so-called relational logic; and (2) the inappropriateness of the Alloy language for describing (and analyzing) properties regarding execution traces. The first of these points was not regarded as an important issue during the genesis of Alloy, and therefore has not been taken into account in the design of the relational logic. The second point is a consequence of the static nature of Alloy specifications, and has been partly solved by the developers of Alloy; however, their proposed solution requires a complicated and unstructured characterization of executions. We propose to overcome the first problem by translating relational logic to the equational calculus of fork algebras. Fork algebras provide a purely relational formalism close to Alloy, which possesses a complete equational deductive calculus. Regarding the second problem, we propose to extend Alloy by adding actions. These actions, unlike Alloy functions, do modify the state. Much the same as programs in dynamic logic, actions can be sequentially composed and iterated, allowing them to state properties of execution traces at an appropriate level of abstraction. Since automatic analysis is one of Alloy's main features, and this article aims to provide a deductive calculus for Alloy, we show that (1) the extension proposed here does not sacrifice the possibility of using SAT-solving techniques for automated analysis, and (2) the complete calculus for the relational logic extends straightforwardly to a complete calculus for the extension of Alloy. Reference Type: Journal Article Record Number: 55 Author: M. F. Frias, C. G. L. Pombo, J. P. Galeotti and N. M. Aguirre Year: 2007 Title: Efficient Analysis of DynAlloy Specifications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 1 Pages: 1-34 Short Title: Efficient Analysis of DynAlloy Specifications ISSN: 1049-331X DOI: 10.1145/1314493.1314497 Legal Note: 1314497 Abstract: DynAlloy is an extension of Alloy to support the definition of actions and the specification of assertions regarding execution traces. In this article we show how we can extend the Alloy tool so that DynAlloy specifications can be automatically analyzed in an efficient way. We also demonstrate that DynAlloy's semantics allows for a sound technique that we call program atomization, which improves the analyzability of

properties regarding execution traces by considering certain programs as atomic steps in a trace. We present the foundations, case studies, and empirical results indicating that the analysis of DynAlloy specifications can be performed efficiently. Reference Type: Journal Article Record Number: 65 Author: A. Gamatié, T. Gautier, P. Le Guernic and J.-P. Talpin Year: 2007 Title: Polychronous design of embedded real-time applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 2 Pages: 9 Short Title: Polychronous design of embedded real-time applications ISSN: 1049-331X DOI: 10.1145/1217295.1217298 Legal Note: 1217298 Abstract: Embedded real-time systems consist of hardware and software that controls the behavior of a device or plant. They are ubiquitous in today's technological landscape and found in domains such as telecommunications, nuclear power, avionics, and medical technology. These systems are difficult to design and build because they must satisfy both functional and timing requirements to work correctly in their intended environment. Furthermore, embedded systems are often critical systems, where failure can lead to loss of life, loss of mission, or serious financial consequences. Because of the difficulty in creating these systems and the consequences of failure, they require rigorous and reliable design approaches. The synchronous approach is one possible answer to this demand. Its mathematical basis provides formal concepts that favor the trusted design of embedded real-time systems. The multiclock or polychronous model stands out from other synchronous specification models by its capability to enable the design of systems where each component holds its own activation clock as well as single-clocked systems in a uniform way. A great advantage is its convenience for component-based design approaches that enable modular development of increasingly complex modern systems. The expressiveness of its underlying semantics allows dealing with several issues of real-time design. This article exposes insights gained during recent years from the design of real-time applications within the polychronous framework. In particular, it shows promising results about the design of applications from the avionics domain. Reference Type: Journal Article Record Number: 139 Author: A. Gargantini and A. Morzenti Year: 2001

Title: Automated deductive requirements analysis of critical systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 3 Pages: 255-307 Short Title: Automated deductive requirements analysis of critical systems ISSN: 1049-331X DOI: 10.1145/383876.383877 Legal Note: 383877 Abstract: We advocate the need for automated support to System Requirement Analysis in the development of time- and safety-critical computer-based systems. To this end we pursue an approach based on deductive analysis: high-level, real-world entities and notions, such as events, states, finite variability, cause-effect relations, are modeled through the temporal logic TRIO, and the resulting deductive system is implemented by means of the theorem prover PVS. Throughout the paper, the constructs and features of the deductive system are illustrated and validated by applying them to the well-known example of the Generalized Railway Crossing. Reference Type: Journal Article Record Number: 43 Author: C. Gencel and O. Demirors Year: 2008 Title: Functional size measurement revisited Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 3 Pages: 1-36 Short Title: Functional size measurement revisited ISSN: 1049-331X DOI: 10.1145/1363102.1363106 Legal Note: 1363106 Abstract: There are various approaches to software size measurement. Among these, the metrics and methods based on measuring the functionality attribute have become widely used since the original method was introduced in 1979. Although functional size measurement methods have gone a long way, they still provide challenges for software managers. This article identifies improvement opportunities based on empirical studies we performed on ongoing projects. We also compare our findings with the extended dataset provided by the International Software Benchmarking Standards Group (ISBSG). Reference Type: Journal Article Record Number: 88 Author: V. Gervasi and D. Zowghi Year: 2005

Title: Reasoning about inconsistencies in natural language requirements Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 3 Pages: 277-330 Short Title: Reasoning about inconsistencies in natural language requirements ISSN: 1049-331X DOI: 10.1145/1072997.1072999 Legal Note: 1072999 Abstract: The use of logic in identifying and analyzing inconsistency in requirements from multiple stakeholders has been found to be effective in a number of studies. Nonmonotonic logic is a theoretically well-founded formalism that is especially suited for supporting the evolution of requirements. However, direct use of logic for expressing requirements and discussing them with stakeholders poses serious usability problems, since in most cases stakeholders cannot be expected to be fluent with formal logic. In this article, we explore the integration of natural language parsing techniques with default reasoning to overcome these difficulties. We also propose a method for automatically discovering inconsistencies in the requirements from multiple stakeholders, using both theorem-proving and model-checking techniques, and show how to deal with them in a formal manner. These techniques were implemented and tested in a prototype tool called CARL. The effectiveness of the techniques and of the tool are illustrated by a classic example involving conflicting requirements from multiple stakeholders. Reference Type: Journal Article Record Number: 67 Author: C. Ghezzi Year: 2007 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 1 Pages: 2 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1189748.1189750 Legal Note: 1189750 Reference Type: Journal Article Record Number: 21 Author: A. Goel, A. Roychoudhury and P. S. Thiagarajan Year: 2009 Title: Interacting process classes Journal: ACM Trans. Softw. Eng. Methodol.

Volume: 18 Issue: 4 Pages: 1-47 Short Title: Interacting process classes ISSN: 1049-331X DOI: 10.1145/1538942.1538943 Legal Note: 1538943 Abstract: Many reactive control systems consist of classes of active objects involving both intraclass interactions (i.e., objects belonging to the same class interacting with each other) and interclass interactions. Such reactive control systems appear in domains such as telecommunication, transportation and avionics. In this article, we propose a modeling and simulation technique for interacting process classes. Our modeling style uses standard notations to capture behavior. In particular, the control flow of a process class is captured by a labeled transition system, unit interactions between process objects are described as transactions, and the structural relations are captured via class diagrams. The key feature of our approach is that our execution semantics leads to an abstract simulation technique which involves (i) grouping together active objects into equivalence classes according to their potential futures, and (ii) keeping track of the number of objects in an equivalence class rather than their identities. Our simulation strategy is both time and memory efficient and we demonstrate this on well-studied nontrivial examples of reactive systems. We also present a case study involving a weather-update controller from NASA to demonstrate the use of our simulator for debugging realistic designs. Notes: Software Design Tools Reference Type: Journal Article Record Number: 143 Author: T. L. Graves, M. J. Harrold, J.-M. Kim, A. Porter and G. Rothermel Year: 2001 Title: An empirical study of regression test selection techniques Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 2 Pages: 184-208 Short Title: An empirical study of regression test selection techniques ISSN: 1049-331X DOI: 10.1145/367008.367020 Legal Note: 367020 Abstract: Regression testing is the process of validating modified software to detect whether new errors have been introduced into previously tested code and to provide confidence that modifications are correct. Since regression testing is an expensive process, researchers have proposed regression test selection techniques as a way to reduce some of this expense. These techniques attempt to reduce costs by selecting and running only a subset of the test cases in a program's existing test suite. Although there have been some analytical and empirical evaluations of individual techniques, to

our knowledge only one comparative study, focusing on one aspect of two of these techniques, has been reported in the literature. We conducted an experiment to examine the relative costs and benefits of several regression test selection techniques. The experiment examined five techniques for reusing test cases, focusing on their relative abilities to reduce regression testing effort and uncover faults in modified programs. Our results highlight several differences between the techniques, and expose essential trade-offs that should be considered when choosing a technique for practical application. Reference Type: Journal Article Record Number: 39 Author: T. M. Gruschke and M. Jørgensen Year: 2008 Title: The role of outcome feedback in improving the uncertainty assessment of software development effort estimates Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 4 Pages: 1-35 Short Title: The role of outcome feedback in improving the uncertainty assessment of software development effort estimates ISSN: 1049-331X DOI: 10.1145/13487689.13487693 Legal Note: 13487693 Abstract: Previous studies report that software developers are over-confident in the accuracy of their effort estimates. Aim: This study investigates the role of outcome feedback, that is, feedback about the discrepancy between the estimated and the actual effort, in improving the uncertainty assessments. Method: We conducted two in-depth empirical studies on uncertainty assessment learning. Study 1 included five student developers and Study 2, 10 software professionals. In each study the developers repeatedly assessed the uncertainty of their effort estimates of a programming task, solved the task, and received estimation accuracy outcome feedback. Results: We found that most, but not all, developers were initially over-confident in the accuracy of their effort estimates and remained over-confident in spite of repeated and timely outcome feedback. One important, but not sufficient, condition for improvement based on outcome feedback seems to be the use of explicitly formulated, instead of purely intuition-based, uncertainty assessment strategies. Reference Type: Journal Article Record Number: 26 Author: T. Hall, N. Baddoo, S. Beecham, H. Robinson and H. Sharp Year: 2009 Title: A systematic review of theory use in studies investigating the motivations of software engineers

Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 3 Pages: 1-29 Short Title: A systematic review of theory use in studies investigating the motivations of software engineers ISSN: 1049-331X DOI: 10.1145/1525880.1525883 Legal Note: 1525883 Abstract: Motivated software engineers make a critical contribution to delivering successful software systems. Understanding the motivations of software engineers and the impact of motivation on software engineering outcomes could significantly affect the industry's ability to deliver good quality software systems. Understanding the motivations of people generally in relation to their work is underpinned by eight classic motivation theories from the social sciences. We would expect these classic motivation theories to play an important role in developing a rigorous understanding of the specific motivations of software engineers. In this article we investigate how this theoretical basis has been exploited in previous studies of software engineering. We analyzed 92 studies of motivation in software engineering that were published in the literature between 1980 and 2006. Our main findings are that many studies of software engineers' motivations are not explicitly underpinned by reference to the classic motivation theories. Furthermore, the findings presented in these studies are often not explicitly interpreted in terms of those theories, despite the fact that in many cases there is a relationship between those findings and the theories. Our conclusion is that although there has been a great deal of previous work looking at motivation in software engineering, the lack of reference to classic theories of motivation means that the current body of work in the area is weakened and our understanding of motivation in software engineering is not as rigorous as it may at first appear. This weakness in the current state of knowledge highlights important areas for future researchers to contribute towards developing a rigorous and usable body of knowledge in motivating software engineers. Reference Type: Journal Article Record Number: 28 Author: D. Hamlet Year: 2009 Title: Tools and experiments supporting a testing-based theory of component composition Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 3 Pages: 1-41 Short Title: Tools and experiments supporting a testing-based theory of component composition ISSN: 1049-331X

DOI: 10.1145/1525880.1525885 Legal Note: 1525885 Abstract: Development of software using off-the-shelf components seems to offer a chance for improving product quality and developer productivity. This article reviews a foundational testing-based theory of component composition, describes tools that implement the theory, and presents experiments with functional and nonfunctional component/system properties that validate the theory and illuminate issues in component composition. The context for this work is an ideal form of Component-Based Software Development (CBSD) supported by tools. Component developers describe their components by measuring approximations to functional and nonfunctional behavior on a finite collection of subdomains. Systems designers describe an application-system structure by the component connections that form it. From measured component descriptions and a system structure, a CAD tool synthesizes the system properties, predicting how the system will behave. The system is not built, nor are any test executions performed. Neither the component sources nor executables are needed by systems designers. From CAD calculations a designer can learn (approximately) anything that could be learned by testing an actual system implementation. The CAD tool is often more efficient than it would be to assemble and execute an actual system. Using tools that support an ideal separation between component- and system development, experiments were conducted to investigate two related questions: (1) To what extent can unit (that is, component) testing replace system testing? (2) What properties of software and subdomains influence the quality of subdomain testing? Reference Type: Journal Article Record Number: 42 Author: J. Henkel, C. Reichenbach and A. Diwan Year: 2008 Title: Developing and debugging algebraic specifications for Java classes Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 3 Pages: 1-37 Short Title: Developing and debugging algebraic specifications for Java classes ISSN: 1049-331X DOI: 10.1145/1363102.1363105 Legal Note: 1363105 Abstract: Modern programs make extensive use of reusable software libraries. For example, a study of a number of large Java applications shows that between 17% and 30% of the classes in those applications use container classes defined in the java.util package. Given this extensive code reuse in Java programs, it is important for the interfaces of reusable classes to be well documented. An interface is well documented if it satisfies the following requirements: (1) the documentation completely describes how

to use the interface; (2) the documentation is clear; (3) the documentation is unambiguous; and (4) any deviation between the documentation and the code is machine detectable. Unfortunately, documentation in natural language, which is the norm, does not satisfy the above requirements. Formal specifications can satisfy them but they are difficult to develop, requiring significant effort on the part of programmers. To address the practical difficulties with formal specifications, we describe and evaluate a tool to help programmers write and debug algebraic specifications. Given an algebraic specification of a class, our interpreter generates a prototype that can be used within an application like a regular Java class. When running an application that uses the prototype, the interpreter prints error messages that tell the developer in which way the specification is incomplete or inconsistent with a hand-coded implementation of the class. We use case studies to demonstrate the usefulness of our system. Notes: Software Requirements Tools > Requirements Modeling Tools Reference Type: Journal Article Record Number: 124 Author: R. M. Hierons Year: 2002 Title: Comparing test sets and criteria in the presence of test hypotheses and fault domains Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 4 Pages: 427-448 Short Title: Comparing test sets and criteria in the presence of test hypotheses and fault domains ISSN: 1049-331X DOI: 10.1145/606612.606615 Legal Note: 606615 Abstract: A number of authors have considered the problem of comparing test sets and criteria. Ideally test sets are compared using a preorder with the property that test set T1 is at least as strong as T2 if whenever T2 determines that an implementation p is faulty, T1 will also determine that p is faulty. This notion can be extended to test criteria. However, it has been noted that very few test sets and criteria are comparable under such an ordering; instead orderings are based on weaker properties such as subsumes. This article explores an alternative approach, in which comparisons are made in the presence of a test hypothesis or fault domain. This approach allows strong statements about fault detecting ability to be made and yet for a number of test sets and criteria to be comparable. It may also drive incremental test generation. Reference Type: Journal Article Record Number: 74 Author: R. M. Hierons

Year: 2006 Title: Avoiding coincidental correctness in boundary value analysis Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 3 Pages: 227-241 Short Title: Avoiding coincidental correctness in boundary value analysis ISSN: 1049-331X DOI: 10.1145/1151695.1151696 Legal Note: 1151696 Abstract: In partition analysis we divide the input domain to form subdomains on which the system's behaviour should be uniform. Boundary value analysis produces test inputs near each subdomain's boundaries to find failures caused by incorrect implementation of the boundaries. However, boundary value analysis can be adversely affected by coincidental correctness---the system produces the expected output, but for the wrong reason. This article shows how boundary value analysis can be adapted in order to reduce the likelihood of coincidental correctness. The main contribution is to cases of automated test data generation in which we cannot rely on the expertise of a tester. Reference Type: Journal Article Record Number: 22 Author: R. M. Hierons Year: 2009 Title: Verdict functions in testing with a fault domain or test hypotheses Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 4 Pages: 1-19 Short Title: Verdict functions in testing with a fault domain or test hypotheses ISSN: 1049-331X DOI: 10.1145/1538942.1538944 Legal Note: 1538944 Abstract: In state-based testing, it is common to include verdicts within test cases, the result of the test case being the verdict reached by the test run. In addition, approaches that reason about test effectiveness or produce tests that are guaranteed to find certain classes of faults are often based on either a fault domain or a set of test hypotheses. This article considers how the presence of a fault domain or test hypotheses affects our notion of a test verdict. The analysis reveals the need for new verdicts that provide more information than the current verdicts and for verdict functions that return a verdict based on a set of test runs rather than a single test run. The concepts are illustrated in the contexts of testing from a nondeterministic finite state machine and the testing of a datatype specified using an algebraic specification language but are potentially relevant whenever fault domains or test hypotheses are used.

Reference Type: Journal Article Record Number: 31 Author: S. S. Huang, D. Zook and Y. Smaragdakis Year: 2008 Title: Domain-specific languages and program generation with meta-AspectJ Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 2 Pages: 1-32 Short Title: Domain-specific languages and program generation with meta-AspectJ ISSN: 1049-331X DOI: 10.1145/1416563.1416566 Legal Note: 1416566 Abstract: Meta-AspectJ (MAJ) is a language for generating AspectJ programs using code templates. MAJ itself is an extension of Java, so users can interleave arbitrary Java code with AspectJ code templates. MAJ is a structured metaprogramming tool: a well-typed generator implies a syntactically correct generated program. MAJ promotes a methodology that combines aspect-oriented and generative programming. A valuable application is in implementing small domain-specific language extensions as generators using unobtrusive annotations for syntax extension and AspectJ as a back-end. The advantages of this approach are twofold. First, the generator integrates into an existing software application much as a regular API or library, instead of as a language extension. Second, a mature language implementation is easy to achieve with little effort since AspectJ takes care of the low-level issues of interfacing with the base Java language. In addition to its practical value, MAJ offers valuable insights to metaprogramming tool designers. It is a mature metaprogramming tool for AspectJ (and, by extension, Java): a lot of emphasis has been placed on context-sensitive parsing and error reporting. As a result, MAJ minimizes the number of metaprogramming (quote/unquote) operators and uses type inference to reduce the need to remember type names for syntactic entities. Notes: Software Construction Tools > Compilers & Code Generators Reference Type: Journal Article Record Number: 132 Author: D. Jackson Year: 2002 Title: Alloy: a lightweight object modelling notation Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 2 Pages: 256-290 Short Title: Alloy: a lightweight object modelling notation ISSN: 1049-331X

DOI: 10.1145/505145.505149 Legal Note: 505149 Abstract: Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies. Reference Type: Journal Article Record Number: 36 Author: P. Jalote, B. Murphy and V. S. Sharma Year: 2008 Title: Post-release reliability growth in software products Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 4 Pages: 1-20 Short Title: Post-release reliability growth in software products ISSN: 1049-331X DOI: 10.1145/13487689.13487690 Legal Note: 13487690 Abstract: Most software reliability growth models work under the assumption that reliability of software grows due to the removal of bugs that cause failures. However, another phenomenon has often been observed: the failure rate of a software product following its release decreases with time even if no bugs are corrected. In this article we present a simple model to represent this phenomenon. We introduce the concept of initial transient failure rate of the product and assume that it decays with a factor per unit time thereby increasing the product reliability with time. When the transient failure rate decays away, the product displays a steady state failure rate. We discuss how the parameters in this model (initial transient failure rate, decay factor, and steady state failure rate) can be determined from the failure and sales data of a product. We also describe how, using the model, we can determine the product stabilization time, a product quality metric that describes how long it takes a product to reach close to its stable failure rate. We provide many examples where this model has been applied to data from released products. Reference Type: Journal Article Record Number: 322 Author: Jaymie Strecker and A. M. Memon Year: 2012 Title: Accounting for defect characteristics in evaluations of testing techniques Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21

Issue: 3 Pages: 1-43 Short Title: Accounting for defect characteristics in evaluations of testing techniques ISSN: 1049-331X DOI: 10.1145/2211616.2211620 Keywords: Defects Experimentation Faults GUI Testing measurement Product metrics Testing tools Abstract: As new software-testing techniques are developed, before they can achieve widespread acceptance, their effectiveness at detecting defects must be evaluated. The most common way of evaluating testing techniques is with empirical studies, in which one or more techniques are tried out on software with known defects. However, the defects used can affect the performance of the techniques. To complicate matters, it is not even clear how to effectively describe or characterize defects. To address these problems, this article describes an experiment architecture for empirically evaluating testing techniques which takes both defect and test-suite characteristics into account. As proof of concept, an experiment on GUI-testing techniques is conducted. It provides evidence that the defect characteristics proposed do help explain defect detection, at least for GUI testing, and it explores the relationship between the coverage of defective code and the detection of defects. Reference Type: Journal Article Record Number: 313 Author: Jehad Al Dallal and L. C. Briand Year: 2012 Title: A Precise Method-Method Interaction-Based Cohesion Metric for Object-Oriented Classes Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 2 Pages: 1-34 Short Title: A Precise Method-Method Interaction-Based Cohesion Metric for Object-Oriented Classes ISSN: 1049-331X DOI: 10.1145/2089116.2089118 Keywords: Attribute class, Cohesion distribution Maintenance, Enhancement Abstract: The building of highly cohesive classes is an important objective in object-oriented design. Class cohesion refers to the relatedness of the class members, and it

indicates one important aspect of the class design quality. A meaningful class cohesion metric helps object-oriented software developers detect class design weaknesses and refactor classes accordingly. Several class cohesion metrics have been proposed in the literature. Most of these metrics are applicable based on low-level design information such as attribute references in methods. Some of these metrics capture class cohesion by counting the number of method pairs that share common attributes. A few metrics measure cohesion more precisely by considering the degree of interaction, through attribute references, between each pair of methods. However, the formulas applied by these metrics to measure the degree of interaction cause the metrics to violate important mathematical properties, thus undermining their construct validity and leading to misleading cohesion measurement. In this paper, we propose a formula that precisely measures the degree of interaction between each pair of methods, and we use it as a basis to introduce a low-level design class cohesion metric (LSCC). We verify that the proposed formula does not cause the metric to violate important mathematical properties. In addition, we provide a mechanism to use this metric as a useful indicator for refactoring weakly cohesive classes, thus showing its usefulness in improving class cohesion. Finally, we empirically validate LSCC. Using four open source software systems and eleven cohesion metrics, we investigate the relationship between LSCC, other cohesion metrics, and fault occurrences in classes. Our results show that LSCC is one of three metrics that explains more accurately the presence of faults in classes. LSCC is the only one among the three metrics to comply with important mathematical properties, and statistical analysis shows it captures a measurement dimension of its own. This suggests that LSCC is a better alternative, when taking into account both theoretical and empirical results, as a measure to guide the refactoring of classes. From a more general standpoint, the results suggest that class quality, as measured in terms of fault occurrences, can be more accurately explained by cohesion metrics that account for the degree of interaction between each pair of methods. Reference Type: Journal Article Record Number: 231 Author: Jinjun Chen and Y. Yang Year: 2011 Title: Temporal dependency-based checkpoint selection for dynamic verification of temporal constraints in scientific workflow systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 3 Short Title: Temporal dependency-based checkpoint selection for dynamic verification of temporal constraints in scientific workflow systems ISSN: 1049-331X DOI: 10.1145/2000791.2000793 Keywords: Scientific workflows Software/program verification Temporal constraints Verification theory

Abstract: In a scientific workflow system, a checkpoint selection strategy is used to select checkpoints along scientific workflow execution for verifying temporal constraints so that we can identify any temporal violations and handle them in time in order to ensure overall temporal correctness of the execution that is often essential for the usefulness of execution results. The problem of existing representative strategies is that they do not differentiate temporal constraints as, once a checkpoint is selected, they verify all temporal constraints. However, such a checkpoint does not need to be taken for those constraints whose consistency can be deduced from others. The corresponding verification of such constraints is consequently unnecessary and can severely impact overall temporal verification efficiency while the efficiency determines whether temporal violations can be identified quickly for handling in time. To address the problem, in this article, we develop a new temporal-dependency based checkpoint selection strategy which can select checkpoints in accordance with different temporal constraints. With our strategy, the corresponding unnecessary verification can be avoided. The comparison and experimental simulation further demonstrate that our new strategy can improve the efficiency of overall temporal verification significantly over the existing representative strategies. Reference Type: Journal Article Record Number: 232 Author: John Anvik and G. C. Murphy Year: 2011 Title: Reducing the effort of bug report triage: Recommenders for development-oriented decisions Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 3 Short Title: Reducing the effort of bug report triage: Recommenders for developmentoriented decisions ISSN: 1049-331X DOI: 10.1145/2000791.2000794 Keywords: Classifier design and evaluation Configuration assistance Machine learning Abstract: A key collaborative hub for many software development projects is the bug report repository. Although its use can improve the software development process in a number of ways, reports added to the repository need to be triaged. A triager determines if a report is meaningful. Meaningful reports are then organized for integration into the project's development process. To assist triagers with their work, this article presents a machine learning approach to create recommenders that assist with a variety of decisions aimed at streamlining the development process. The recommenders created with this approach are accurate; for instance, recommenders for which developer to assign a report that we have created using this approach have a precision between 70% and 98% over five open source projects. As the configuration of a recommender for a particular project can require substantial effort and be time

consuming, we also present an approach to assist the configuration of such recommenders that significantly lowers the cost of putting a recommender in place for a project. We show that recommenders for which developer should fix a bug can be quickly configured with this approach and that the configured recommenders are within 15% precision of hand-tuned developer recommenders. Reference Type: Journal Article Record Number: 233 Author: Josh Dehlinger and R. R. Lutz Year: 2011 Title: Gaia-PL: A Product Line Engineering Approach for Efficiently Designing Multiagent Systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 4 Short Title: Gaia-PL: A Product Line Engineering Approach for Efficiently Designing Multiagent Systems ISSN: 1049-331X DOI: 10.1145/2000799.2000803 Keywords: Agent-oriented software engineering Design documentation Domain engineering methodologies Multiagent systems Software product line engineering Abstract: Agent-oriented software engineering (AOSE) has provided powerful and natural, high-level abstractions in which software developers can understand, model and develop complex, distributed systems. Yet, the realization of AOSE partially depends on whether agent-based software systems can achieve reductions in development time and cost similar to other reuse-conscious development methods. Specifically, AOSE does not adequately address requirements specifications as reusable assets. Software product line engineering is a reuse technology that supports the systematic development of a set of similar software systems through understanding, controlling, and managing their common, core characteristics and their differing variation points. In this article, we present an extension to the Gaia AOSE methodology, named Gaia-PL (Gaia-Product Line), for agent-based distributed software systems that enable requirements specifications to be easily reused. We show how our methodology uses a product line perspective to promote reuse in agent-based software systems early in the development life cycle so that software assets can be reused throughout system development and evolution. We also present results from an application to show how Gaia-PL provided reuse that reduced the design and development effort for a large, multiagent system. Reference Type: Journal Article Record Number: 59

Author: K. Kapoor and J. P. Bowen Year: 2007 Title: Test conditions for fault classes in Boolean specifications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 3 Pages: 10 Short Title: Test conditions for fault classes in Boolean specifications ISSN: 1049-331X DOI: 10.1145/1243987.1243988 Legal Note: 1243988 Abstract: Fault-based testing of software checks the software implementation for a set of faults. Two previous papers on fault-based testing [Kuhn 1999; Tsuchiya and Kikuno 2002] represent the required behavior of the software as a Boolean specification represented in Disjunctive Normal Form (DNF) and then show that faults may be organized in a hierarchy. This article extends these results by identifying necessary and sufficient conditions for fault-based testing. Unlike previous solutions, the formal analysis used to derive these conditions imposes no restrictions (such as DNF) on the form of the Boolean specification. Reference Type: Journal Article Record Number: 33 Author: M. R. Karam, T. J. Smedley and S. M. Dascalu Year: 2008 Title: Unit-level test adequacy criteria for visual dataflow languages and a testing methodology Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 1 Pages: 1-40 Short Title: Unit-level test adequacy criteria for visual dataflow languages and a testing methodology ISSN: 1049-331X DOI: 10.1145/1391984.1391985 Legal Note: 1391985 Abstract: Visual dataflow languages (VDFLs), which include commercial and research systems, have had a substantial impact on end-user programming. Like any other programming languages, whether visual or textual, VDFLs often contain faults. A desire to provide programmers of these languages with some of the benefits of traditional testing methodologies has been the driving force behind our effort in this work. In this article we introduce, in the context of prograph, a testing methodology for VDFLs based on structural test adequacy criteria and coverage. This article also reports on the results of two empirical studies. The first study was conducted to obtain meaningful information about, in particular, the effectiveness of our all-Dus criteria in detecting a reasonable percentage of faults in VDFLs. The second study was conducted to evaluate, under the

same criterion, the effectiveness of our methodology in assisting users to visually localize faults by reducing their search space. Both studies were conducted using a testing system that we have implemented in Prograph's IDE. Reference Type: Journal Article Record Number: 135 Author: I. Keidar, R. I. Khazan, N. Lynch and A. Shvartsman Year: 2002 Title: An inheritance-based technique for building simulation proofs incrementally Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 1 Pages: 63-91 Short Title: An inheritance-based technique for building simulation proofs incrementally ISSN: 1049-331X DOI: 10.1145/504087.504090 Legal Note: 504090 Abstract: This paper presents a formal technique for incremental construction of system specifications, algorithm descriptions, and simulation proofs showing that algorithms meet their specifications. The technique for building specifications and algorithms incrementally allows a child specification or algorithm to inherit from its parent by two forms of incremental modification: (a) signature extension, where new actions are added to the parent, and (b) specialization (subtyping), where the child's behavior is a specialization (restriction) of the parent's behavior. The combination of signature extension and specialization provides a powerful and expressive incremental modification mechanism for introducing new types of behavior without overriding behavior of the parent; this mechanism corresponds to the subclassing for extension form of inheritance. In the case when incremental modifications are applied to both a parent specification S and a parent algorithm A, the technique allows a simulation proof showing that the child algorithm A' implements the child specification S' to be constructed incrementally by extending a simulation proof that algorithm A implements specification S. The new proof involves reasoning about the modifications only, without repeating the reasoning done in the original simulation proof. The paper presents the technique mathematically, in terms of automata. The technique has been used to model and verify a complex middleware system; the methodology and results of that experiment are summarized in this paper. Reference Type: Journal Article Record Number: 89 Author: P. Klint, R. Lämmel and C. Verhoef Year: 2005 Title: Toward an engineering discipline for grammarware Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14

Issue: 3 Pages: 331-380 Short Title: Toward an engineering discipline for grammarware ISSN: 1049-331X DOI: 10.1145/1072997.1073000 Legal Note: 1073000 Abstract: Grammarware comprises grammars and all grammar-dependent software. The term grammar is meant here in the sense of all established grammar formalisms and grammar notations including context-free grammars, class dictionaries, and XML schemas as well as some forms of tree and graph grammars. The term grammar-dependent software refers to all software that involves grammar knowledge in an essential manner. Archetypal examples of grammar-dependent software are parsers, program converters, and XML document processors. Despite the pervasive role of grammars in software systems, the engineering aspects of grammarware are insufficiently understood. We lay out an agenda that is meant to promote research on increasing the productivity of grammarware development and on improving the quality of grammarware. To this end, we identify the problems with the current grammarware practices, the barriers that currently hamper research, and the promises of an engineering discipline for grammarware, its principles, and the research challenges that have to be addressed. Reference Type: Journal Article Record Number: 1 Author: A. J. Ko and B. A. Myers Year: 2010 Title: Extracting and answering why and why not questions about Java program output Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 2 Pages: 1-36 Short Title: Extracting and answering why and why not questions about Java program output ISSN: 1049-331X DOI: 10.1145/1824760.1824761 Legal Note: 1824761 Abstract: When software developers want to understand the reason for a program's behavior, they must translate their questions about the behavior into a series of questions about code, speculating about the causes in the process. The Whyline is a new kind of debugging tool that avoids such speculation by instead enabling developers to select a question about program output from a set of why did and why didn't questions extracted from the program's code and execution. The tool then finds one or more possible explanations for the output in question. These explanations are derived using static and dynamic slicing, precise call graphs, reachability analyses, and new algorithms for determining potential sources of values. Evaluations of the tool on two debugging tasks showed that developers with the Whyline were three times more

successful and twice as fast at debugging, compared to developers with traditional breakpoint debuggers. The tool has the potential to simplify debugging and program understanding in many software development contexts. Notes: Software Construction Tools > Debuggers Reference Type: Journal Article Record Number: 101 Author: S. Kramer and H. Kaindl Year: 2004 Title: Coupling and cohesion metrics for knowledge-based systems using frames and rules Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 3 Pages: 332-358 Short Title: Coupling and cohesion metrics for knowledge-based systems using frames and rules ISSN: 1049-331X DOI: 10.1145/1027092.1027094 Legal Note: 1027094 Abstract: Software systems and in particular also knowledge-based systems (KBS) become increasingly large and complex. In response to this challenge, software engineering has a long tradition of advocating modularity. This has also heavily influenced object-oriented development. For measuring certain important aspects of modularity, coupling and cohesion metrics have been developed. Metrics have also attracted considerable attention for object-oriented development. For KBS development, however, no such metrics are available yet. This article presents the core of the first metrics suite for KBS development, its coupling and cohesion metrics. These metrics measure modularity in terms of the relations induced between slots of frames through their common references in rules. We show the soundness of these metrics according to theory and report on their usefulness in practice. As a consequence, we propose using our metrics in order to improve KBS development, and developing other important metrics and assessing their theoretical soundness along these lines. Reference Type: Journal Article Record Number: 63 Author: S. Krishnamurthi and K. Fisler Year: 2007 Title: Foundations of incremental aspect model-checking Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 2

Pages: 7 Short Title: Foundations of incremental aspect model-checking ISSN: 1049-331X DOI: 10.1145/1217295.1217296 Legal Note: 1217296 Abstract: Programs are increasingly organized around features, which are encapsulated using aspects and other linguistic mechanisms. Despite their growing popularity amongst developers, there is a dearth of techniques for computer-aided verification of programs that employ these mechanisms. We present the theoretical underpinnings for applying model checking to programs (expressed as state machines) written using these mechanisms. The analysis is incremental, examining only components that change rather than verifying the entire system every time one part of it changes. Our technique assumes that the set of pointcut designators is known statically, but the actual advice can vary. It handles both static and dynamic pointcut designators. We present the algorithm, prove it sound, and address several subtleties that arise, including cascading advice application and problems of circular reasoning. Reference Type: Journal Article Record Number: 87 Author: M. F. Lau and Y. T. Yu Year: 2005 Title: An extended fault class hierarchy for specification-based testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 3 Pages: 247-276 Short Title: An extended fault class hierarchy for specification-based testing ISSN: 1049-331X DOI: 10.1145/1072997.1072998 Legal Note: 1072998 Abstract: Kuhn, followed by Tsuchiya and Kikuno, have developed a hierarchy of relationships among several common types of faults (such as variable and expression faults) for specification-based testing by studying the corresponding fault detection conditions. Their analytical results can help explain the relative effectiveness of various fault-based testing techniques previously proposed in the literature. This article extends and complements their studies by analyzing the relationships between variable and literal faults, and among literal, operator, term, and expression faults. Our analysis is more comprehensive and produces a richer set of findings that interpret previous empirical results, can be applied to the design and evaluation of test methods, and inform the way that test cases should be prioritized for earlier detection of faults. Although this work originated from the detection of faults related to specifications, our results are equally applicable to program-based predicate testing that involves logic expressions.

Reference Type: Journal Article Record Number: 242 Author: Lee Naish, Hua Jie Lee and K. Ramamohanarao Year: 2011 Title: A model for spectra-based software diagnosis Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 3 Short Title: A model for spectra-based software diagnosis ISSN: 1049-331X DOI: 10.1145/2000791.2000795 Keywords: Debugging aids Fault localization Statistical debugging Abstract: This paper presents an improved approach to assist diagnosis of failures in software (fault localisation) by ranking program statements or blocks according to how likely they are to be buggy. We present a very simple single-bug program to model the problem. By examining different possible execution paths through this model program over a number of test cases, the effectiveness of different proposed spectral ranking methods can be evaluated in idealised conditions. The results are remarkably consistent to those arrived at empirically using the Siemens test suite and Space benchmarks. The model also helps identify groups of metrics which are equivalent for ranking. Due to the simplicity of the model, an optimal ranking method can be devised. This new method out-performs previously proposed methods for the model program, the Siemens test suite and Space. It also helps provide insight into other ranking methods. Reference Type: Journal Article Record Number: 128 Author: D. Liang and M. J. Harrold Year: 2002 Title: Equivalence analysis and its application in improving the efficiency of program slicing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 3 Pages: 347-383 Short Title: Equivalence analysis and its application in improving the efficiency of program slicing ISSN: 1049-331X DOI: 10.1145/567793.567796 Legal Note: 567796 Abstract: Existing methods for handling pointer variables during dataflow analyses can make such analyses inefficient in both time and space because the data-flow analyses must store and propagate large sets of data facts that are introduced by dereferences of

pointer variables. This article presents equivalence analysis, a general technique to improve the efficiency of data-flow analyses in the presence of pointer variables. The technique identifies equivalence relations among the memory locations accessed by a procedure, and ensures that two equivalent memory locations share the same set of data facts in a procedure and in the procedures that are called by that procedure. Thus, a data-flow analysis needs to compute the data-flow information for only a representative memory location in an equivalence class. The data-flow information for other memory locations in the equivalence class can be derived from that of the representative memory location. The article also shows the extension to an interprocedural slicing algorithm that uses equivalence analysis to improve the efficiency of the algorithm. Our empirical studies suggest that equivalence analysis may effectively improve the efficiency of many data-flow analyses. Reference Type: Journal Article Record Number: 34 Author: P. Louridas, D. Spinellis and V. Vlachos Year: 2008 Title: Power laws in software Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 1 Pages: 1-26 Short Title: Power laws in software ISSN: 1049-331X DOI: 10.1145/1391984.1391986 Legal Note: 1391986 Abstract: A single statistical framework, comprising power law distributions and scale-free networks, seems to fit a wide variety of phenomena. There is evidence that power laws appear in software at the class and function level. We show that distributions with long, fat tails in software are much more pervasive than previously established, appearing at various levels of abstraction, in diverse systems and languages. The implications of this phenomenon cover various aspects of software engineering research and practice. Reference Type: Journal Article Record Number: 56 Author: A. D. Lucia, F. Fasano, R. Oliveto and G. Tortora Year: 2007 Title: Recovering traceability links in software artifact management systems using information retrieval methods Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 4 Pages: 13

Short Title: Recovering traceability links in software artifact management systems using information retrieval methods ISSN: 1049-331X DOI: 10.1145/1276933.1276934 Legal Note: 1276934 Abstract: The main drawback of existing software artifact management systems is the lack of automatic or semi-automatic traceability link generation and maintenance. We have improved an artifact management system with a traceability recovery tool based on Latent Semantic Indexing (LSI), an information retrieval technique. We have assessed LSI to identify strengths and limitations of using information retrieval techniques for traceability recovery and devised the need for an incremental approach. The method and the tool have been evaluated during the development of seventeen software projects involving about 150 students. We observed that although tools based on information retrieval provide a useful support for the identification of traceability links during software development, they are still far from supporting a complete semi-automatic recovery of all links. The results of our experience have also shown that such tools can help to identify quality problems in the textual description of traced artifacts. Reference Type: Journal Article Record Number: 246 Author: M. Diep, Matthew B. Dwyer and S. Elbaum Year: 2011 Title: Lattice-Based Sampling for Path Property Monitoring Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 1 Short Title: Lattice-Based Sampling for Path Property Monitoring ISSN: 1049-331X DOI: 10.1145/2063239.2063244 Keywords: Deployed Reliability Runtime monitoring Sequencing and path properties tracing verification Abstract: Runtime monitoring can provide important insights about a program's behavior and, for simple properties, it can be done efficiently. Monitoring properties describing sequences of program states and events, however, can result in significant runtime overhead. This is particularly critical when monitoring programs deployed at user sites that have low tolerance for overhead. In this paper we present a novel approach to reducing the cost of runtime monitoring of path properties. A set of original properties are composed to form a single integrated property that is then systematically decomposed into a set of properties that encode necessary conditions for property violations. The resulting set of properties forms a lattice whose structure is exploited to select a sample of properties that can lower monitoring cost, while preserving violation detection power relative to the original properties. The lattice is then complemented with a weighting scheme that assigns each property a different priority that can be adjusted

continuously to better drive the property sampling process. Our evaluation using the Hibernate API reveals that our approach produces a rich, structured set of properties that enables control of monitoring overhead, while detecting more violations more quickly than alternative techniques. Reference Type: Journal Article Record Number: 320 Author: Macneil Shonle, William G. Griswold and S. Lerner Year: 2012 Title: A framework for the checking and refactoring of crosscutting concepts Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 3 Pages: 1-47 Short Title: A framework for the checking and refactoring of crosscutting concepts ISSN: 1049-331X DOI: 10.1145/2211616.2211618 Keywords: Design Design patterns Domain-specific architectures Languages Patterns Refactoring Abstract: Programmers employ crosscutting concepts, such as design patterns and other programming idioms, when their design ideas cannot be efficiently or effectively modularized in the underlying programming language. As a result, implementations of these crosscutting concepts can be hard to change even when the code is well structured. In this article, we describe Arcum, a system that supports the modular maintenance of crosscutting concepts. Arcum can be used to both check essential constraints of crosscutting concepts and to substitute crosscutting concept implementations with alternative implementations. Arcum is complementary to existing refactoring systems that focus on meaning-preserving program transformations at the programming-language-semantics level, because Arcum focuses on transformations at the conceptual level. We present the underpinnings of the Arcum approach and show how Arcum can be used to address several classical software engineering problems. Reference Type: Journal Article Record Number: 23 Author: M. Mamei and F. Zambonelli Year: 2009

Title: Programming pervasive and mobile computing applications: The TOTA approach Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 4 Pages: 1-56 Short Title: Programming pervasive and mobile computing applications: The TOTA approach ISSN: 1049-331X DOI: 10.1145/1538942.1538945 Legal Note: 1538945 Abstract: Pervasive and mobile computing call for suitable middleware and programming models to support the activities of complex software systems in dynamic network environments. In this article we present TOTA (Tuples On The Air), a novel middleware and programming approach for supporting adaptive context-aware activities in pervasive and mobile computing scenarios. The key idea in TOTA is to rely on spatially distributed tuples, adaptively propagated across a network on the basis of application-specific rules, for both representing contextual information and supporting uncoupled interactions between application components. TOTA promotes a simple way of programming that facilitates access to distributed information, navigation in complex environments, and the achievement of complex coordination tasks in a fully distributed and adaptive way, mostly freeing programmers and system managers from the need to take care of low-level issues related to network dynamics. This article includes both application examples to clarify concepts and performance figures to show the feasibility of the approach Reference Type: Journal Article Record Number: 78 Author: I. Marc Fisher, G. Rothermel, D. Brown, M. Cao, C. Cook and M. Burnett Year: 2006 Title: Integrating automated test generation into the WYSIWYT spreadsheet testing methodology Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 2 Pages: 150-194 Short Title: Integrating automated test generation into the WYSIWYT spreadsheet testing methodology ISSN: 1049-331X DOI: 10.1145/1131421.1131423 Legal Note: 1131423 Abstract: Spreadsheet languages, which include commercial spreadsheets and various research systems, have had a substantial impact on end-user computing. Research shows, however, that spreadsheets often contain faults. Thus, in previous work we presented a methodology that helps spreadsheet users test their spreadsheet formulas. Our empirical studies have shown that end users can use this methodology to test

spreadsheets more adequately and efficiently; however, the process of generating test cases can still present a significant impediment. To address this problem, we have been investigating how to incorporate automated test case generation into our testing methodology in ways that support incremental testing and provide immediate visual feedback. We have used two techniques for generating test cases, one involving random selection and one involving a goal-oriented approach. We describe these techniques and their integration into our testing environment, and report results of an experiment examining their effectiveness and efficiency. Reference Type: Journal Article Record Number: 54 Author: M. Marin, A. V. Deursen and L. Moonen Year: 2007 Title: Identifying Crosscutting Concerns Using Fan-In Analysis Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 1 Pages: 1-37 Short Title: Identifying Crosscutting Concerns Using Fan-In Analysis ISSN: 1049-331X DOI: 10.1145/1314493.1314496 Legal Note: 1314496 Abstract: Aspect mining is a reverse engineering process that aims at finding crosscutting concerns in existing systems. This article proposes an aspect mining approach based on determining methods that are called from many different places, and hence have a high fan-in, which can be seen as a symptom of crosscutting functionality. The approach is semiautomatic, and consists of three steps: metric calculation, method filtering, and call site analysis. Carrying out these steps is an interactive process supported by an Eclipse plug-in called FINT. Fan-in analysis has been applied to three open source Java systems, totaling around 200,000 lines of code. The most interesting concerns identified are discussed in detail, which includes several concerns not previously discussed in the aspect-oriented literature. The results show that a significant number of crosscutting concerns can be recognized using fan-in analysis, and each of the three steps can be supported by tools. Reference Type: Journal Article Record Number: 250 Author: Martin Erwig and E. Walkingshaw Year: 2011 Title: The Choice Calculus: A Representation for Software Variation Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 1 Short Title: The Choice Calculus: A Representation for Software Variation

ISSN: 1049-331X DOI: 10.1145/2063239.2063245 Keywords: Maintenance and enhancement Languages representation Software configuration management Theory variation Version control Abstract: Many areas of computer science are concerned with some form of variation in software---from managing changes to software over time to supporting families of related artifacts. We present the choice calculus, a fundamental representation for software variation that can serve as a common language of discourse for variation research, filling a role similar to the lambda calculus in programming language research. We also develop an associated theory of software variation, including sound transformations of variation artifacts, the definition of strategic normal forms, and a design theory for variation structures, which will support the development of better algorithms and tools. Reference Type: Journal Article Record Number: 16 Author: W. Masri and A. Podgurski Year: 2009 Title: Measuring the strength of information flows in programs Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 2 Pages: 1-33 Short Title: Measuring the strength of information flows in programs ISSN: 1049-331X DOI: 10.1145/1571629.1571631 Legal Note: 1571631 Abstract: Dynamic information flow analysis (DIFA) was devised to enable the flow of information among variables in an executing program to be monitored and possibly regulated. It is related to techniques like dynamic slicing and dynamic impact analysis. To better understand the basis for DIFA, we conducted an empirical study in which we measured the strength of information flows identified by DIFA, using information theoretic and correlation-based methods. The results indicate that in most cases the occurrence of a chain of dynamic program dependences between two variables does not indicate a measurable information flow between them. We also explored the relationship between the strength of an information flow and the length of the corresponding dependence chain, and we obtained results indicating that no consistent relationship exists between the length of an information flow and its strength. Finally, we investigated whether data dependence and control dependence makes equal or unequal contributions to flow strength. The results indicate that flows due to data dependences alone are stronger, on average, than flows due to control dependences alone. We present the details of our study and consider the implications of the results

for applications of DIFA and related techniques. Reference Type: Journal Article Record Number: 27 Author: P. McMinn, D. Binkley and M. Harman Year: 2009 Title: Empirical evaluation of a nesting testability transformation for evolutionary testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 3 Pages: 1-27 Short Title: Empirical evaluation of a nesting testability transformation for evolutionary testing ISSN: 1049-331X DOI: 10.1145/1525880.1525884 Legal Note: 1525884 Abstract: Evolutionary testing is an approach to automating test data generation that uses an evolutionary algorithm to search a test object's input domain for test data. Nested predicates can cause problems for evolutionary testing, because information needed for guiding the search only becomes available as each nested conditional is satisfied. This means that the search process can overfit to early information, making it harder, and sometimes near impossible, to satisfy constraints that only become apparent later in the search. The article presents a testability transformation that allows the evaluation of all nested conditionals at once. Two empirical studies are presented. The first study shows that the form of nesting handled is prevalent in practice. The second study shows how the approach improves evolutionary test data generation. Reference Type: Journal Article Record Number: 133 Author: N. Medvidovic, D. S. Rosenblum, D. F. Redmiles and J. E. Robbins Year: 2002 Title: Modeling software architectures in the Unified Modeling Language Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 1 Pages: 2-57 Short Title: Modeling software architectures in the Unified Modeling Language ISSN: 1049-331X DOI: 10.1145/504087.504088 Legal Note: 504088 Abstract: The Unified Modeling Language (UML) is a family of design notations that is rapidly becoming a de facto standard software design language. UML provides a variety of useful capabilities to the software designer, including multiple, interrelated design views, a semiformal semantics expressed as a UML meta model, and an associated

language for expressing formal logic constraints on design elements. The primary goal of this work is an assessment of UML's expressive power for modeling software architectures in the manner in which a number of existing software architecture description languages (ADLs) model architectures. This paper presents two strategies for supporting architectural concerns within UML. One strategy involves using UML "as is," while the other incorporates useful features of existing ADLs as UML extensions. We discuss the applicability, strengths, and weaknesses of the two strategies. The strategies are applied on three ADLs that, as a whole, represent a broad cross-section of present-day ADL capabilities. One conclusion of our work is that UML currently lacks support for capturing and exploiting certain architectural concerns whose importance has been demonstrated through the research and practice of software architectures. In particular, UML lacks direct support for modeling and exploiting architectural styles, explicit software connectors, and local and global architectural constraints. Reference Type: Journal Article Record Number: 29 Author: A. M. Memon Year: 2008 Title: Automatically repairing event sequence-based GUI test suites for regression testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 2 Pages: 1-36 Short Title: Automatically repairing event sequence-based GUI test suites for regression testing ISSN: 1049-331X DOI: 10.1145/1416563.1416564 Legal Note: 1416564 Abstract: Although graphical user interfaces (GUIs) constitute a large part of the software being developed today and are typically created using rapid prototyping, there are no effective regression testing techniques for GUIs. The needs of GUI regression testing differ from those of traditional software. When the structure of a GUI is modified, test cases from the original GUI's suite are either reusable or unusable on the modified GUI. Because GUI test case generation is expensive, our goal is to make the unusable test cases usable, thereby helping to retain the suite's event coverage. The idea of reusing these unusable (obsolete) test cases has not been explored before. This article shows that a large number of test cases become unusable for GUIs. It presents a new GUI regression testing technique that first automatically determines the usable and unusable test cases from a test suite after a GUI modification, then determines the unusable test cases that can be repaired so that they can execute on the modified GUI, and finally uses repairing transformations to repair the test cases. This regression testing technique along with four repairing transformations has been implemented. An empirical study for four open-source applications demonstrates that (1) this approach is effective in that many of the test cases can be repaired, and is practical in terms of its

time performance, (2) certain types of test cases are more prone to becoming unusable, and (3) certain types of dominator events, when modified, make a large number of test cases unusable. Notes: Software Testing Tools > Test Generators Reference Type: Journal Article Record Number: 53 Author: T. M. Meyers and D. Binkley Year: 2007 Title: An empirical study of slice-based cohesion and coupling metrics Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 1 Pages: 1-27 Short Title: An empirical study of slice-based cohesion and coupling metrics ISSN: 1049-331X DOI: 10.1145/1314493.1314495 Legal Note: 1314495 Abstract: Software reengineering is a costly endeavor, due in part to the ambiguity of where to focus reengineering effort. Coupling and Cohesion metrics, particularly quantitative cohesion metrics, have the potential to aid in this identification and to measure progress. The most extensive work on such metrics is with slice-based cohesion metrics. While their use of semantic dependence information should make them an excellent choice for cohesion measurement, their widespread use has been impeded in part by a lack of empirical study. Recent advances in software tools make, for the first time, a large-scale empirical study of slice-based cohesion and coupling metrics possible. Four results from such a study are presented. First, head-to-head qualitative and quantitative comparisons of the metrics identify which metrics provide similar views of a program and which provide unique views of a program. This study includes statistical analysis showing that slice-based metrics are not proxies for simple size-based metrics such as lines of code. Second, two longitudinal studies show that slice-based metrics quantify the deterioration of a program as it ages. This serves to validate the metrics: the metrics quantify the degradation that exists during development; turning this around, the metrics can be used to measure the progress of a reengineering effort. Third, baseline values for slice-based metrics are provided. These values act as targets for reengineering efforts with modules having values outside the expected range being the most in need of attention. Finally, slice-based coupling is correlated and compared with slice-based cohesion. Reference Type: Journal Article Record Number: 95 Author: A. Milanova, A. Rountev and B. G. Ryder Year: 2005

Title: Parameterized object sensitivity for points-to analysis for Java Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 1 Pages: 1-41 Short Title: Parameterized object sensitivity for points-to analysis for Java ISSN: 1049-331X DOI: 10.1145/1044834.1044835 Legal Note: 1044835 Abstract: The goal of points-to analysis for Java is to determine the set of objects pointed to by a reference variable or a reference object field. We present object sensitivity, a new form of context sensitivity for flow-insensitive points-to analysis for Java. The key idea of our approach is to analyze a method separately for each of the object names that represent run-time objects on which this method may be invoked. To ensure flexibility and practicality, we propose a parameterization framework that allows analysis designers to control the tradeoffs between cost and precision in the object-sensitive analysis. Side-effect analysis determines the memory locations that may be modified by the execution of a program statement. Def-use analysis identifies pairs of statements that set the value of a memory location and subsequently use that value. The information computed by such analyses has a wide variety of uses in compilers and software tools. This work proposes new versions of these analyses that are based on object-sensitive points-to analysis. We have implemented two instantiations of our parameterized object-sensitive points-to analysis. On a set of 23 Java programs, our experiments show that these analyses have comparable cost to a context-insensitive points-to analysis for Java which is based on Andersen's analysis for C. Our results also show that object sensitivity significantly improves the precision of side-effect analysis and call graph construction, compared to (1) context-insensitive analysis, and (2) context-sensitive points-to analysis that models context using the invoking call site. These experiments demonstrate that object-sensitive analyses can achieve substantial precision improvement, while at the same time remaining efficient and practical. Reference Type: Journal Article Record Number: 109 Author: T. Miller and P. Strooper Year: 2003 Title: A framework and tool support for the systematic testing of model-based specifications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 4 Pages: 409-439 Short Title: A framework and tool support for the systematic testing of model-based specifications ISSN: 1049-331X DOI: 10.1145/990010.990012

Legal Note: 990012 Abstract: Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications. Reference Type: Journal Article Record Number: 127 Author: A. Mockus, R. T. Fielding and J. D. Herbsleb Year: 2002 Title: Two case studies of open source software development: Apache and Mozilla Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 3 Pages: 309-346 Short Title: Two case studies of open source software development: Apache and Mozilla ISSN: 1049-331X DOI: 10.1145/567793.567795 Legal Note: 567795 Abstract: According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. We conclude with thoughts about the prospects for high-performance

commercial/open source process hybrids. Reference Type: Journal Article Record Number: 41 Author: P. Mohagheghi and R. Conradi Year: 2008 Title: An empirical investigation of software reuse benefits in a large telecom product Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 3 Pages: 1-31 Short Title: An empirical investigation of software reuse benefits in a large telecom product ISSN: 1049-331X DOI: 10.1145/1363102.1363104 Legal Note: 1363104 Abstract: Background. This article describes a case study on the benefits of software reuse in a large telecom product. The reused components were developed in-house and shared in a product-family approach. Methods. Quantitative data mined from company repositories are combined with other quantitative data and qualitative observations. Results. We observed significantly lower fault density and less modified code between successive releases of the reused components. Reuse and standardization of software architecture and processes allowed easier transfer of development when organizational changes happened. Conclusions. The study adds to the evidence of quality benefits of large-scale reuse programs and explores organizational motivations and outcomes. Reference Type: Journal Article Record Number: 76 Author: A. L. Murphy, G. P. Picco and G.-C. Roman Year: 2006 Title: LIME: A coordination model and middleware supporting mobility of hosts and agents Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 3 Pages: 279-328 Short Title: LIME: A coordination model and middleware supporting mobility of hosts and agents ISSN: 1049-331X DOI: 10.1145/1151695.1151698 Legal Note: 1151698 Abstract: LIME (Linda in a mobile environment) is a model and middleware supporting the development of applications that exhibit the physical mobility of hosts, logical

mobility of agents, or both. LIME adopts a coordination perspective inspired by work on the Linda model. The context for computation, represented in Linda by a globally accessible persistent tuple space, is refined in LIME to transient sharing of the identically named tuple spaces carried by individual mobile units. Tuple spaces are also extended with a notion of location and programs are given the ability to react to specified states. The resulting model provides a minimalist set of abstractions that facilitates the rapid and dependable development of mobile applications. In this article we illustrate the model underlying LIME, provide a formal semantic characterization for the operations it makes available to the application developer, present its current design and implementation, and discuss lessons learned in developing applications that involve physical mobility. Reference Type: Journal Article Record Number: 119 Author: C. Nentwich, W. Emmerich, A. Finkelstein and E. Ellmer Year: 2003 Title: Flexible consistency checking Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 1 Pages: 28-63 Short Title: Flexible consistency checking ISSN: 1049-331X DOI: 10.1145/839268.839271 Legal Note: 839271 Abstract: The problem of managing the consistency of heterogeneous, distributed software engineering documents is central to the development of large and complex systems. We show how this problem can be addressed using xlinkit, a lightweight framework for consistency checking that leverages standard Internet technologies. xlinkit provides flexibility, strong diagnostics, and support for distribution and document heterogeneity. We use xlinkit in a comprehensive case study that demonstrates how design, implementation and deployment information of an Enterprise JavaBeans system can be checked for consistency, and rechecked incrementally when changes are made. Reference Type: Journal Article Record Number: 66 Author: D. Notkin Year: 2007 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 1 Pages: 1 Short Title: Editorial

ISSN: 1049-331X DOI: 10.1145/1189748.1189749 Legal Note: 1189749 Reference Type: Journal Article Record Number: 35 Author: O. Lhoták and L. Hendren Year: 2008 Title: Evaluating the benefits of context-sensitive points-to analysis using a BDD-based implementation Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 1 Pages: 1-53 Short Title: Evaluating the benefits of context-sensitive points-to analysis using a BDD-based implementation ISSN: 1049-331X DOI: 10.1145/1391984.1391987 Legal Note: 1391987 Abstract: We present Paddle, a framework of BDD-based context-sensitive points-to and call graph analyses for Java, as well as client analyses that use their results. Paddle supports several variations of context-sensitive analyses, including call site strings and object sensitivity, and context-sensitively specializes both pointer variables and the heap abstraction. We empirically evaluate the precision of these context-sensitive analyses on significant Java programs. We find that object-sensitive analyses are more precise than comparable variations of the other approaches, and that specializing the heap abstraction improves precision more than extending the length of context strings. Reference Type: Journal Article Record Number: 103 Author: A. Orso, S. Sinha and M. J. Harrold Year: 2004 Title: Classifying data dependences in the presence of pointers for program comprehension, testing, and debugging Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 2 Pages: 199-239 Short Title: Classifying data dependences in the presence of pointers for program comprehension, testing, and debugging ISSN: 1049-331X DOI: 10.1145/1018210.1018212 Legal Note: 1018212

Abstract: Understanding data dependences in programs is important for many software-engineering activities, such as program understanding, impact analysis, reverse engineering, and debugging. The presence of pointers can cause subtle and complex data dependences that can be difficult to understand. For example, in languages such as C, an assignment made through a pointer dereference can assign a value to one of several variables, none of which may appear syntactically in that statement. In the first part of this article, we describe two techniques for classifying data dependences in the presence of pointer dereferences. The first technique classifies data dependences based on definition type, use type, and path type. The second technique classifies data dependences based on span. We present empirical results to illustrate the distribution of data-dependence types and spans for a set of real C programs. In the second part of the article, we discuss two applications of the classification techniques. First, we investigate different ways in which the classification can be used to facilitate data-flow testing. We outline an approach that uses types and spans of data dependences to determine the appropriate verification technique for different data dependences; we present empirical results to illustrate the approach. Second, we present a new slicing approach that computes slices based on types of data dependences. Based on the new approach, we define an incremental slicing technique that computes a slice in multiple steps. We present empirical results to illustrate the sizes of incremental slices and the potential usefulness of incremental slicing for debugging. Reference Type: Journal Article Record Number: 83 Author: L. Osterweil, C. Ghezzi, J. Kramer and A. Wolf Year: 2005 Title: Editorial Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 4 Pages: 381-382 Short Title: Editorial ISSN: 1049-331X DOI: 10.1145/1101815.1101816 Legal Note: 1101816 Reference Type: Journal Article Record Number: 19 Author: C. Ouyang, M. Dumas, W. M. P. V. D. Aalst, A. H. M. T. Hofstede and J. Mendling Year: 2009 Title: From business process models to process-oriented software systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19

Issue: 1 Pages: 1-37 Short Title: From business process models to process-oriented software systems ISSN: 1049-331X DOI: 10.1145/1555392.1555395 Legal Note: 1555395 Abstract: Several methods for enterprise systems analysis rely on flow-oriented representations of business operations, otherwise known as business process models. The Business Process Modeling Notation (BPMN) is a standard for capturing such models. BPMN models facilitate communication between domain experts and analysts and provide input to software development projects. Meanwhile, there is an emergence of methods for enterprise software development that rely on detailed process definitions that are executed by process engines. These process definitions refine their counterpart BPMN models by introducing data manipulation, application binding, and other implementation details. The de facto standard for defining executable processes is the Business Process Execution Language (BPEL). Accordingly, a standards-based method for developing process-oriented systems is to start with BPMN models and to translate these models into BPEL definitions for subsequent refinement. However, instrumenting this method is challenging because BPMN models and BPEL definitions are structurally very different. Existing techniques for translating BPMN to BPEL only work for limited classes of BPMN models. This article proposes a translation technique that does not impose structural restrictions on the source BPMN model. At the same time, the technique emphasizes the generation of readable (block-structured) BPEL code. An empirical evaluation conducted over a large collection of process models shows that the resulting BPEL definitions are largely block-structured. Beyond its direct relevance in the context of BPMN and BPEL, the technique presented in this article addresses issues that arise when translating from graph-oriented to block-structure flow definition languages. Reference Type: Journal Article Record Number: 60 Author: R. F. Paige, P. J. Brooke and J. S. Ostroff Year: 2007 Title: Metamodel-based model conformance and multiview consistency checking Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 3 Pages: 11 Short Title: Metamodel-based model conformance and multiview consistency checking ISSN: 1049-331X DOI: 10.1145/1243987.1243989 Legal Note: 1243989 Abstract: Model-driven development, using languages such as UML and BON, often makes use of multiple diagrams (e.g., class and sequence diagrams) when modeling systems. These diagrams, presenting different views of a system of interest, may be

inconsistent. A metamodel provides a unifying framework in which to ensure and check consistency, while at the same time providing the means to distinguish between valid and invalid models, that is, conformance. Two formal specifications of the metamodel for an object-oriented modeling language are presented, and it is shown how to use these specifications for model conformance and multiview consistency checking. Comparisons are made in terms of completeness and the level of automation each provides for checking multiview consistency and model conformance. The lessons learned from applying formal techniques to the problems of metamodeling, model conformance, and multiview consistency checking are summarized. Reference Type: Journal Article Record Number: 323 Author: Paul Jennings, Arka P. Ghosh and S. Basu Year: 2012 Title: A two-phase approximation for model checking probabilistic unbounded until properties of probabilistic systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 3 Pages: 1-35 Short Title: A two-phase approximation for model checking probabilistic unbounded until properties of probabilistic systems ISSN: 1049-331X DOI: 10.1145/2211616.2211621 Keywords: csl ctmc dtmc Formal methods Model checking Statistical methods Temporal logic verification Abstract: We have developed a new approximate probabilistic model-checking method for untimed properties in probabilistic systems, expressed in a probabilistic temporal logic (PCTL, CSL). This method, in contrast to the existing ones, does not require the untimed until properties to be bounded a priori, where the bound refers to the number of discrete steps in the system required to verify the until property. The method consists of two phases. In the first phase, a suitable system- and property-dependent bound k0 is obtained automatically. In the second phase, the probability of satisfying the k0-bounded until property is computed as the estimate of the probability of satisfying the original unbounded until property. Both phases require only verification of bounded until properties, which can be effectively performed by simulation-based methods. We prove the correctness of the proposed two-phase method and present its optimized implementation in the widely used PRISM model-checking engine. We compare this implementation with sampling-based model-checking techniques implemented in two tools: PRISM and MRMC. We show that for several models these existing tools fail to

compute the result, while the two-phase method successfully computes the result efficiently with respect to time and space. Reference Type: Journal Article Record Number: 9 Author: J. Payton, C. Julien, G.-C. Roman and V. Rajamani Year: 2010 Title: Semantic self-assessment of query results in dynamic environments Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 4 Pages: 1-33 Short Title: Semantic self-assessment of query results in dynamic environments ISSN: 1049-331X DOI: 10.1145/1734229.1734231 Legal Note: 1734231 Abstract: Queries are convenient abstractions for the discovery of information and services, as they offer content-based information access. In distributed settings, query semantics are well-defined, for example, queries are often designed to satisfy ACID transactional properties. When query processing is introduced in a dynamic network setting, achieving transactional semantics becomes complex due to the open and unpredictable environment. In this article, we propose a query processing model for mobile ad hoc and sensor networks that is suitable for expressing a wide range of query semantics; the semantics differ in the degree of consistency with which query results reflect the state of the environment during query execution. We introduce several distinct notions of consistency and formally express them in our model. A practical and significant contribution of this article is a protocol for query processing that automatically assesses and adaptively provides an achievable degree of consistency given the operational environment throughout its execution. The protocol attaches an assessment of the achieved guarantee to returned query results, allowing precise reasoning about a query with a range of possible semantics. We evaluate the performance of this protocol and demonstrate the benefits accrued to applications through examples drawn from an industrial application. Notes: Software Design Tools Reference Type: Journal Article Record Number: 140 Author: D. E. Perry, H. P. Siy and L. G. Votta Year: 2001 Title: Parallel changes in large-scale software development: an observational case study Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 3

Pages: 308-337 Short Title: Parallel changes in large-scale software development: an observational case study ISSN: 1049-331X DOI: 10.1145/383876.383878 Legal Note: 383878 Abstract: An essential characteristic of large-scale software development is parallel development by teams of developers. How this parallel development is structured and supported has a profound effect on both the quality and timeliness of the product. We conduct an observational case study in which we collect and analyze the change and configuration management history of a legacy system to delineate the boundaries of, and to understand the nature of, the problems encountered in parallel development. The results of our studies are (1) that the degree of parallelism is very high, higher than considered by tool builders; (2) there are multiple levels of parallelism, and the data for some important aspects are uniform and consistent for all levels; (3) the tails of the distributions are long, indicating the tail, rather than the mean, must receive serious attention in providing solutions for these problems; and (4) there is a significant correlation between the degree of parallel work on a given component and the number of quality problems it has. Thus, the results of this study are important both for tool builders and for process and project engineers. Reference Type: Journal Article Record Number: 141 Author: G. P. Picco, G.-C. Roman and P. J. McCann Year: 2001 Title: Reasoning about code mobility with mobile UNITY Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 3 Pages: 338-395 Short Title: Reasoning about code mobility with mobile UNITY ISSN: 1049-331X DOI: 10.1145/383876.383879 Legal Note: 383879 Abstract: Advancements in network technology have led to the emergence of new computing paradigms that challenge established programming practices by employing weak forms of consistency and dynamic forms of binding. Code mobility, for instance, allows for invocation-time binding between a code fragment and the location where it executes. Similarly, mobile computing allows hosts (and the software they execute) to alter their physical location. Despite apparent similarities, the two paradigms are distinct in their treatment of location and movement. This paper seeks to uncover a common foundation for the two paradigms by exploring the manner in which stereotypical forms of code mobility can be expressed in a programming notation developed for mobile computing. Several solutions to a distributed simulation problem are used to illustrate the modeling strategy and the ability to construct assertional-style proofs for programs

that employ code mobility. Reference Type: Journal Article Record Number: 8 Author: J. Ponge, B. Benatallah, F. Casati and F. Toumani Year: 2010 Title: Analysis and applications of timed service protocols Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 4 Pages: 1-38 Short Title: Analysis and applications of timed service protocols ISSN: 1049-331X DOI: 10.1145/1734229.1734230 Legal Note: 1734230 Abstract: Web services are increasingly gaining acceptance as a framework for facilitating application-to-application interactions within and across enterprises. It is commonly accepted that a service description should include not only the interface, but also the business protocol supported by the service. The present work focuses on the formalization of an important category of protocols that includes time-related constraints (called timed protocols), and the impact of time on compatibility and replaceability analysis. We formalized the following timing constraints: C-Invoke constraints define time windows within which a service operation can be invoked while M-Invoke constraints define expiration deadlines. We extended techniques for compatibility and replaceability analysis between timed protocols by using a semantic-preserving mapping between timed protocols and timed automata, leading to the identification of a novel class of timed automata, called protocol timed automata (PTA). PTA exhibit a particular kind of silent transition that strictly increase the expressiveness of the model, yet they are closed under complementation, making every type of compatibility or replaceability analysis decidable. Finally, we implemented our approach in the context of a larger project called ServiceMosaic, a model-driven framework for Web service lifecycle management. Notes: Software Design Tools Reference Type: Journal Article Record Number: 272 Author: R. A. Gandhi and S. W. Lee Year: 2011 Title: Discovering Multidimensional Correlations among Regulatory Requirements to Understand Risk Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 4 Short Title: Discovering Multidimensional Correlations among Regulatory

Requirements to Understand Risk ISSN: 1049-331X DOI: 10.1145/2000799.2000802 Keywords: Certification and accreditation Knowledge engineering Abstract: Security breaches most often occur due to a cascading effect of failure among security constraints that collectively contribute to overall secure system behavior in a socio-technical environment. Therefore, during security certification activities, analysts must systematically take into account the nexus of causal chains that exist among security constraints imposed by regulatory requirements. Numerous regulatory requirements specified in natural language documents or listed in spreadsheets/databases do not facilitate such analysis. The work presented in this article outlines a stepwise methodology to discover and understand the multidimensional correlations among regulatory requirements for the purpose of understanding the potential for risk due to noncompliance during system operation. Our lattice algebraic computational model helps estimate the collective adequacy of diverse security constraints imposed by regulatory requirements and their interdependencies with each other in a bounded scenario of investigation. Abstractions and visual metaphors combine human intuition with metrics available from the methodology to improve the understanding of risk based on the level of compliance with regulatory requirements. In addition, a problem domain ontology that classifies and categorizes regulatory requirements from multiple dimensions of a socio-technical environment promotes a common understanding among stakeholders during certification and accreditation activities. A preliminary empirical investigation of our theoretical propositions has been conducted in the domain of The United States Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP). This work contributes a novel approach to understand the level of compliance with regulatory requirements in terms of the potential for risk during system operation. Reference Type: Journal Article Record Number: 20 Author: H. Rajan and K. J. Sullivan Year: 2009 Title: Unifying aspect- and object-oriented design Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 1 Pages: 1-41 Short Title: Unifying aspect- and object-oriented design ISSN: 1049-331X DOI: 10.1145/1555392.1555396 Legal Note: 1555396 Abstract: The contribution of this work is the design and evaluation of a programming language model that unifies aspects and classes as they appear in AspectJ-like

languages. We show that our model preserves the capabilities of AspectJ-like languages, while improving the conceptual integrity of the language model and the compositionality of modules. The improvement in conceptual integrity is manifested by the reduction of specialized constructs in favor of uniform orthogonal constructs. The enhancement in compositionality is demonstrated by better modularization of integration and higher-order crosscutting concerns. Reference Type: Journal Article Record Number: 37 Author: M. P. Robillard Year: 2008 Title: Topology analysis of software dependencies Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 4 Pages: 1-36 Short Title: Topology analysis of software dependencies ISSN: 1049-331X DOI: 10.1145/13487689.13487691 Legal Note: 13487691 Abstract: Before performing a modification task, a developer usually has to investigate the source code of a system to understand how to carry out the task. Discovering the code relevant to a change task is costly because it is a human activity whose success depends on a large number of unpredictable factors, such as intuition and luck. Although studies have shown that effective developers tend to explore a program by following structural dependencies, no methodology is available to guide their navigation through the thousands of dependency paths found in a nontrivial program. We describe a technique to automatically propose and rank program elements that are potentially interesting to a developer investigating source code. Our technique is based on an analysis of the topology of structural dependencies in a program. It takes as input a set of program elements of interest to a developer and produces a fuzzy set describing other elements of potential interest. Empirical evaluation of our technique indicates that it can help developers quickly select program elements worthy of investigation while avoiding less interesting ones. Reference Type: Journal Article Record Number: 115 Author: M. P. Robillard and G. C. Murphy Year: 2003 Title: Static analysis to support the evolution of exception structure in object-oriented systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 2

Pages: 191-221 Short Title: Static analysis to support the evolution of exception structure in object-oriented systems ISSN: 1049-331X DOI: 10.1145/941566.941569 Legal Note: 941569 Abstract: Exception-handling mechanisms in modern programming languages provide a means to help software developers build robust applications by separating the normal control flow of a program from the control flow of the program under exceptional situations. Separating the exceptional structure from the code associated with normal operations bears some consequences. One consequence is that developers wishing to improve the robustness of a program must figure out which exceptions, if any, can flow to a point in the program. Unfortunately, in large programs, this exceptional control flow can be difficult, if not impossible, to determine. In this article, we present a model that encapsulates the minimal concepts necessary for a developer to determine exception flow for object-oriented languages that define exceptions as objects. Using these concepts, we describe why exception-flow information is needed to build and evolve robust programs. We then describe Jex, a static analysis tool we have developed to provide exception-flow information for Java systems based on this model. The Jex tool provides a view of the actual exception types that might arise at different program points and of the handlers that are present. Use of this tool on a collection of Java library and application source code demonstrates that the approach can be helpful to support both local and global improvements to the exception-handling structure of a system. Reference Type: Journal Article Record Number: 68 Author: M. P. Robillard and G. C. Murphy Year: 2007 Title: Representing concerns in source code Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 1 Pages: 3 Short Title: Representing concerns in source code ISSN: 1049-331X DOI: 10.1145/1189748.1189751 Legal Note: 1189751 Abstract: A software modification task often addresses several concerns. A concern is anything a stakeholder may want to consider as a conceptual unit, including features, nonfunctional requirements, and design idioms. In many cases, the source code implementing a concern is not encapsulated in a single programming language module, and is instead scattered and tangled throughout a system. Inadequate separation of concerns increases the difficulty of evolving software in a correct and cost-effective manner. To make it easier to modify concerns that are not well modularized, we propose an approach in which the implementation of concerns is documented in artifacts, called

concern graphs. Concern graphs are abstract models that describe which parts of the source code are relevant to different concerns. We present a formal model for concern graphs and the tool support we developed to enable software developers to create and use concern graphs during software evolution tasks. We report on five empirical studies, providing evidence that concern graphs support views and operations that facilitate the task of modifying the code implementing scattered concerns, are cost-effective to create and use, and robust enough to be used with different versions of a software system. Reference Type: Journal Article Record Number: 104 Author: R. Roshandel, A. van der Hoek, M. Mikic-Rakic and N. Medvidovic Year: 2004 Title: Mae---a system model and environment for managing architectural evolution Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 2 Pages: 240-276 Short Title: Mae---a system model and environment for managing architectural evolution ISSN: 1049-331X DOI: 10.1145/1018210.1018213 Legal Note: 1018213 Abstract: As with any other artifact produced as part of the software life cycle, software architectures evolve and this evolution must be managed. One approach to doing so would be to apply any of a host of existing configuration management systems, which have long been used successfully at the level of source code. Unfortunately, such an approach leads to many problems that prevent effective management of architectural evolution. To overcome these problems, we have developed an alternative approach centered on the use of an integrated architectural and configuration management system model. Because the system model combines architectural and configuration management concepts in a single representation, it has the distinct benefit that all architectural changes can be precisely captured and clearly related to each other---both at the fine-grained level of individual architectural elements and at the coarse-grained level of architectural configurations. To support the use of the system model, we have developed Mae, an architectural evolution environment through which users can specify architectures in a traditional manner, manage the evolution of the architectures using a check-out/check-in mechanism that tracks all changes, select a specific architectural configuration, and analyze the consistency of a selected configuration. We demonstrate the benefits of our approach by showing how the system model and its accompanying environment were used in the context of several representative projects. Reference Type: Journal Article Record Number: 147 Author: G. Rothermel, M. Burnett, L. Li, C. Dupuis and A. Sheretov

Year: 2001 Title: A methodology for testing spreadsheets Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 1 Pages: 110-147 Short Title: A methodology for testing spreadsheets ISSN: 1049-331X DOI: 10.1145/366378.366385 Legal Note: 366385 Abstract: Spreadsheet languages, which include commercial spreadsheets and various research systems, have had a substantial impact on end-user computing. Research shows, however, that spreadsheets often contain faults; thus, we would like to provide at least some of the benefits of formal testing methodologies to the creators of spreadsheets. This article presents a testing methodology that adapts data flow adequacy criteria and coverage monitoring to the task of testing spreadsheets. To accommodate the evaluation model used with spreadsheets, and the interactive process by which they are created, our methodology is incremental. To accommodate the users of spreadsheet languages, we provide an interface to our methodology that does not require an understanding of testing theory. We have implemented our testing methodology in the context of the Forms/3 visual spreadsheet language. We report on the methodology, its time and space costs, and the mapping from the testing strategy to the user interface. In an empirical study, we found that test suites created according to our methodology detected, on average, 81% of the faults in a set of faulty spreadsheets, significantly outperforming randomly generated test suites. Reference Type: Journal Article Record Number: 100 Author: G. Rothermel, S. Elbaum, A. G. Malishevsky, P. Kallakuri and X. Qiu Year: 2004 Title: On test suite composition and cost-effective regression testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 3 Pages: 277-331 Short Title: On test suite composition and cost-effective regression testing ISSN: 1049-331X DOI: 10.1145/1027092.1027093 Legal Note: 1027093 Abstract: Regression testing is an expensive testing process used to revalidate software as it evolves. Various methodologies for improving regression testing processes have been explored, but the cost-effectiveness of these methodologies has been shown to vary with characteristics of regression test suites. One such characteristic involves the way in which test inputs are composed into test cases within a test suite. This article reports the results of controlled experiments examining the

effects of two factors in test suite composition---test suite granularity and test input grouping---on the costs and benefits of several regression-testing-related methodologies: retest-all, regression test selection, test suite reduction, and test case prioritization. These experiments consider the application of several specific techniques, from each of these methodologies, across ten releases each of two substantial software systems, using seven levels of test suite granularity and two types of test input grouping. The effects of granularity, technique, and grouping on the cost and fault-detection effectiveness of regression testing under the given methodologies are analyzed. This analysis shows that test suite granularity significantly affects several cost-benefit factors for the methodologies considered, while test input grouping has limited effects. Further, the results expose essential tradeoffs affecting the relationship between test suite design and regression testing cost-effectiveness, with several implications for practice. Reference Type: Journal Article Record Number: 85 Author: B. G. Ryder, M. L. Soffa and M. Burnett Year: 2005 Title: The impact of software engineering research on modern programming languages Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 4 Pages: 431-477 Short Title: The impact of software engineering research on modern programming languages ISSN: 1049-331X DOI: 10.1145/1101815.1101818 Legal Note: 1101818 Abstract: Software engineering research and programming language design have enjoyed a symbiotic relationship, with traceable impacts since the 1970s, when these areas were first distinguished from one another. This report documents this relationship by focusing on several major features of current programming languages: data and procedural abstraction, types, concurrency, exceptions, and visual programming mechanisms. The influences are determined by tracing references in publications in both fields, obtaining oral histories from language designers delineating influences on them, and tracking cotemporal research trends and ideas as demonstrated by workshop topics, special issue publications, and invited talks in the two fields. In some cases there is conclusive data supporting influence. In other cases, there are circumstantial arguments (i.e., cotemporal ideas) that indicate influence. Using this approach, this study provides evidence of the impact of software engineering research on modern programming language design and documents the close relationship between these two fields. Reference Type: Journal Article

Record Number: 136 Author: M. Schrefl and M. Stumptner Year: 2002 Title: Behavior-consistent specialization of object life cycles Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 1 Pages: 92-148 Short Title: Behavior-consistent specialization of object life cycles ISSN: 1049-331X DOI: 10.1145/504087.504091 Legal Note: 504091 Abstract: Object-oriented design methodologies represent the behavior of instances of an object type not merely by a set of operations, but also by providing an overall description on how instances evolve over time. Such a description is often referred to as "object life cycle." Object-oriented systems organize object types in hierarchies in which subtypes inherit and specialize the structure and behavior of their supertypes. Past experience has shown that unrestricted use of inheritance mechanisms leads to system architectures that are hard to understand and to maintain, since arbitrary differences between supertype and subtype are possible. Evidently, this is not a desirable state of affairs and the behavior of a subtype should specialize the behavior of its supertype according to some clearly defined consistency criteria. Such criteria have been formulated in terms of type systems for semantic data models and object-oriented programming languages. But corresponding criteria for the specialization of object life cycles have so far not been thoroughly investigated. This paper defines such criteria in the realm of Object Behavior Diagrams, which have been originally developed for the design of object-oriented databases. Its main contributions are necessary and sufficient rules for checking behavior consistency between object life cycles of object types in specialization hierarchies with multiple inheritance. Reference Type: Journal Article Record Number: 282 Author: Shahar Maoz, David Harel and A. Kleinbort Year: 2011 Title: A Compiler for Multimodal Scenarios: Transforming LSCs into AspectJ Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 4 Short Title: A Compiler for Multimodal Scenarios: Transforming LSCs into AspectJ ISSN: 1049-331X DOI: 10.1145/2000799.2000804 Keywords: Aspect oriented programming Code generation design Abstract: We exploit the main similarity between the aspect-oriented programming paradigm and the inter-object, scenario-based approach to specification, in order to

construct a new way of executing systems based on the latter. Specifically, we transform multimodal scenario-based specifications, given in the visual language of live sequence charts (LSC), into what we call scenario aspects, implemented in AspectJ. Unlike synthesis approaches, which attempt to take the inter-object scenarios and construct intra-object state-based per-object specifications or a single controller automaton, we follow the ideas behind the LSC play-out algorithm to coordinate the simultaneous monitoring and direct execution of the specified scenarios. Thus, the structure of the specification is reflected in the structure of the generated code; the high-level inter-object requirements and their structure are not lost in the translation. The transformation/compilation scheme is fully implemented in a UML2-compliant tool we term the S2A compiler (for Scenarios to Aspects), which provides full code generation of reactive behavior from inter-object multimodal scenarios. S2A supports advanced scenario-based programming features, such as multiple instances and exact and symbolic parameters. We demonstrate our work with an application whose inter-object behaviors are specified using LSCs. We discuss advantages and challenges of the compilation scheme in the context of the more general vision of scenario-based programming. Notes: Software Construction Tools > Compilers and code generators Reference Type: Journal Article Record Number: 50 Author: S. F. Siegel, A. Mironova, G. S. Avrunin and L. A. Clarke Year: 2008 Title: Combining symbolic execution with model checking to verify parallel numerical programs Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-34 Short Title: Combining symbolic execution with model checking to verify parallel numerical programs ISSN: 1049-331X DOI: 10.1145/1348250.1348256 Legal Note: 1348256 Abstract: We present a method to verify the correctness of parallel programs that perform complex numerical computations, including computations involving floating-point arithmetic. This method requires that a sequential version of the program be provided, to serve as the specification for the parallel one. The key idea is to use model checking, together with symbolic execution, to establish the equivalence of the two programs. In this approach the path condition from symbolic execution of the sequential program is used to constrain the search through the parallel program. To handle floating-point operations, three different types of equivalence are supported. Several examples are presented, demonstrating the approach and actual errors that were found. Limitations and directions for future research are also described.

Reference Type: Journal Article Record Number: 284 Author: Simon Miles, Paul Groth, Steve Munroe and L. Moreau Year: 2011 Title: PrIMe: A methodology for developing provenance-aware applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 3 Short Title: PrIMe: A methodology for developing provenance-aware applications ISSN: 1049-331X DOI: 10.1145/2000791.2000792 Keywords: Design methodology Provenance Abstract: PrIMe is a methodology for adapting applications to make them provenance-aware, that is, to enable them to document their execution in order to answer provenance questions. A provenance-aware application can satisfy provenance use cases, where a use case is a description of a scenario in which a user interacts with a system by performing particular functions on that system, and a provenance use case requires documentation of past processes in order to achieve the functions. In this report, PrIMe is described. In order to illustrate the steps necessary to make an application provenance aware, an Organ Transplant Management example application is used. Reference Type: Journal Article Record Number: 3 Author: P. V. Singh Year: 2010 Title: The small-world effect: The influence of macro-level properties of developer collaboration networks on open-source project success Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 2 Pages: 1-27 Short Title: The small-world effect: The influence of macro-level properties of developer collaboration networks on open-source project success ISSN: 1049-331X DOI: 10.1145/1824760.1824763 Legal Note: 1824763 Abstract: In this study we investigate the impact of community-level networks (relationships that exist among developers in an OSS community) on the productivity of member developers. Specifically, we argue that OSS community networks characterized by small-world properties would positively influence the productivity of the member developers by providing them with speedy and reliable access to more quantity and variety of information and knowledge resources. Specific hypotheses are developed

and tested using longitudinal data on a large panel of 4,279 projects from 15 different OSS communities hosted at Sourceforge. Our results suggest that significant variation exists in small-world properties of OSS communities at Sourceforge. After accounting for project, foundry, and time-specific observed and unobserved effects, we found a statistically significant relationship between small-world properties of a community and the technical and commercial success of the software produced by its members. In contrast to the findings of prior research, we also found the lack of a significant relationship between closeness and betweenness centralities of the project teams and their success. These results were robust to a number of controls and model specifications. Reference Type: Journal Article Record Number: 75 Author: A. Sinha and C. Smidts Year: 2006 Title: HOTTest: A model-based test design technique for enhanced testing of domain-specific applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 3 Pages: 242-278 Short Title: HOTTest: A model-based test design technique for enhanced testing of domain-specific applications ISSN: 1049-331X DOI: 10.1145/1151695.1151697 Legal Note: 1151697 Abstract: Model-based testing is an effective black-box test generation technique for applications. Existing model-based testing techniques, however, fail to capture implicit domain-specific properties, as they overtly rely on software artifacts such as design documents, requirement specifications, etc., for completeness of the test model. This article presents a technique, HOTTest, which uses a strongly typed domain-specific language to model the system under test. This allows extraction of type-related system invariants, which can be related to various domain-specific properties of the application. Thus, using HOTTest, it is possible to automatically extract and embed domain-specific requirements into the test models. In this article we describe HOTTest, its principles and methodology, and how it is possible to relate domain-specific properties to specific type constraints. HOTTest is described using the example of HaskellDB, which is a Haskell-based embedded domain-specific language for relational databases. We present an example application of the technique and compare the results to some other commonly used Model-based test automation techniques like ASML-based testing, UML-based testing, and EFSM-based testing. Reference Type: Journal Article Record Number: 144

Author: S. Sinha, M. J. Harrold and G. Rothermel Year: 2001 Title: Interprocedural control dependence Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 2 Pages: 209-254 Short Title: Interprocedural control dependence ISSN: 1049-331X DOI: 10.1145/367008.367022 Legal Note: 367022 Abstract: Program-dependence information is useful for a variety of applications, such as software testing and maintenance tasks, and code optimization. Properly defined, control and data dependences can be used to identify semantic dependences. To function effectively on whole programs, tools that utilize dependence information require information about interprocedural dependences: dependences that are identified by analyzing the interactions among procedures. Many techniques for computing interprocedural data dependences exist; however, virtually no attention has been paid to interprocedural control dependence. Analysis techniques that fail to account for interprocedural control dependences can suffer unnecessary imprecision and loss of safety. This article presents a definition of interprocedural control dependence that supports the relationship of control and data dependence to semantic dependence. The article presents two approaches for computing interprocedural control dependences, and empirical results pertaining to the use of those approaches. Reference Type: Journal Article Record Number: 131 Author: Y. Smaragdakis and D. Batory Year: 2002 Title: Mixin layers: an object-oriented implementation technique for refinements and collaboration-based designs Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 2 Pages: 215-255 Short Title: Mixin layers: an object-oriented implementation technique for refinements and collaboration-based designs ISSN: 1049-331X DOI: 10.1145/505145.505148 Legal Note: 505148 Abstract: A "refinement" is a functionality addition to a software project that can affect multiple dispersed implementation entities (functions, classes, etc.). In this paper, we examine large-scale refinements in terms of a fundamental object-oriented technique called collaboration-based design. We explain how collaborations can be expressed in existing programming languages or can be supported with new language constructs

(which we have implemented as extensions to the Java language). We present a specific expression of large-scale refinements called mixin layers, and demonstrate how it overcomes the scalability difficulties that plagued prior work. We also show how we used mixin layers as the primary implementation technique for building an extensible Java compiler, JTS. Reference Type: Journal Article Record Number: 73 Author: G. Snelting, T. Robschink and J. Krinke Year: 2006 Title: Efficient path conditions in dependence graphs for software safety analysis Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 4 Pages: 410-457 Short Title: Efficient path conditions in dependence graphs for software safety analysis ISSN: 1049-331X DOI: 10.1145/1178625.1178628 Legal Note: 1178628 Abstract: A new method for software safety analysis is presented which uses program slicing and constraint solving to construct and analyze path conditions, conditions defined on a program's input variables which must hold for information flow between two points in a program. Path conditions are constructed from subgraphs of a program's dependence graph, specifically, slices and chops. The article describes how constraint solvers can be used to determine if a path condition is satisfiable and, if so, to construct a witness for a safety violation, such as an information flow from a program point at one security level to another program point at a different security level. Such a witness can prove useful in legal matters. The article reviews previous research on path conditions in program dependence graphs; presents new extensions of path conditions for arrays, pointers, abstract data types, and multithreaded programs; presents new decomposition formulae for path conditions; demonstrates how interval analysis and BDDs (binary decision diagrams) can be used to reduce the scalability problem for path conditions; and presents case studies illustrating the use of path conditions in safety analysis. Applying interval analysis and BDDs is shown to overcome the combinatorial explosion that can occur in constructing path conditions. Case studies and empirical data demonstrate the usefulness of path conditions for analyzing practical programs, in particular, how illegal influences on safety-critical programs can be discovered and analyzed. Reference Type: Journal Article Record Number: 82 Author: C. Snook and M. Butler Year: 2006 Title: UML-B: Formal modeling and design aided by UML

Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 1 Pages: 92-122 Short Title: UML-B: Formal modeling and design aided by UML ISSN: 1049-331X DOI: 10.1145/1125808.1125811 Legal Note: 1125811 Abstract: The emergence of the UML as a de facto standard for object-oriented modeling has been mirrored by the success of the B method as a practically useful formal modeling technique. The two notations have much to offer each other. The UML provides an accessible visualization of models, facilitating communication of ideas, but lacks formal, precise semantics. B, on the other hand, has the precision to support animation and rigorous verification but requires significant effort in training to overcome the mathematical barrier that many practitioners perceive. We utilize a derivation of the B notation as an action and constraint language for the UML and define the semantics of UML entities via a translation into B. Through the UML-B profile we provide specializations of UML entities to support model refinement. The result is a formally precise variant of UML that can be used for refinement-based, object-oriented behavioral modeling. The design of UML-B has been guided by industrial applications. Reference Type: Journal Article Record Number: 97 Author: I. Sommerville and J. Ransom Year: 2005 Title: An empirical study of industrial requirements engineering process assessment and improvement Journal: ACM Trans. Softw. Eng. Methodol. Volume: 14 Issue: 1 Pages: 85-117 Short Title: An empirical study of industrial requirements engineering process assessment and improvement ISSN: 1049-331X DOI: 10.1145/1044834.1044837 Legal Note: 1044837 Abstract: This article describes an empirical study in industry of requirements engineering process maturity assessment and improvement. Our aims were to evaluate a requirements engineering process maturity model and to assess whether improvements in requirements engineering process maturity lead to business improvements. We first briefly describe the process maturity model that we used and modifications to this model to accommodate process improvement. We present initial maturity assessment results for nine companies, describe how process improvements were selected and present data on how RE process maturity changed after these improvements were introduced. We discuss how business benefits were assessed and the difficulties of relating process

maturity improvements to these business benefits. All companies reported business benefits and satisfaction with their participation in the study. Our conclusions are that the RE process maturity model is useful in supporting maturity assessment and in identifying process improvements and there is some evidence to suggest that process improvement leads to business benefits. However, whether these business benefits were a consequence of the changes to the RE process or whether these benefits resulted from side-effects of the study such as greater self-awareness of business processes remains an open question. Reference Type: Journal Article Record Number: 5 Author: F. Steimann, T. Pawlitzki, S. Apel and C. Kästner Year: 2010 Title: Types and modularity for implicit invocation with implicit announcement Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 1 Pages: 1-43 Short Title: Types and modularity for implicit invocation with implicit announcement ISSN: 1049-331X DOI: 10.1145/1767751.1767752 Legal Note: 1767752 Abstract: Through implicit invocation, procedures are called without explicitly referencing them. Implicit announcement adds to this implicitness by not only keeping implicit which procedures are called, but also where or when: under implicit invocation with implicit announcement, the call site contains no signs of that, or what, it calls. Recently, aspect-oriented programming has popularized implicit invocation with implicit announcement as a possibility to separate concerns that lead to interwoven code if conventional programming techniques are used. However, as has been noted elsewhere, as currently implemented it establishes strong implicit dependencies between components, hampering independent software development and evolution. To address this problem, we present a type-based modularization of implicit invocation with implicit announcement that is inspired by how interfaces and exceptions are realized in Java. By extending an existing compiler and by rewriting several programs to make use of our proposed language constructs, we found that the imposed declaration clutter tends to be moderate; in particular, we found that, for general applications of implicit invocation with implicit announcement, fears that programs utilizing our form of modularization become unreasonably verbose are unjustified. Reference Type: Journal Article Record Number: 2 Author: K. Sullivan, W. G. Griswold, H. Rajan, Y. Song, Y. Cai, M. Shonle and N. Tewari Year: 2010 Title: Modular aspect-oriented design with XPIs

Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 2 Pages: 1-42 Short Title: Modular aspect-oriented design with XPIs ISSN: 1049-331X DOI: 10.1145/1824760.1824762 Legal Note: 1824762 Abstract: The emergence of aspect-oriented programming (AOP) languages has provided software designers with new mechanisms and strategies for decomposing programs into modules and composing modules into systems. What we do not yet fully understand is how best to use such mechanisms consistently with common modularization objectives such as the comprehensibility of programming code, its parallel development, dependability, and ease of change. The main contribution of this work is a new form of information-hiding interface for AOP that we call the crosscut programming interface, or XPI. XPIs abstract crosscutting behaviors and make these abstractions explicit. XPIs can be used, albeit with limited enforcement of interface rules, with existing AOP languages, such as AspectJ. To evaluate our notion of XPIs, we have applied our XPI-based design methodology to a medium-sized network overlay application called Hypercast. A qualitative and quantitative analysis of existing AO design methods and our XPI-based design method shows that our approach produces improvements in program comprehensibility, in opportunities for parallel development, and in the ease with which code can be developed and changed. Reference Type: Journal Article Record Number: 294 Author: Susan Elliott Sim, Medha Umarji, Sukanya Ratanotayanon and C. V. Lopes Year: 2011 Title: How Well Do Search Engines Support Code Retrieval on the Web? Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 1 Short Title: How Well Do Search Engines Support Code Retrieval on the Web? ISSN: 1049-331X DOI: 10.1145/2063239.2063243 Keywords: Design Empirical study Human factors Languages open source Reusable software Abstract: Software developers search the Web for various kinds of source code for diverse reasons. In a previous study, we found that searches varied along two dimensions: the size of the search target (e.g., block, subsystem, or system) and the motivation for the search (e.g., reference example or as-is reuse). Would each of these kinds of searches require different search technologies? To answer this question, we

conducted an experiment with 36 participants to evaluate three diverse approaches (general-purpose information retrieval, source code search, and component reuse), as represented by five Web sites (Google, Koders, Krugle, Google Code Search, and SourceForge). The independent variables were search engine, size of search target, and motivation for search. The dependent variable was the participants' judgement of the relevance of the first ten hits. We found that it was easier to find reference examples than components for as-is reuse and that participants obtained the best results using a general-purpose information retrieval site. However, we also found an interaction effect: code-specific search engines worked better in searches for subsystems, but Google worked better on searches for blocks. These results can be used to guide the creation of new tools for retrieving source code from the Web. Reference Type: Journal Article Record Number: 15 Author: H. B. K. Tan, Y. Zhao and H. Zhang Year: 2009 Title: Conceptual data model-based software size estimation for information systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 2 Pages: 1-37 Short Title: Conceptual data model-based software size estimation for information systems ISSN: 1049-331X DOI: 10.1145/1571629.1571630 Legal Note: 1571630 Abstract: Size estimation plays a key role in effort estimation, which has a crucial impact on software projects in the software industry. Some information required by existing software sizing methods is difficult to predict in the early stage of software development. A conceptual data model is widely used in the early stage of requirements analysis for information systems. Lines of code (LOC) is a commonly used software size measure. This article proposes a novel LOC estimation method for information systems from their conceptual data models by using a multiple linear regression model. We have validated the proposed method using samples from both the software industry and open-source systems. Reference Type: Journal Article Record Number: 71 Author: P. Thiran, J.-L. Hainaut, G.-J. Houben and D. Benslimane Year: 2006 Title: Wrapper-based evolution of legacy information systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 4

Pages: 329-359 Short Title: Wrapper-based evolution of legacy information systems ISSN: 1049-331X DOI: 10.1145/1178625.1178626 Legal Note: 1178626 Abstract: System evolution most often implies the integration of legacy components, such as databases, with newly developed ones, leading to mixed architectures that suffer from severe heterogeneity problems. For instance, incorporating a new program in a legacy database application can create an integrity mismatch, since the database model and the program data view can be quite different (e.g. standard file model versus OO model). In addition, neither the legacy DBMS (too weak to address integrity issues correctly) nor the new program (that relies on data server responsibility) correctly copes with data integrity management. The component that can reconcile these mismatched subsystems is the R/W wrapper, which allows any client program to read, but also to update the legacy data, while controlling the integrity constraints that are ignored by the legacy DBMS. This article describes a generic, technology-independent R/W wrapper architecture, a methodology for specifying such wrappers in a disciplined way, and a CASE tool for generating most of the corresponding code. The key concept is that of an implicit construct, which is a structure or a constraint that has not been declared in the database, but which is controlled by the legacy application code. The implicit constructs are elicited through reverse engineering techniques, and then translated into validation code in the wrapper. For instance, a wrapper can be generated for a collection of COBOL files in order to allow external programs to access them through a relational, object-oriented or XML interface, while offering referential integrity control. The methodology is based on a transformational approach that provides a formal way to build the wrapper schema and to specify inter-schema mappings. Reference Type: Journal Article Record Number: 297 Author: E. Tilevich and S. Gopal Year: 2011 Title: Expressive and Extensible Parameter Passing for Distributed Object Systems Journal: ACM Trans. Softw. Eng. Methodol. Volume: 21 Issue: 1 Short Title: Expressive and Extensible Parameter Passing for Distributed Object Systems ISSN: 1049-331X DOI: 10.1145/2063239.2063242 Keywords: Aspect-oriented programming Automatic programming Declarative programming Abstract: In modern distributed object systems, reference parameters to a remote method are passed according to their runtime type. This design choice limits the expressiveness, readability, and maintainability of distributed applications. Further, to

extend the built-in set of parameter passing semantics of a distributed object system, the programmer has to understand and modify the underlying middleware implementation. To address these design shortcomings, this article presents (i) a declarative and extensible approach to remote parameter passing that decouples parameter passing semantics from parameter types, and (ii) a plugin-based framework, DeXteR, which enables the programmer to extend the built-in set of remote parameter passing semantics, without having to understand or modify the underlying middleware implementation. DeXteR treats remote parameter passing as a distributed cross-cutting concern and uses aspect-oriented and generative techniques. DeXteR enables the implementation of different parameter passing semantics as reusable application-level plugins, applicable to application, system, and third-party library classes. The expressiveness, flexibility, and extensibility of the approach are validated by adding several nontrivial remote parameter passing semantics (i.e., copy-restore, lazy, streaming) to Java Remote Method Invocation (RMI) as DeXteR plugins. Notes: Software Design Tools Reference Type: Journal Article Record Number: 18 Author: E. Tilevich and Y. Smaragdakis Year: 2009 Title: J-Orchestra: Enhancing Java programs with distribution capabilities Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 1 Pages: 1-40 Short Title: J-Orchestra: Enhancing Java programs with distribution capabilities ISSN: 1049-331X DOI: 10.1145/1555392.1555394 Legal Note: 1555394 Abstract: J-Orchestra is a system that enhances centralized Java programs with distribution capabilities. Operating at the bytecode level, J-Orchestra transforms a centralized Java program (i.e., running on a single Java Virtual Machine (JVM)) into a distributed one (i.e., running across multiple JVMs). This transformation effectively separates distribution concerns from the core functionality of a program. J-Orchestra follows a semiautomatic transformation process. Through a GUI, the user selects program elements (at class granularity) and assigns them to network locations. Based on the user's input, the J-Orchestra backend automatically partitions the program through compiler-level techniques, without changes to the JVM or to the Java Runtime Environment (JRE) classes. By means of bytecode engineering and code generation, J-Orchestra substitutes method calls with remote method calls, direct object references with proxy references, etc. It also translates Java language features (e.g., static methods and fields, inheritance, inner classes, new object construction, etc.) for efficient distributed execution. We detail the main technical issues that J-Orchestra addresses, including its

mechanism for program transformation in the presence of unmodifiable code (e.g., in JRE classes) and the translation of concurrency and synchronization constructs to work correctly over the network. We further discuss a case study of transforming a large, commercial, third-party application for efficient execution in a client-server environment and outline the architectural characteristics of centralized programs that are amenable to automated distribution with J-Orchestra. Notes: Software Construction Tools > Compilers & Code Generators Reference Type: Journal Article Record Number: 145 Author: F. Tip and T. B. Dinesh Year: 2001 Title: A slicing-based approach for locating type errors Journal: ACM Trans. Softw. Eng. Methodol. Volume: 10 Issue: 1 Pages: 5-55 Short Title: A slicing-based approach for locating type errors ISSN: 1049-331X DOI: 10.1145/366378.366379 Legal Note: 366379 Abstract: The effectiveness of a type-checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice Pe of the program P being type-checked. We show that this approach yields highly accurate positional information: Pe is a program that contains precisely those program constructs in P that caused error e. Semantically, we have the interesting property that type-checking Pe is guaranteed to produce the same error e. Our approach is completely language-independent and has been implemented for a significant subset of Pascal. We also report on experiments with object-oriented type systems, and with a subset of ML. Reference Type: Journal Article Record Number: 51 Author: A. Tiwana Year: 2008 Title: Impact of classes of development coordination tools on software development performance: A multinational empirical study Journal: ACM Trans. Softw. Eng. Methodol. Volume: 17 Issue: 2 Pages: 1-47 Short Title: Impact of classes of development coordination tools on software development performance: A multinational empirical study ISSN: 1049-331X

DOI: 10.1145/1348250.1348257 Legal Note: 1348257 Abstract: Although a diverse variety of software development coordination tools are widely used in practice, considerable debate surrounds their impact on software development performance. No large-scale field research has systematically examined their impact on software development performance. This paper reports the results of a multinational field study of software projects in 209 software development organizations to empirically examine the influence of six key classes of development coordination tools on the efficiency (reduction of development rework, budget compliance) and effectiveness (defect reduction) of software development performance. Based on an in-depth field study, the article conceptualizes six holistic classes of development coordination tools. The results provide nuanced insights (some counter to prevailing beliefs) into the relationships between the use of various classes of development coordination tools and software development performance. The overarching finding is that the performance benefits of development coordination tools are contingent on the salient types of novelty in a project. The dimension of development performance (efficiency or effectiveness) that each class of tools is associated with varies systematically with whether a project involves conceptual novelty, process novelty, multidimensional novelty (both process and conceptual novelty), or neither. Another noteworthy insight is that the use of some classes of tools introduces an efficiency-effectiveness tradeoff. Collectively, the findings are among the first to offer empirical support for the varied performance impacts of various classes of development coordination tools and have important implications for software development practice. The paper also identifies several promising areas for future research. Reference Type: Journal Article Record Number: 134 Author: T. Tsuchiya and T. Kikuno Year: 2002 Title: On fault classes and error detection capability of specification-based testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 11 Issue: 1 Pages: 58-62 Short Title: On fault classes and error detection capability of specification-based testing ISSN: 1049-331X DOI: 10.1145/504087.504089 Legal Note: 504089 Abstract: In a previous paper, Kuhn [1999] showed that faults in Boolean specifications constitute a hierarchy with respect to detectability, and drew the conclusion that missing condition faults should be hypothesized to generate tests. However, this conclusion was premature, since the relationships between missing condition faults and faults in other classes have not been sufficiently analyzed. In this note, we investigate such relationships, aiming to complement the work of Kuhn. As a result, we obtain an

extended hierarchy of fault classes and reach a different conclusion. Reference Type: Journal Article Record Number: 106 Author: S. Uchitel, J. Kramer and J. Magee Year: 2004 Title: Incremental elaboration of scenario-based specifications and behavior models using implied scenarios Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 1 Pages: 37-85 Short Title: Incremental elaboration of scenario-based specifications and behavior models using implied scenarios ISSN: 1049-331X DOI: 10.1145/1005561.1005563 Legal Note: 1005563 Abstract: Behavior modeling has proved to be successful in helping uncover design flaws of concurrent and distributed systems. Nevertheless, it has not had a widespread impact on practitioners because model construction remains a difficult task and because the benefits of behavior analysis appear at the end of the model construction effort. In contrast, scenario-based specifications have a wide acceptance in industry and are well suited for developing first approximations of intended behavior; however, they are still maturing with respect to rigorous semantics and analysis tools. This article proposes a process for elaborating system behavior that exploits the potential benefits of behavior modeling and scenario-based specifications yet ameliorates their shortcomings. The concept that drives the elaboration process is that of implied scenarios. Implied scenarios identify gaps in scenario-based specifications that arise from specifying the global behavior of a system that will be implemented component-wise. They are the result of a mismatch between the behavioral and architectural aspects of scenario-based specifications. Due to the partial nature of scenario-based specifications, implied scenarios need to be validated as desired or undesired behavior. The scenario specifications are then updated accordingly with new positive or negative scenarios. By iteratively detecting and validating implied scenarios, it is possible to incrementally elaborate the behavior described both in the scenario-based specification and models. The proposed elaboration process starts with a message sequence chart (MSC) specification that includes basic, high-level and negative MSCs. Implied scenario detection is performed by synthesis and automated analysis of behavior models. The final outcome consists of four artifacts: (1) an MSC specification that has been evolved from its original form to cover important aspects of the concurrent nature of the system that were under-specified or absent in the original specification, and (2) a behavior model that captures the component structure of the system, which, combined with (3) a constraint model and (4) a property model, provides the basis for modeling and reasoning about system design.

Reference Type: Journal Article Record Number: 107 Author: N. Venkatasubramanian, C. Talcott and G. A. Agha Year: 2004 Title: A formal model for reasoning about adaptive QoS-enabled middleware Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 1 Pages: 86-147 Short Title: A formal model for reasoning about adaptive QoS-enabled middleware ISSN: 1049-331X DOI: 10.1145/1005561.1005564 Legal Note: 1005564 Abstract: Systems that provide distributed multimedia services are subject to constant evolution; customizable middleware is required to effectively manage this change. Middleware services for resource management execute concurrently with each other, and with application activities, and can, therefore, potentially interfere with each other. To ensure cost-effective QoS in distributed multimedia systems, safe composability of resource management services is essential. In this article, we present a meta-architectural framework, the Two-Level Actor Model (TLAM), for customizable QoS-based middleware, based on the actor model of concurrent active objects. Using TLAM, a semantic model for specifying and reasoning about components of open distributed systems, we show how a QoS brokerage service can be used to coordinate multimedia resource management services in a safe, flexible, and efficient manner. In particular, we show a system in which the multimedia actor behaviors satisfy the specified requirements and provide the required multimedia service. The behavior specification leaves open the possibility of a variety of algorithms for resource management. Furthermore, constraints are identified that are sufficient to guarantee noninterference among the multiple broker resource management services, as well as providing guidelines for the safe composition of additional services. Reference Type: Journal Article Record Number: 57 Author: G. Wassermann, C. Gould, Z. Su and P. Devanbu Year: 2007 Title: Static checking of dynamically generated queries in database applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 4 Pages: 14 Short Title: Static checking of dynamically generated queries in database applications ISSN: 1049-331X DOI: 10.1145/1276933.1276935 Legal Note: 1276935

Abstract: Many data-intensive applications dynamically construct queries in response to client requests and execute them. Java servlets, for example, can create strings that represent SQL queries and then send the queries, using JDBC, to a database server for execution. The servlet programmer enjoys static checking via Java's strong type system. However, the Java type system does little to check for possible errors in the dynamically generated SQL query strings. Thus, a type error in a generated selection query (e.g., comparing a string attribute with an integer) can result in an SQL runtime exception. Currently, such defects must be rooted out through careful testing, or (worse) might be found by customers at runtime. In this article, we present a sound, static program analysis technique to verify that dynamically generated query strings do not contain type errors. We describe our analysis technique and provide soundness results for our static analysis algorithm. We also describe the details of a prototype tool based on the algorithm and present several illustrative defects found in senior software-engineering student-team projects, online tutorial examples, and a real-world purchase order system written by one of the authors. Reference Type: Journal Article Record Number: 12 Author: J. Whittle and P. K. Jayaraman Year: 2010 Title: Synthesizing hierarchical state machines from expressive scenario descriptions Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 3 Pages: 1-45 Short Title: Synthesizing hierarchical state machines from expressive scenario descriptions ISSN: 1049-331X DOI: 10.1145/1656250.1656252 Legal Note: 1656252 Abstract: There are many examples in the literature of algorithms for synthesizing state machines from scenario-based models. The motivation for these is to automate the transition from scenario-based requirements to early behavioral design models. To date, however, these synthesis algorithms have tended to generate flat state machines, which can be difficult to understand or adapt for practical systems. One of the reasons for this is that relationships between scenarios are often not taken into account during synthesis, either because the relationships are not explicitly defined or because the synthesis algorithms are not sophisticated enough to cope with them. If relationships are not considered, it is impossible for a synthesis algorithm to know, for example, where one scenario stops and another continues. Furthermore, the lack of relationships makes it difficult to introduce structure into the generated state machines. With the introduction of interaction overview diagrams (IODs) in UML 2.0, relationships such as continuation and concurrency can now be specified between scenarios in a way that conforms to the UML standard. But synthesis algorithms do not currently exist that take into account all of these relationships. This article presents a novel synthesis algorithm for an extended

version of interaction overview diagrams. This algorithm takes into account not only continuation and concurrency, but also preemption, suspension and the notion of a negative scenario. Furthermore, the synthesis algorithm generates well-structured state machines. These state machines are executable and can therefore be used to validate the scenarios. The hierarchy generated aids readability, and so the state machines are more amenable to subsequent design steps. Our IOD extensions have a formal semantics and are supported by a synthesis and execution tool, UCSIM, which is currently provided as a plug-in to IBM Rational Software Modeler. Notes: Software Requirements Tools > Requirement Modeling Tools Reference Type: Journal Article Record Number: 69 Author: Q. Xie and A. M. Memon Year: 2007 Title: Designing and comparing automated test oracles for GUI-based software applications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 16 Issue: 1 Pages: 4 Short Title: Designing and comparing automated test oracles for GUI-based software applications ISSN: 1049-331X DOI: 10.1145/1189748.1189752 Legal Note: 1189752 Abstract: Test designers widely believe that the overall effectiveness and cost of software testing depend largely on the type and number of test cases executed on the software. This article shows that the test oracle, a mechanism that determines whether the software executed correctly for a test case, also significantly impacts the fault detection effectiveness and cost of a test case. Graphical user interfaces (GUIs), which have become ubiquitous for interacting with today's software, have created new challenges for test oracle development. Test designers manually assert the expected values of specific properties of certain GUI widgets in each test case; during test execution, these assertions are used as test oracles to determine whether the GUI executed correctly. Since a test case for a GUI is a sequence of events, a test designer must decide: (1) what to assert; and (2) how frequently to check an assertion, for example, after each event in the test case or after the entire test case has completed execution. Variations of these two factors significantly impact the fault-detection ability and cost of a GUI test case. A technique to declaratively specify different types of automated GUI test oracles is described. Six instances of test oracles are developed and compared in an experiment on four software systems. The results show that test oracles do affect the fault detection ability of test cases in different and interesting ways: (1) Test cases significantly lose their fault detection ability when using weak test oracles; (2) in many cases, invoking a thorough oracle at the end of test case execution yields the best cost-benefit ratio; (3) certain test cases detect faults only if the

oracle is invoked during a small window of opportunity during test execution; and (4) using thorough and frequently-executing test oracles can compensate for not having long test cases. Reference Type: Journal Article Record Number: 32 Author: Q. Xie and A. M. Memon Year: 2008 Title: Using a pilot study to derive a GUI model for automated testing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 18 Issue: 2 Pages: 1-35 Short Title: Using a pilot study to derive a GUI model for automated testing ISSN: 1049-331X DOI: 10.1145/1416563.1416567 Legal Note: 1416567 Abstract: Graphical user interfaces (GUIs) are one of the most commonly used parts of today's software. Despite their ubiquity, testing GUIs for functional correctness remains an understudied area. A typical GUI gives many degrees of freedom to an end-user, leading to an enormous input event interaction space that needs to be tested. GUI test designers generate and execute test cases (modeled as sequences of user events) to traverse its parts; targeting a subspace in order to maximize fault detection is a nontrivial task. In this vein, in previous work, we used informal GUI code examination and personal intuition to develop an event-interaction graph (EIG). In this article we empirically derive the EIG model via a pilot study, and the resulting EIG validates our intuition used in previous work; the empirical derivation process also allows for model evolution as our understanding of GUI faults improves. Results of the pilot study show that events interact in complex ways; a GUI's response to an event may vary depending on the context established by preceding events and their execution order. The EIG model helps testers to understand the nature of interactions between GUI events when executed in test cases and why certain events detect faults, so that they can better traverse the event space. New test adequacy criteria are defined for the EIG; new algorithms use these criteria and EIG to systematically generate test cases that are shown to be effective on four fielded open-source applications. Reference Type: Journal Article Record Number: 308 Author: C. Xu, S. C. Cheung, W. K. Chan and C. Ye Year: 2010 Title: Partial constraint checking for context consistency in pervasive computing Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 3

Pages: 1-61 Short Title: Partial constraint checking for context consistency in pervasive computing ISSN: 1049-331X DOI: 10.1145/1656250.1656253 Legal Note: 1656253 Abstract: Pervasive computing environments typically change frequently in terms of available resources and their properties. Applications in pervasive computing use contexts to capture these changes and adapt their behaviors accordingly. However, contexts available to these applications may be abnormal or imprecise due to environmental noise. This may result in context inconsistencies, which imply that contexts conflict with each other. The inconsistencies may set such an application into a wrong state or lead the application to misadjust its behavior. It is thus desirable to detect and resolve the context inconsistencies in a timely way. One popular approach is to detect context inconsistencies when contexts breach certain consistency constraints. Existing constraint checking techniques recheck the entire expression of each affected consistency constraint upon context changes. When a changed context affects only a constraint's subexpression, rechecking the entire expression can adversely delay the detection of other context inconsistencies. This article proposes a rigorous approach to identifying the parts of previous checking results that are reusable without entire rechecking. We evaluated our work on the Cabot middleware through both simulation experiments and a case study. The experimental results show that our approach achieved more than a fifteenfold performance improvement in context inconsistency detection over conventional approaches. Reference Type: Journal Article Record Number: 112 Author: F. Zambonelli, N. R. Jennings and M. Wooldridge Year: 2003 Title: Developing multiagent systems: The Gaia methodology Journal: ACM Trans. Softw. Eng. Methodol. Volume: 12 Issue: 3 Pages: 317-370 Short Title: Developing multiagent systems: The Gaia methodology ISSN: 1049-331X DOI: 10.1145/958961.958963 Legal Note: 958963 Abstract: Systems composed of interacting autonomous agents offer a promising software engineering approach for developing applications in complex domains. However, this multiagent system paradigm introduces a number of new abstractions and design/development issues when compared with more traditional approaches to software development. Accordingly, new analysis and design methodologies, as well as new tools, are needed to effectively engineer such systems. Against this background, the contribution of this article is twofold. First, we synthesize and clarify the key abstractions of agent-based computing as they pertain to agent-oriented software

engineering. In particular, we argue that a multiagent system can naturally be viewed and architected as a computational organization, and we identify the appropriate organizational abstractions that are central to the analysis and design of such systems. Second, we detail and extend the Gaia methodology for the analysis and design of multiagent systems. Gaia exploits the aforementioned organizational abstractions to provide clear guidelines for the analysis and design of complex and open software systems. Two representative case studies are introduced to exemplify Gaia's concepts and to show its use and effectiveness in different types of multiagent system. Reference Type: Journal Article Record Number: 105 Author: P. Zave Year: 2004 Title: Address translation in telecommunication features Journal: ACM Trans. Softw. Eng. Methodol. Volume: 13 Issue: 1 Pages: 1-36 Short Title: Address translation in telecommunication features ISSN: 1049-331X DOI: 10.1145/1005561.1005562 Legal Note: 1005562 Abstract: Address translation causes a wide variety of interactions among telecommunication features. This article begins with a formal model of address translation and its effects, and with principles for understanding how features should interact in the presence of address translation. There is a simple and intuitive set of constraints on feature behavior so that features will interact according to the principles. This scheme (called "ideal address translation") has provable properties, is modular (explicit cooperation among features is not required), and supports extensibility (adding new features does not require changing old features). The article also covers reasoning in the presence of exceptions to the constraints, limitations of the theory, relation to real networks and protocols, and relation to other research. Reference Type: Journal Article Record Number: 79 Author: W. Zhao, L. Zhang, Y. Liu, J. Sun and F. Yang Year: 2006 Title: SNIAFL: Towards a static noninteractive approach to feature location Journal: ACM Trans. Softw. Eng. Methodol. Volume: 15 Issue: 2 Pages: 195-226 Short Title: SNIAFL: Towards a static noninteractive approach to feature location ISSN: 1049-331X

DOI: 10.1145/1131421.1131424 Legal Note: 1131424 Abstract: To facilitate software maintenance and evolution, a helpful step is to locate the features concerned in a particular maintenance task. In the literature, both dynamic and interactive approaches have been proposed for feature location. In this article, we present a static and noninteractive method for achieving this objective. The main idea of our approach is to use information retrieval (IR) technology to reveal the basic connections between features and computational units in the source code. Due to the imprecision of retrieved connections, we use a static representation of the source code named BRCG (branch-reserving call graph) to further recover both relevant and specific computational units for each feature. A premise of our approach is that programmers should use meaningful names as identifiers. We also performed an experimental study based on two real-world software systems to evaluate our approach. According to experimental results, our approach is quite effective in acquiring the relevant and specific computational units for most features. Reference Type: Journal Article Record Number: 312 Author: Zhenyu Chen, Tsong Yueh Chen and B. Xu Year: 2011 Title: A revisit of fault class hierarchies in general Boolean specifications Journal: ACM Trans. Softw. Eng. Methodol. Volume: 20 Issue: 3 Short Title: A revisit of fault class hierarchies in general Boolean specifications ISSN: 1049-331X DOI: 10.1145/2000791.2000797 Keywords: Fault-based testing Requirements/specifications Software/program verification Testing and debugging Abstract: Recently, Kapoor and Bowen [2007] have extended the work by Kuhn [1999], Tsuchiya and Kikuno [2002], and Lau and Yu [2005]. However, their proofs overlook the possibility that a mutant of the Boolean specifications under test may be equivalent to the original specification. Hence, each of their fault relationships is either incorrect or has an incorrect proof. In this article, we give counterexamples to the incorrect fault relationships and provide new proofs for the valid fault relationships. Furthermore, a co-stronger fault relation is introduced to establish a new fault class hierarchy for general Boolean specifications.
