
Power Tools: Fusing People, Processes, and Tools to Build Better Software

Justin Killian
Master of Software Engineering Program, Institute for Software Research, Carnegie Mellon University

Abstract
Now more than ever, teams must be able to estimate, plan, track, coordinate, communicate, and manage software development projects with greater accuracy, and they are turning toward CASE tools to help them achieve these goals. Oftentimes, however, teams select tools in the hope that they will miraculously cure all their development woes or tame their unwieldy development processes, and just as frequently they conclude that it takes a firmer commitment to improving the overall quality of software development, the first step of which is investing only in tools that fit the needs of the organization. The term "power tools" refers to the CASE tools that my personal experience has shown to be the most effective in improving and supplementing our team's processes, communication, and development and quality practices. This paper will lead you through selection processes, including our team's, and will focus on a subset of our final toolkit: the select few that were either a boon to the project or cost us time and effort with little resulting benefit. Although we did not know it at the time, the large amounts of data we collected about our processes would provide some interesting insights into the CASE tools and their effects on our studio project.

I. INTRODUCTION

The landscape of the software industry continues to evolve at an unprecedented rate, in its people, its processes, its projects, and its tools, and with that evolution come great challenges. Teams must be able to estimate, plan, track, coordinate, communicate, and manage software development projects with a reasonable level of consistency and predictability, and they are turning increasingly toward CASE tools to help them achieve these goals. Carefully investing in tools to support the development effort and improve quality throughout has made the use of tools almost a necessity in any software project. Oftentimes, however, teams select tools in the hope that they will miraculously cure all their development woes or tame their unwieldy development processes, and just as frequently they conclude that it requires much more than the installation of a specific software package to improve their situation. Brooks has gone as far as characterizing such cases as innocent and straightforward, but capable of becoming a monster of missed schedules, blown budgets, and flawed products [1]. Instead, these teams have introduced inefficiencies that hinder rather than improve the project. It takes a firm commitment to improving the overall quality of software development, the first step of which is investing only in tools that fit the needs of the organization. A careful and deliberate analysis must take place in order to determine a tool's fitness for purpose and, should it be found suitable, how it can best be integrated into the software development pipeline. The term "power tools" refers to the CASE tools that my personal experience has shown to be the most effective in improving and supplementing our team's processes, communication, and development and quality practices.

The studio environment, no doubt, is unlike any real-world scenario in that experimentation is actually encouraged rather than discouraged. We were given the liberty to learn new techniques and try new tools without opposition. Although time and effort may have been lost, there were few other consequences for making the wrong choice so long as our judgment going forward improved as a result. Because of those circumstances, our team sampled roughly 15 different tools in the process areas of project planning, integrated project management, process and product quality assurance, and configuration management.

A. The Ptolemy Project

The Bosch Research and Technology Center (RTC), in conjunction with the Ptolemy Project group at the University of California at Berkeley, tasked Team HandSimDroid with developing, as a proof of concept, a tool that would enable engineers in the field to execute, manipulate, and view the real-time output of Ptolemy simulations running on Android-powered handheld devices. In addition, because use cases differ between engineers, we extended the existing open-source project so that they could construct a customized user interface suited to their particular purpose. For Bosch RTC, this project is a result of the model-driven methodology it has adopted in designing and developing highly complex, precision-critical software for the embedded systems within automobiles. While this specific use case deals with automobiles, the research and capabilities that drove the project are universal and can be applied in many other domains. Ptolemy, as an open-source project with a rather liberal licensing agreement (BSD), allows for rapid prototyping for research endeavors within Bosch. And, because it is built entirely in Java and Android runs a Java variant, Android became the target platform. Were these goals to be achieved, similar capabilities might be built into Bosch's proprietary modeling tool, potentially lowering the time and monetary cost of designing and deploying software for the control systems in automobiles. For the Ptolemy Project group, this project explored the potential research applications of distributed simulations and provided a prototype for their efforts to build user interface design capabilities into Vergil, the desktop application for constructing Ptolemy models. They have worked to develop Ptolemy for a number of years, and it needs new capabilities. Fortunately, they have a substantial community of developers willing to contribute, a major factor in the project's success [2].

B. The Challenges

The project posed many challenges. Models developed in Ptolemy can become very complex and require large amounts of physical and computational resources, many of which are not present on the handheld devices available on the market today. This discovery significantly altered our design and led to a flurry of decisions around tools and technology. We could no longer focus on a system that operated on a single device in a predictable manner (effectively trusting that the majority of concurrency issues had already been sorted out in Ptolemy); instead, we had multiple devices that depended on one another, with potentially complex, time-critical interactions. Regression bugs and corner cases, something unit testing and static analysis helped us monitor, would have been incredibly hard to find and could have been extremely costly had we not focused on a continual testing strategy. On the development front, given the approximately 350 KLOC in the Ptolemy system, careful planning was required when making modifications. Over the years, Ptolemy has grown more and more complex, with contributions coming from the distributed open-source community. In order to maintain this level of quality and control over our software development processes and preserve the quality attributes of maintainability and extensibility, the team evaluated and implemented several tools, which are discussed later in this paper.

II. WHY TEAMS CHOOSE TOOLS

Teams typically implement tools to fill a void in, or to supplement, the development process. However, the extent to which these tools are evaluated, and the rigor applied in selecting the best tool, can vary greatly depending on the situation. Along those lines, the SEI has presented the concept of best fit vs. first fit as part of a technique for evaluating commercial off-the-shelf (COTS) products known as PECA. The method requires that a team engage in an evaluation with clearly established criteria and select software based on quantitative data and thorough analysis. As the SEI points out in regard to COTS product selection (although the principle is valid beyond just COTS products), the evaluations applied to products with low technical risk generally focus on low-impact criteria such as cost and speed of implementation [3]. Consequently, products that will have a pervasive impact or play a critical role in the project must be evaluated with a greater level of rigor (e.g., PECA). These processes enable teams to evaluate tools based on their fitness for purpose and potential benefits to productivity and the organization. Although the method was developed with commercial off-the-shelf products in mind, its principles can be applied to any software being incorporated into the development process.

A. Selection Process

In hindsight, our selection process fell more into the first-fit category, selecting tools merely to fill a given need. Our particular environment allowed us much more freedom in exploring the tools that were available, and in applying practices, than a real-world setting typically would. As a result, the selection process adopted by the team was more of a shotgun approach than the calculated analysis that would take place in industry. Although we ultimately settled on one solution for each problem (in most cases), there was very little risk in trying out and incorporating a new product into our processes. If it was unable to support our needs or hindered more than it helped, we had not invested enough into it to experience a significant loss in time and effort and could easily switch. Unfortunately, some tools had few if any alternatives. Regardless, had we applied a more stringent selection process beforehand to more clearly define and address our needs, perhaps we would have seen a more measurable benefit, such as additional development time or more constructive collaboration.

B. Tools to Support People

The needs, particularly of those maintaining Ptolemy going forward, drove the selection and adoption of a continuous integration strategy. This section will only briefly describe the set of tools that comprise what is collectively referred to as continuous integration; more importantly, it will discuss the rationale for selecting each tool and the manner in which we gauged their effectiveness in terms of the team's objectives and those of our stakeholders. As cited by Erdil et al., it is estimated that approximately 50% of total life cycle costs are attributed to maintenance [4]. As a product ages, it becomes more and more difficult to update it with new user requirements [4]. Although the number varies slightly between domains, it represents a very significant portion of the development budget, and any steps that can be taken to make this process more efficient, whether from a more complete design up front or better documentation throughout implementation, are likely good ones. This represents a huge cost to the Ptolemy Project, which has been under development since 1996 and has evolved into a code base of roughly 350 KLOC, much of it contributed by a number of different sources. To combat this growing problem, the team chose to implement the following tools:

Jenkins: Developed as an off-shoot of Hudson, Jenkins is an open-source continuous integration server and dashboard with over 20,000 installations and 400+ plugins to support building and testing of almost any Java-based project [5]. It has become one of the most popular continuous integration systems across the globe, largely because of its ability to provide rapid feedback, reduce the risk of defect injection through collective ownership, and be extended beyond a simple build platform.

FindBugs: This tool applies static analysis to Java-based applications to identify unintended interactions based on a set of known, problematic patterns, and it can easily be integrated into an Ant build script executed by Jenkins.

JUnit: Little explanation is needed for those familiar with Java development. Used in conjunction with Cobertura to measure statement and branch coverage, the team developed a suite of test cases to continually test the functionality of the system (a brief illustrative sketch follows this list).

Checkstyle: The coding standard of Ptolemy II is motivated primarily by the desire to enhance the readability and understanding of the code across its many contributors. Because of this, the team was expected to adhere strictly to the published standard.
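To make this concrete, below is a minimal sketch of a JUnit 4 test of the kind our suite contained; the SimulationRequest class, its methods, and the test values are hypothetical and are not taken from the actual HandSimDroid code base.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

/** Hypothetical value object representing a client's request to run a model;
 *  shown only for illustration. */
class SimulationRequest {
    private final String modelName;
    private final int iterations;

    SimulationRequest(String modelName, int iterations) {
        if (modelName == null || modelName.isEmpty()) {
            throw new IllegalArgumentException("modelName must be non-empty");
        }
        this.modelName = modelName;
        this.iterations = iterations;
    }

    String getModelName() {
        return modelName;
    }

    int getIterations() {
        return iterations;
    }
}

/** Illustrative JUnit 4 tests; tests like these were executed by the Ant build
 *  under Jenkins, with Cobertura measuring statement and branch coverage. */
public class SimulationRequestTest {

    @Test
    public void preservesModelNameAndIterations() {
        SimulationRequest request = new SimulationRequest("Sinewave.xml", 1000);
        assertEquals("Sinewave.xml", request.getModelName());
        assertEquals(1000, request.getIterations());
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsEmptyModelName() {
        new SimulationRequest("", 1000);
    }
}

Under continuous integration, a failing assertion in a test like this breaks the build and triggers the notification discussed under Advantages below.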

The focus of continuous integration was placed primarily on the principles of maintainability and quality. Ptolemy II is used in conjunction with several other open-source and commercial products, so the impact of making a change could be fairly dramatic if not done carefully. The team's extensions to Ptolemy, having gone through several rounds of experimentation and system design, would need to augment the very core of Ptolemy, namely the graphical actors, in order to accommodate the graphical packages of the Android OS and provide extension points for other platforms. In order to minimize the risk of introducing breaking changes, the team used a combination of JUnit and FindBugs to catch and correct problems, both static and runtime, before submitting them to the Berkeley repository.

1) Advantages

Continuous integration gave the team the ability to easily monitor the status of the project and locate the source of build and regression problems immediately. When the compilation process failed or the JUnit test cases did not produce the expected results, the team would be notified via email. Although it is not a foolproof method, it allowed us to minimize the amount of downtime when problems arose, with a large percentage being resolved in under 20 minutes (Fig. I). The team was able to tailor Checkstyle to meet the coding standards of the Ptolemy Project group and target our efforts for enforcing conformance to the standard. At the onset of the project, the team needed to establish a set of core components that would later be refined and to integrate all the work being done separately by each team member. As a result, conformance to the standard slipped. As can be seen in Figure II, the team was able to identify major violations of the standards and make corrections to the documentation. The figure shows the trends for the Android project and the Ptolemy server (and their accompanying branches) prior to the complete merger into the trunk. The key item to take away is the dramatic drop in violations at the end of June and July, where the team made a concentrated effort to improve the quality of the documentation. The spike following July represents the Android project being merged into the trunk and a subsequent double-counting of violations identified by Checkstyle. Ultimately, the team was able to satisfy all but 390 of the 1600-1800 items listed at the peak (approximately 600 of which previously existed in documentation to the core features of Ptolemy).

Fig. I Breakdown of repair time for correcting unstable or broken builds

Fig. II Checkstyle warning trend over the major development cycle
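For context on what the Fig. II trend is counting, the fragment below is a minimal, hypothetical example of the documentation style the Ptolemy II standard calls for; the class and method are not from the actual code base, and the Checkstyle check named in the comment is one plausible configuration rather than the team's exact rule set.

/** Hypothetical utility class, shown only to illustrate the documentation standard. */
public class PortMapper {
    /** Return the canonical form of a port name.
     *  @param portName The raw port name entered by the user.
     *  @return The trimmed, lower-case port name.
     */
    public String canonicalName(String portName) {
        // If the Javadoc block above were missing, a documentation check such as
        // Checkstyle's JavadocMethod module would report a violation of the kind
        // counted in the Fig. II trend.
        return portName.trim().toLowerCase();
    }
}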


Applying FindBugs to our continuous integration process identified a number of faulty patterns and potential race conditions that would have been difficult to reproduce in testing. As the team continued to extend the Ptolemy server and Android client, the complexity of their interactions grew. Static analysis tools are typically judged by recall, their ability to find real problems; precision, their ability to exclude false positives; and their performance [6]. The trend seen in Fig. III represents problems (race conditions, unforeseen compiler optimizations, etc.) that were later addressed. In terms of recall, the tool was fairly successful at catching conditions that our team had not. However, the data from the beginning of September onward represent false positives identified by the tool that cause no harm to the application. As far as quality and maintainability go, the team was certainly focused on providing a product that performed all the requested functionality with as few defects as possible.

2) Disadvantages

Static analysis tools such as FindBugs can only identify potential problems based on a set of known patterns and rely on the engineer to recognize the difference between truly problematic conditions and false positives. Once a false positive had been found, however, it was impossible to mark it as such and remove it from subsequent reports. Configuring the Jenkins server, installing the myriad of plugins, and writing the Ant script that executes the build for each job can be a time-consuming process. At the onset of the summer semester, our Development and Support Manager spent nearly 30 hours putting together our continuous integration environment, and later, following the compromise of our virtual server, I spent nearly 30 more reconstructing it. Implementing a continuous integration strategy was also more difficult because we were constructing entirely new systems rather than patching or modifying existing ones. This is the case for a number of reasons. First, to derive the maximum benefit from an automatic integration platform, there needs to be a suite of tests already in place; a continuous build is only as good as the tests it is provided. Second, we would often be constructing modules in parallel that required the work of one another, and integrating our work was difficult without breaking the compilation process. Last, each check-in is considered a deliverable regardless of its completeness. Our stakeholders immediately saw features that were incomplete and buggy rather than the polished and completed ones that we ultimately intended to deliver.
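To make the class of warning in Fig. III concrete, the sketch below shows a data race of the sort static analysis is good at catching and a human reviewer often misses. The class and its methods are hypothetical, not the team's actual code, and the comments describe FindBugs' general inconsistent-synchronization warning rather than a specific report from the project.

/** Hypothetical holder for the state of a running simulation; it only
 *  approximates the kind of shared server/client state the team managed. */
public class SimulationState {

    private boolean paused = false;

    /** Called from one thread (for example, a handler for a client command). */
    public synchronized void pause() {
        paused = true;
    }

    /** Read from another thread WITHOUT holding the lock. FindBugs flags this
     *  kind of inconsistent synchronization because the unsynchronized read is
     *  not guaranteed to observe the write made under the lock. */
    public boolean isPaused() {
        return paused;
    }

    /** One possible fix: synchronize the read as well (alternatives include a
     *  volatile field or java.util.concurrent.atomic.AtomicBoolean). */
    public synchronized boolean isPausedSafely() {
        return paused;
    }
}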

Fig. III FindBugs static analysis of various grades of bugs throughout development

C. Supplementing the Process

The team opted, based on the advice of previous MSE teams, to transition from the agile and adaptive Scrum methodology to the more rigidly structured Team Software Process (TSP) as our process framework. Our hope was to tailor the process so that it more closely aligned with our particular needs before the implementation phase and to familiarize ourselves with the tools available to support our efforts. As a natural complement to this selection, the SEI has developed an Excel-based tool that supports teams in planning and tracking their work. It also helps teams to analyze their performance, on a weekly basis, in terms of the project baseline and to monitor a wide variety of statistics that could be used by the team to improve their quality practices or adjust their plan. Our choice to use this tool was more out of necessity and falls into the category of first-fit, for better or worse. It is important, however, to draw a clear distinction between deficiencies in the process and difficulties with the limited set of tools available to support the process. TSP provides considerably more detail in guiding the team's activities and requires the collection of significant amounts of data. This was deliberately done, as Jim Over states, to help teams plan their work, negotiate their commitments with management, manage and track projects to a successful conclusion, and produce quality products in less time [16]. These are all inherently good things and target the improvement of software development practices. As for the tools that support the process, the team found both advantages and disadvantages.
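Before turning to those, the sketch below gives a rough sense of the kind of weekly statistic the workbook derives from task data, such as earned value and a plan-versus-actual hour ratio. The Java class, the task values, and the exact formulas are illustrative assumptions and do not reflect how the SEI's Excel-based tool is actually implemented.

import java.util.Arrays;
import java.util.List;

/** Minimal sketch of plan-versus-actual tracking; not the SEI tool's implementation. */
public class CycleStatus {

    /** A single task from the cycle plan: planned hours, actual hours, done flag. */
    static class Task {
        final double plannedHours;
        final double actualHours;
        final boolean completed;

        Task(double plannedHours, double actualHours, boolean completed) {
            this.plannedHours = plannedHours;
            this.actualHours = actualHours;
            this.completed = completed;
        }
    }

    /** Earned value: planned hours of completed tasks as a fraction of all planned hours. */
    static double earnedValue(List<Task> tasks) {
        double planned = 0.0;
        double earned = 0.0;
        for (Task t : tasks) {
            planned += t.plannedHours;
            if (t.completed) {
                earned += t.plannedHours;
            }
        }
        return planned == 0.0 ? 0.0 : earned / planned;
    }

    /** Ratio of planned hours for completed tasks to actual hours charged so far. */
    static double costPerformanceIndex(List<Task> tasks) {
        double plannedOfCompleted = 0.0;
        double actual = 0.0;
        for (Task t : tasks) {
            actual += t.actualHours;
            if (t.completed) {
                plannedOfCompleted += t.plannedHours;
            }
        }
        return actual == 0.0 ? 0.0 : plannedOfCompleted / actual;
    }

    public static void main(String[] args) {
        List<Task> cycle = Arrays.asList(
                new Task(6.0, 8.0, true),    // design review took longer than planned
                new Task(4.0, 3.5, true),
                new Task(10.0, 2.0, false)); // implementation task still in progress
        System.out.printf("Earned value: %.0f%%, CPI: %.2f%n",
                100 * earnedValue(cycle), costPerformanceIndex(cycle));
    }
}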

1) Advantages

The amount of data available after even one or two cycles with the tool could have fueled our planning and estimation efforts for the remainder of the project. The team, unfortunately, was inexperienced with TSP and often unsure of how best to make use of all the statistics. That is not necessarily a reflection on the tool but rather on the team. The tool provided as much granularity in planning as the team or planning manager desired and allowed the team to continually track those task assignments. Our team began with very vague task descriptions and created increasingly specific plans as time went on, an ongoing effort to reduce ambiguity among team members about what was expected of each throughout the course of the cycle. TSP gives teams the opportunity to tailor the process to their needs and organization. To that end, the tool acts as a single source for process improvement proposals, project risk statements, and tracking data, which reduces confusion as to which artifacts and data are stored where.

2) Disadvantages

Using the tool, particularly for constructing the cycle plan and distributing workbooks, is a slow and frustrating process. During each of our cycles, we allocated between two and three hours per week to the Planning Manager (approximately 25% of his available time) solely to consolidate and update the workbook data so that the team could get an accurate picture of our progress.

Workbook consolidation is typically done on a weekly basis, but for real transparency during the summer semester, when we were committing substantially more hours, the team required more frequent updates. The two to three hours of consolidation from the previous semester became almost a constant activity. According to the MSELI, these activities are the cost of doing business and qualify as overhead [8], although at times they became almost excessive. The activities were difficult to foresee, so the team opted to plan in month-long cycles during the spring and six-week cycles during the summer. This made getting a project-long view of our progress a tedious and time-consuming task, sometimes requiring in excess of four hours of compilation. The tool is also limited in its capabilities to track task dependencies, and for highly interdependent projects such as ours, dependencies became rather difficult to manage within the tool. Though the tool does allot a place to store them, nothing is done to enforce that they are in fact observed by the team.

III. CONCLUSION

Throughout the course of the studio project, our team was focused on two key objectives: learning and applying new techniques, and providing a high-quality product that our stakeholders could use and that would continue on long after our involvement had concluded. Our team was fortunate in that we were able to experiment with and learn from a number of tools and techniques so that, following graduation, we will have the experience and wherewithal to make better choices going forward. The tools only assisted us in meeting those objectives.

Software development has become so much more than coding and, as the practice matures, management is starting to take notice. Disciplined teams require the appropriate resources to improve the quality and predictability of their work, and sometimes CASE tools, particularly those in the areas of process, project, and configuration management, are the answer. These tools, if used properly and in conjunction with good engineering practices, can be a great asset to an organization and its software projects. They cannot, however, substitute for the solid judgment and analysis that is part of both the design phase and the development cycle, nor can they guarantee quality on their own. A team considering the implementation of CASE tools must be prepared to perform an objective and thorough analysis of its selections, whether through PECA or some other means. Without that objectivity and careful reasoning, a team is doomed to continually detract from its own work and to waste countless hours and resources investing in tools that will not necessarily add value.

REFERENCES
[1] Frederick P. Brooks, "No Silver Bullet: Essence and Accidents of Software Engineering," 1986.
[2] Scott Hissam, Charles B. Weinstock, Daniel Plakosh, and Jayatirtha Asundi, "Perspectives on Open Source Software," Pittsburgh, PA, 2001.
[3] Santiago Comella-Dorda et al., "A Process for COTS Software Product Evaluation," Pittsburgh, PA, 2004.
[4] Kagan Erdil et al., "Software Maintenance as Part of the Software Life Cycle," Medford, MA, 2003.
[5] Welcome to Jenkins CI. [Online]. http://jenkins-ci.org/
[6] Paul Anderson, "The Use and Limitations of Static-Analysis Tools to Improve Software Quality," CrossTalk: The Journal of Defense Software Engineering, pp. 18-21, 2008.
[7] Watts S. Humphrey, Introduction to the Team Software Process. Addison-Wesley Professional, 1999.
[8] MSELI, "MSE Overhead Standard," April 2009.
[9] Diane Jamieson, Kevin Vinsen, and Guy Callender, "Measuring Software Costs: A New Perspective on a Recurring Problem."
[10] Robert E. Park, "A Manager's Checklist for Validating Software Cost and Schedule Estimates," Pittsburgh, PA, 1995.
[11] CMMI Product Team, "CMMI for Development, Version 1.2," Pittsburgh, PA, 2006.
[12] FindBugs: Find Bugs in Java Programs. [Online]. http://findbugs.sourceforge.net/
[13] Welcome to JUnit.org! [Online]. http://www.junit.org/
[14] Checkstyle 5.4. [Online]. http://checkstyle.sourceforge.net/
[15] Cobertura. [Online]. http://cobertura.sourceforge.net/
[16] Managing Software Quality with the Team Software Process. [Online]. http://c-spin.net/2010/cspin201004Managing%20Software%20Quality%20with%20the%20Team%20Software%20Process.pdf

