
Building a Family of Compilers

Chae, W.; Blume, M. Toyota Technol. Inst. at Chicago, Chicago, IL

This paper appears in: Software Product Line Conference, 2008. SPLC '08. 12th International Issue Date : 8-12 Sept. 2008 On page(s): 307 Location: Limerick Print ISBN: 978-0-7695-3303-2 INSPEC Accession Number: 10234710 Digital Object Identifier : 10.1109/SPLC.2008.28 Date of Current Version : 19 September 2008

ABSTRACT We have developed and maintained a set of closely related compilers. Although much of their code is duplicated and shared, they have been maintained separately because they are treated as different compilers. Even if they were merged together, the combined code would become too complicated to serve as the base for another extension. We describe our experience addressing this problem by adopting the product line engineering paradigm to build a family of compilers. This paradigm encourages developers to focus on developing a set of compilers rather than one particular compiler. We show the engineering activities for a family of compilers, from product line analysis through product line architecture design to product line component design. Then we present how to build particular compilers from the core assets resulting from these activities and how to take advantage of modern programming language technology to organize this task. Our experience demonstrates that product line engineering as a development paradigm can ease the construction of a family of compilers.
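
The abstract stops short of concrete mechanics, so the following is a minimal sketch, in Python rather than the authors' implementation language, of what building a family of compilers from shared core assets can look like: the passes are the core assets, and each family member is a composition of them. All names (parse, emit_c, etc.) are invented for illustration, not the paper's actual components.

```python
# A minimal sketch, assuming invented pass names: compiler family members are
# assembled by composing shared "core asset" passes instead of forking code.
from typing import Callable, List

Pass = Callable[[str], str]          # each pass rewrites a textual IR (toy)

def parse(src: str) -> str:     return f"ast({src})"
def typecheck(ir: str) -> str:  return f"typed({ir})"
def optimize(ir: str) -> str:   return f"opt({ir})"
def emit_c(ir: str) -> str:     return f"c({ir})"
def emit_jvm(ir: str) -> str:   return f"jvm({ir})"

def build_compiler(passes: List[Pass]) -> Callable[[str], str]:
    """Compose shared passes into one concrete member of the compiler family."""
    def compile_(src: str) -> str:
        for p in passes:
            src = p(src)
        return src
    return compile_

# Two family members share the front end and differ only in the back end.
to_c   = build_compiler([parse, typecheck, optimize, emit_c])
to_jvm = build_compiler([parse, typecheck, emit_jvm])
print(to_c("x + 1"))        # c(opt(typed(ast(x + 1))))
```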

Compiler optimization-space exploration

Triantafyllis, S.; Vachharajani, M.; Vachharajani, N.; August, D.I. Departments of Comput. Sci. & Electr. Eng., Princeton Univ., NJ, USA

This paper appears in: Code Generation and Optimization, 2003. CGO 2003.

International Symposium on Issue Date : 23-26 March 2003 On page(s): 204 Print ISBN: 0-7695-1913-X INSPEC Accession Number: 7797866 Digital Object Identifier : 10.1109/CGO.2003.1191546 Date of Current Version : 02 April 2003

ABSTRACT To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interaction and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it is at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers which explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. We present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly-tuned, aggressively optimizing production compiler for IA64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.
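
To make the OSE organization concrete, here is a hedged Python sketch of the idea as described in the abstract: a small, heuristic-pruned set of candidate configurations is tried on each hot code segment, and the winner is chosen a posteriori by a compile-time estimator. The configurations, the transformation, and the estimator below are invented placeholders, not Intel's compiler internals.

```python
# A hedged sketch of Optimization-Space Exploration (OSE); configurations,
# transformation, and estimator below are invented placeholders.
from typing import Dict, List

# A small set of promising configurations, pruned using compiler-writer knowledge.
CANDIDATE_CONFIGS: List[Dict[str, bool]] = [
    {"unroll": True,  "if_convert": True},
    {"unroll": True,  "if_convert": False},
    {"unroll": False, "if_convert": True},
]

def apply_optimizations(segment: str, config: Dict[str, bool]) -> str:
    """Placeholder transformation: tag the segment with the enabled options."""
    return segment + "".join(f"+{name}" for name, on in config.items() if on)

def estimate_cycles(optimized: str) -> float:
    """Placeholder compile-time performance estimator (no execution involved)."""
    return 100.0 - 3.0 * optimized.count("+")

def ose_compile(hot_segments: List[str]) -> Dict[str, str]:
    """For each hot segment, try every candidate and keep the estimator's pick."""
    return {
        seg: min((apply_optimizations(seg, c) for c in CANDIDATE_CONFIGS),
                 key=estimate_cycles)
        for seg in hot_segments
    }

print(ose_compile(["loop_a", "loop_b"]))
```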

DML-a meta-language and system for the generation of practical and efficient compilers from denotational specifications

Pettersson, M.; Fritzson, P. Dept. of Comput. & Inf. Sci., Linköping Univ.

This paper appears in: Computer Languages, 1992., Proceedings of the 1992 International Conference on Issue Date : 20-23 Apr 1992 On page(s): 127 Location: Oakland, CA Meeting Date : 20 Apr 1992-23 Apr 1992 Print ISBN: 0-8186-2585-6 References Cited: 29 INSPEC Accession Number: 4297267 Digital Object Identifier : 10.1109/ICCL.1992.185475 Date of Current Version : 06 August 2002

ABSTRACT DML (Denotational Meta Language) is a specification language and a compiler generation tool for producing practical and efficient compilers from denotational semantics specifications. This means that code emitted by generated compilers should be of product quality, and that generated compilers should have reasonable compilation speed and interface well with standard front-ends and back-ends. To achieve this goal, the DML system contains: a general algorithm for producing efficient quadruple code from continuation semantics of Algol-like languages, and enhancements in the DML specification language with BNF rules for abstract syntax declarations and semantic brackets with inline concrete syntax and pattern matching for readable and concise semantic equations. Generated quadruple code is fed into a standard optimizing back-end to obtain high-quality target code. The DML system generates efficient compilers in C and contains a foreign language interface for communication, e.g. with parsers or optimizing back-ends. DML is a superset of Standard ML and uses applicative order semantics, i.e. call by value, for reasons of efficiency.
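
The toy sketch below (Python, not DML or Standard ML) illustrates the flavor of generating quadruple code from semantic equations: the meaning of an expression is the temporary that holds its value, together with the quadruples that compute it. The AST and quadruple encodings are invented for illustration only.

```python
# A toy "semantic equations to quadruples" translator; the AST and quad layout
# are invented and bear no relation to DML's actual meta-language.
import itertools
from typing import List, Tuple, Union

Expr = Union[int, Tuple[str, "Expr", "Expr"]]   # e.g. ("+", 1, ("*", 2, 3))
Quad = Tuple[str, str, str, str]                # (op, arg1, arg2, result)

_temps = itertools.count(1)

def compile_expr(e: Expr, quads: List[Quad]) -> str:
    """The 'meaning' of an expression: the name holding its value, plus the
    quadruples (appended to quads) that compute it."""
    if isinstance(e, int):
        return str(e)
    op, lhs, rhs = e
    a = compile_expr(lhs, quads)
    b = compile_expr(rhs, quads)
    t = f"t{next(_temps)}"
    quads.append((op, a, b, t))
    return t

quads: List[Quad] = []
print(compile_expr(("+", 1, ("*", 2, 3)), quads), quads)
# t2 [('*', '2', '3', 't1'), ('+', '1', 't1', 't2')]
```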

The Next Generation of Compilers

This paper appears in: Code Generation and Optimization, 2009. CGO 2009. International Symposium on Issue Date : 22-25 March 2009 On page(s): xxiii Location: Seattle, WA, USA Print ISBN: 978-0-7695-3576-0 Digital Object Identifier : 10.1109/CGO.2009.37 Date of Current Version : 05 May 2009

ABSTRACT Over the past decade, production compilers for general-purpose processors have adopted a number of major technologies emerging from compiler research, including SSA-based optimization, pointer analysis, profile-guided optimization, link-time cross-module optimization, automatic vectorization, and just-in-time compilation with adaptive optimization for dynamic languages. These features are here to stay for the foreseeable future. So what major new features could emerge from compiler research over the next decade? First, just-in-time and dynamic optimization will be extended to static languages, such as C, C++, and Fortran. This has already happened for graphics applications, as in the MacOS X OpenGL library and the AMD ATI compiler, and is now being adopted for general-purpose multicore platforms such as the RapidMind Multicore Development Platform. Second, and perhaps most predictably, compilers will play a major role in tackling the multicore programming challenge. This does not mean that automatic parallelization will come back from the dead. Rather, compiler support for parallel programming will take two forms: optimization and code generation for explicitly parallel programs; and interactive, potentially optimistic, parallelization technology to support semi-automatic porting of existing code to explicitly parallel programming models. Third, compilers will increasingly be responsible for enhancing or enforcing safety and reliability properties for programs. The last few years have seen new language and compiler techniques (e.g. in the Cyclone, CCured, and SAFECode projects) that guarantee complete memory safety and sound operational semantics even for C and C++ programs. There is no longer any excuse for production C/C++ compilers not to provide these capabilities, at least as an option for security-sensitive software, including all privileged software. Furthermore, these capabilities can be deployed via a typed virtual machine that enables more powerful security and reliability techniques than with native machine code. Fourth, compilers will increasingly incorporate more sophisticated auto-tuning strategies for exploring optimization sequences, or even arbitrary code sequences for key kernels. This is one of the major sources of unexploited performance improvements with existing compiler technology. Finally, compilers will adopt speculative optimizations in order to compensate for the constraints imposed by conservative static analysis. Recent architecture research has led to novel hardware mechanisms that can make such speculation efficient, and the ball is in the compiler community's court to invent new ways to exploit this hardware support for more powerful, traditional and non-traditional, optimizations.
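
Of the directions listed, auto-tuning lends itself to a tiny illustration. The sketch below assumes a hypothetical driver that rebuilds and measures a kernel under different flag sequences and keeps the fastest; the flag names are GCC-style examples and the measurement is simulated, not any real compiler's interface.

```python
# A hedged sketch of auto-tuning over optimization sequences; build_and_run is a
# simulated stand-in for compiling and timing a kernel, not a real driver.
import itertools
import random
from typing import List, Tuple

FLAG_CHOICES = [["-O2", "-O3"], ["", "-funroll-loops"], ["", "-ffast-math"]]

def build_and_run(flags: List[str]) -> float:
    """Stand-in for 'compile the kernel with these flags and time it' (seconds)."""
    return 1.0 - 0.1 * ("-O3" in flags) - 0.02 * ("-funroll-loops" in flags)

def autotune(budget: int = 6) -> Tuple[float, List[str]]:
    """Randomly sample the flag space within a budget and keep the fastest variant."""
    space = [list(combo) for combo in itertools.product(*FLAG_CHOICES)]
    tried = random.sample(space, min(budget, len(space)))
    return min((build_and_run(flags), flags) for flags in tried)

print(autotune())
```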

Performance characterization of optimizing compilers

Saavedra, R.H.; Smith, A.J. Dept. of Comput. Sci., Univ. of Southern California, Los Angeles, CA

This paper appears in: Software Engineering, IEEE Transactions on Issue Date : Jul 1995 Volume : 21 , Issue:7 On page(s): 615 ISSN : 0098-5589 References Cited: 27 INSPEC Accession Number: 5024394 Digital Object Identifier : 10.1109/32.392982 Date of Current Version : 06 August 2002 Sponsored by : IEEE Computer Society

ABSTRACT Optimizing compilers have become an essential component in achieving high levels of performance. Various simple and sophisticated optimizations are implemented at different stages of compilation to yield significant improvements, but little work has been done in characterizing the effectiveness of optimizers, or in understanding where most of this improvement comes from. We study the performance impact of optimization in the context of our methodology for CPU performance characterization based on the abstract machine model. The model considers all machines to be different implementations of the same high-level language abstract machine; in previous research, the model has been used as a basis to analyze machine and benchmark performance. We show that our model can be extended to characterize the performance improvement provided by optimizers and to predict the run time of optimized programs, and we measure the effectiveness of several compilers in implementing different optimization techniques.
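
A minimal worked example of the abstract machine model described above: characterize a program by counts of abstract operations, a machine by per-operation costs, and predict run time as their dot product; an optimizer's effect then appears as changed operation counts. All operation names and numbers below are invented for illustration.

```python
# A worked sketch of the abstract-machine prediction: run time = sum over
# abstract operations of (count * per-op cost). Values are invented.
ABSTRACT_OPS = ["load", "store", "flop", "branch"]

machine_cost = {"load": 4.0, "store": 4.0, "flop": 2.0, "branch": 1.0}   # ns/op

unoptimized_counts = {"load": 9e6, "store": 4e6, "flop": 6e6, "branch": 2e6}
optimized_counts   = {"load": 5e6, "store": 2e6, "flop": 6e6, "branch": 1e6}

def predicted_time_ns(counts, costs):
    """Dot product of operation counts and per-operation machine costs."""
    return sum(counts[op] * costs[op] for op in ABSTRACT_OPS)

t_base = predicted_time_ns(unoptimized_counts, machine_cost)
t_opt  = predicted_time_ns(optimized_counts, machine_cost)
print(f"predicted speedup from optimization: {t_base / t_opt:.2f}x")
```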

Intelligent compilers

Cavazos, J. Comput. & Inf. Sci. Dept., Univ. of Delaware, Newark, DE

This paper appears in: Cluster Computing, 2008 IEEE International Conference on Issue Date : Sept. 29 2008-Oct. 1 2008 On page(s): 360 Location: Tsukuba ISSN : 1552-5244 Print ISBN: 978-1-4244-2639-3 INSPEC Accession Number: 10392079 Digital Object Identifier : 10.1109/CLUSTR.2008.4663796 Date of Current Version : 31 October 2008

ABSTRACT The industry is now in agreement that the future of architecture design lies in multiple cores. As a consequence, all computer systems today, from embedded devices to petascale computing systems, are being developed using multicore processors. Although researchers in industry and academia are exploring many different multicore hardware design choices, most agree that developing portable software that achieves high performance on multicore processors is a major unsolved problem. We now see a plethora of architectural features, with little consensus on how the computation, memory, and communication structures in multicore systems will be organized. The wide disparity in hardware systems available has made it nearly impossible to write code that is portable in functionality while still taking advantage of the performance potential of each system. In this paper, we propose exploring the viability of developing intelligent compilers, focusing on key components that will allow application portability while still achieving high performance.
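
As a hedged illustration of what an "intelligent compiler" decision could look like (the paper proposes the direction rather than this specific technique), the sketch below picks an optimization setting for a new program by nearest-neighbor lookup over feature vectors of previously tuned programs. The features, settings, and training data are all invented.

```python
# A hedged, invented illustration: choose an optimization setting for a new
# program via 1-nearest-neighbor over feature vectors of earlier tuned programs.
from math import dist
from typing import List, Tuple

# (feature vector, best-known setting) pairs, assumed to come from offline tuning.
TRAINING: List[Tuple[List[float], str]] = [
    ([0.8, 0.1, 0.3], "vectorize+unroll"),   # loop-dense, little branching
    ([0.2, 0.7, 0.5], "if-convert"),         # branch-heavy
    ([0.5, 0.2, 0.9], "tile+prefetch"),      # memory-bound
]

def choose_setting(features: List[float]) -> str:
    """Return the setting of the nearest previously seen program (1-NN)."""
    _, setting = min((dist(features, f), s) for f, s in TRAINING)
    return setting

print(choose_setting([0.7, 0.15, 0.4]))   # expected: "vectorize+unroll"
```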
