Automatic Parallelization

An Overview of Fundamental Compiler Techniques

  • Book
  • © 2012

Overview

Part of the book series: Synthesis Lectures on Computer Architecture (SLCA)

Table of contents (8 chapters)

  • Introduction and overview
  • Dependence analysis, dependence graphs and alias analysis
  • Program parallelization
  • Transformations to modify and eliminate dependences
  • Transformation of iterative and recursive constructs
  • Compiling for distributed memory machines
  • Solving Diophantine equations
  • A guide to further reading

About this book

Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and discuss transformations that expose parallelism targeting shared memory multicore and vector processors. We then discuss some problems that arise when parallelizing programs for execution on distributed memory machines. Finally, we conclude with an overview of solving Diophantine equations and suggestions for further reading on the topics of this book, so that the interested reader can delve deeper into the field.
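
The dependence analyses summarized above come down to deciding whether two array references in different loop iterations can touch the same element, and solving linear Diophantine equations is the machinery behind that decision. As a concrete illustration, the classic GCD dependence test checks whether the equation relating two affine subscripts can have an integer solution. The following is a minimal C sketch of that test, not code from the book; the names ext_gcd and gcd_test_may_depend are illustrative, and the references are assumed to be A[a*i + b] written and A[c*i + d] read within a single loop.

    #include <stdio.h>
    #include <stdlib.h>

    /* Extended Euclid: returns g = gcd(a, b) and x, y with a*x + b*y = g. */
    static long ext_gcd(long a, long b, long *x, long *y)
    {
        if (b == 0) { *x = 1; *y = 0; return a; }
        long x1, y1;
        long g = ext_gcd(b, a % b, &x1, &y1);
        *x = y1;
        *y = x1 - (a / b) * y1;
        return g;
    }

    /* GCD test for a loop containing  A[a*i1 + b] = ...  and  ... = A[c*i2 + d].
     * The references can conflict only if a*i1 + b = c*i2 + d has an integer
     * solution, i.e. the Diophantine equation a*i1 - c*i2 = d - b is solvable,
     * which holds exactly when gcd(a, c) divides d - b.  Loop bounds are
     * ignored here, so a result of 1 only means a dependence cannot be ruled out. */
    static int gcd_test_may_depend(long a, long b, long c, long d)
    {
        long x, y;
        long g = ext_gcd(labs(a), labs(c), &x, &y);
        return g == 0 ? (d == b) : (d - b) % g == 0;
    }

    int main(void)
    {
        /* A[2*i] vs A[2*i + 1]: gcd(2, 2) = 2 does not divide 1 -> independent. */
        printf("A[2i] vs A[2i+1]: %s\n",
               gcd_test_may_depend(2, 0, 2, 1) ? "may depend" : "independent");
        /* A[2*i] vs A[2*i + 4]: gcd(2, 2) = 2 divides 4 -> dependence possible. */
        printf("A[2i] vs A[2i+4]: %s\n",
               gcd_test_may_depend(2, 0, 2, 4) ? "may depend" : "independent");
        return 0;
    }

In practice a parallelizing compiler combines such subscript tests with loop-bound information and with the use-def and pointer analyses mentioned above before concluding that a loop carries no dependence and is safe to parallelize.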

Authors and Affiliations

  • Samuel P. Midkiff, Purdue University, USA

About the author

Samuel Midkiff is a Professor of Electrical and Computer Engineering at Purdue University, where he has been since 2001. He received his PhD degree from the University of Illinois at Urbana-Champaign in 1992, where he was a member of the Cedar project. In 1991 he became a Research Staff Member at the IBM T.J. Watson Research Center, where he was a key member of the xlhpf compiler team and the Numerically INtensive Java (Ninja) project. His research has focused on parallelism and high performance computing, and in particular on compiler and language support for the development of correct and efficient programs. To this end, his research has covered dependence analysis and automatic synchronization of explicitly parallel programs, compilation under different memory models, automatic parallelization, high performance computing in Java and other high-level languages, and tools to help in the detection and localization of program errors.
