
Patterns for Parallel Programming

by Timothy G. Mattson; Beverly A. Sanders; Berna L. Massingill
  • ISBN13: 9780321228116
  • ISBN10: 0321228111
  • Edition: 1st
  • Format: Hardcover
  • Copyright: 2004-09-15
  • Publisher: Addison-Wesley Professional
List Price: $64.99 (save up to $5.38)
  • Digital: $59.61

Summary

The Parallel Programming Guide for Every Software Developer

From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

That's where Patterns for Parallel Programming comes in. It's the first parallel programming guide written specifically to serve working software developers, not just computer scientists. The authors introduce a complete, highly accessible pattern language that will help any experienced developer "think parallel" and start writing effective parallel code almost immediately. Instead of formal theory, they deliver proven solutions to the challenges faced by parallel programmers, and pragmatic guidance for using today's parallel APIs in the real world. Coverage includes:

  • Understanding the parallel computing landscape and the challenges faced by parallel developers
  • Finding the concurrency in a software design problem and decomposing it into concurrent tasks
  • Managing the use of data across tasks
  • Creating an algorithm structure that effectively exploits the concurrency you've identified
  • Connecting your algorithmic structures to the APIs needed to implement them
  • Specific software constructs for implementing parallel programs
  • Working with today's leading parallel programming environments: OpenMP, MPI, and Java

Patterns have helped thousands of programmers master object-oriented development and other complex programming technologies. With this book, you will learn that they're the best way to master parallel programming too.
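To give a concrete flavor of the style of code the pattern language leads toward, here is a minimal sketch, not taken from the book, of the loop-parallel approach the summary alludes to: a numerical integration in which every iteration is an independent task, and OpenMP's worksharing directive plus a reduction clause manage the work distribution and the one piece of shared data. The file name and compile command are illustrative assumptions.

```c
/* Illustrative sketch only (not from the book): approximating pi by
 * numerical integration, parallelized in a loop-parallel style with OpenMP.
 * Compile with, for example:  gcc -fopenmp pi.c -o pi
 */
#include <stdio.h>

int main(void)
{
    const long num_steps = 100000000;            /* number of rectangles    */
    const double step = 1.0 / (double)num_steps; /* width of each rectangle */
    double sum = 0.0;

    /* Each iteration is an independent task; the reduction clause manages
     * the single shared value (sum) that all tasks update. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < num_steps; i++) {
        double x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi ~= %.12f\n", step * sum);
    return 0;
}
```

The same decomposition could equally be expressed with MPI or Java threads; the point of a pattern language is that the design reasoning (independent iterations, one shared accumulator) stays the same across those environments.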

Author Biography

Berna L. Massingill is assistant professor in the Department of Computer Science at Trinity University, San Antonio, Texas.

Table of Contents

Preface x
A Pattern Language for Parallel Programming 1(6)
Introduction 1(2)
Parallel Programming 3(1)
Design Patterns and Pattern Languages 4(1)
A Pattern Language for Parallel Programming 5(2)
Background and Jargon of Parallel Computing 7(17)
Concurrency in Parallel Programs Versus Operating Systems 7(1)
Parallel Architectures: A Brief Introduction 8(4)
Flynn's Taxonomy 8(1)
A Further Breakdown of MIMD 9(3)
Summary 12(1)
Parallel Programming Environments 12(4)
The Jargon of Parallel Computing 16(2)
A Quantitative Look at Parallel Computation 18(3)
Communication 21(2)
Latency and Bandwidth 21(1)
Overlapping Communication and Computation and Latency Hiding 22(1)
Summary 23(1)
The Finding Concurrency Design Space 24(33)
About the Design Space 24(5)
Overview 25(1)
Using the Decomposition Patterns 26(1)
Background for Examples 26(3)
The Task Decomposition Pattern 29(5)
The Data Decomposition Pattern 34(5)
The Group Tasks Pattern 39(3)
The Order Tasks Pattern 42(2)
The Data Sharing Pattern 44(5)
The Design Evaluation Pattern 49(6)
Summary 55(2)
The Algorithm Structure Design Space 57(64)
Introduction 57(2)
Choosing an Algorithm Structure Pattern 59(3)
Target Platform 59(1)
Major Organizing Principle 60(1)
The Algorithm Structure Decision Tree 60(2)
Re-evaluation 62(1)
Examples 62(2)
Medical Imaging 62(1)
Molecular Dynamics 63(1)
The Task Parallelism Pattern 64(9)
The Divide and Conquer Pattern 73(6)
The Geometric Decomposition Pattern 79(18)
The Recursive Data Pattern 97(6)
The Pipeline Pattern 103(11)
The Event-Based Coordination Pattern 114(7)
The Supporting Structures Design Space 121(95)
Introduction 121(2)
Program Structuring Patterns 122(1)
Patterns Representing Data Structures 123(1)
Forces 123(2)
Choosing the Patterns 125(1)
The SPMD Pattern 126(17)
The Master/Worker Pattern 143(9)
The Loop Parallelism Pattern 152(15)
The Fork/Join Pattern 167(6)
The Shared Data Pattern 173(10)
The Shared Queue Pattern 183(15)
The Distributed Array Pattern 198(13)
Other Supporting Structures 211(5)
SIMD 211(1)
MPMD 212(2)
Client-Server Computing 214(1)
Concurrent Programming with Declarative Languages 214(1)
Problem-Solving Environments 215(1)
The Implementation Mechanisms Design Space 216(37)
Overview 217(1)
UE Management 217(4)
Thread Creation/Destruction 218(2)
Process Creation/Destruction 220(1)
Synchronization 221(16)
Memory Synchronization and Fences 221(5)
Barriers 226(3)
Mutual Exclusion 229(8)
Communication 237(16)
Message Passing 238(7)
Collective Communication 245(6)
Other Communication Constructs 251(2)
Appendix A A Brief Introduction to OpenMP 253(20)
Core Concepts 254(3)
Structured Blocks and Directive Formats 257(2)
Worksharing 259(3)
Data Environment Clauses 262(3)
The OpenMP Runtime Library 265(1)
Synchronization 266(4)
The Schedule Clause 270(2)
The Rest of the Language 272(1)
Appendix B A Brief Introduction to MPI 273(18)
Concepts 273(2)
Getting Started 275(2)
Basic Point-to-Point Message Passing 277(2)
Collective Operations 279(4)
Advanced Point-to-Point Message Passing 283(5)
MPI and Fortran 288(2)
Conclusion 290(1)
Appendix C A Brief Introduction to Concurrent Programming in Java 291(16)
Creating Threads 293(4)
Atomicity, Memory Synchronization, and the volatile Keyword 297(1)
Synchronized Blocks 297(2)
Wait and Notify 299(2)
Locks 301(2)
Other Synchronization Mechanisms and Shared Data Structures 303(1)
Interrupts 304(3)
Glossary 307(10)
Bibliography 317(16)
About the Authors 333(2)
Index 335

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts

"If you build it, they will come." And so we built them. Multiprocessor workstations, massively parallel supercomputers, a cluster in every department ... and they haven't come. Programmers haven't come to program these wonderful machines. Oh, a few programmers in love with the challenge have shown that most types of problems can be force-fit onto parallel computers, but general programmers, especially professional programmers who "have lives", ignore parallel computers. And they do so at their own peril. Parallel computers are going mainstream. Multithreaded microprocessors, multicore CPUs, multiprocessor PCs, clusters, parallel game consoles ... parallel computers are taking over the world of computing. The computer industry is ready to flood the market with hardware that will only run at full speed with parallel programs. But who will write these programs? This is an old problem. Even in the early 1980s, when the "killer micros" started their assault on traditional vector supercomputers, we worried endlessly about how to attract normal programmers. We tried everything we could think of: high-level hardware abstractions, implicitly parallel programming languages, parallel language extensions, and portable message-passing libraries. But after many years of hard work, the fact of the matter is that "they" didn't come. The overwhelming majority of programmers will not invest the effort to write parallel software. A common view is that you can't teach old programmers new tricks, so the problem will not be solved until the old programmers fade away and a new generation takes over. But we don't buy into that defeatist attitude. Programmers have shown a remarkable ability to adopt new software technologies over the years. Look at how many old Fortran programmers are now writing elegant Java programs with sophisticated object-oriented designs. The problem isn't withold programmers. The problem is withold parallel computing expertsand the way they've tried to create a pool of capable parallel programmers. And that's where this book comes in. We want to capture the essence of how expert parallel programmers think about parallel algorithms and communicate that essential understanding in a way professional programmers can readily master. The technology we've adopted to accomplish this task is apattern language. We made this choice not because we started the project as devotees of design patterns looking for a new field to conquer, but because patterns have been shown to work in ways that would be applicable in parallel programming. For example, patterns have been very effective in the field of object-oriented design. They have provided a common language experts can use to talk about the elements of design and have been extremely effective at helping programmers master object-oriented design. This book contains our pattern language for parallel programming. The book opens with a couple of chapters to introduce the key concepts in parallel computing. These chapters focus on the parallel computing concepts and jargon used in the pattern language as opposed to being an exhaustive introduction to the field. The pattern language itself is presented in four parts corresponding to thefour phases of creating a parallel program: Finding Concurrency. The programmer works in the problem domain to identify the available concurrency and expose it for use in the algorithm design. Algorithm Structure. The programmer works with high-level structures for organizing a parallel algorithm. Supporting Structures. 
We shift from algorithms to source code and consider how the parallel program will be organized and the techniques used to manage shared data. Implementation Mechanisms. The final step is to look at specific software constructs for implementing a parallel program. The patterns making up these four design s
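The four phases described above can be traced through a small, hypothetical SPMD-style example, again not an excerpt from the book: the concurrency is found in independent loop iterations, the algorithm structure is a strided division of the index space, the SPMD supporting structure has every process run the same program and use its rank to select its share of the work, and MPI's collective reduction is the implementation mechanism that combines the partial results. The program name and launch command are illustrative assumptions.

```c
/* Hypothetical SPMD sketch (not from the book): each process computes a
 * partial sum over a strided slice of the iterations, then a collective
 * reduction combines the results on rank 0.
 * Build and run with, for example:
 *   mpicc spmd_pi.c -o spmd_pi && mpirun -np 4 ./spmd_pi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    const long n = 100000000;
    const double step = 1.0 / (double)n;
    double local = 0.0;

    /* Finding Concurrency / Algorithm Structure: iterations are independent,
     * so each process takes every size-th iteration starting at its rank. */
    for (long i = rank; i < n; i += size) {
        double x = (i + 0.5) * step;
        local += 4.0 / (1.0 + x * x);
    }

    /* Implementation Mechanisms: a collective operation gathers the result. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f\n", step * total);

    MPI_Finalize();
    return 0;
}
```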
