Amazon no longer offers textbook rentals. We do!
We're the #1 textbook rental company. Let us show you why.


Introduction to Concurrency in Programming Languages

  • ISBN13:

    9781420072136

  • ISBN10:

    1420072137

  • Format: Hardcover
  • Copyright: 2009-09-28
  • Publisher: Chapman & Hall/CRC

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, PO's, Marketplace items, eBooks and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now
List Price: $100.00 Save up to $50.52
  • Rent Book $63.00
    Add to Cart
    Usually ships in 3-5 business days.
    *This item is part of an exclusive publisher rental program and requires an additional convenience fee. This fee will be reflected in the shopping cart.

Summary

High-performance parallel architectures have been available for many years, but now, for the first time, parallel computers can be found on the desktop and are available to everyday users. Designed specifically for application developers, Introduction to Concurrency in Programming Languages provides a survey of high-level programming constructs that make parallel programming more accessible. The book covers the history of parallel languages and relates the hardware developments that motivated these languages to modern systems. It also describes the current state of the art in parallel languages, with a focus on the PGAS and HPCS language families. Each chapter is accompanied by a set of exercises.

Table of Contents

Introduction p. 1
Motivation p. 3
Navigating the concurrency sea p. 3
Where does concurrency appear? p. 6
Why is concurrency considered hard? p. 9
Real-world concurrency p. 9
Timeline p. 11
Approach p. 12
Intended audience p. 13
Acknowledgments p. 14
Exercises p. 14
Concepts in Concurrency p. 17
Terminology p. 19
Units of execution p. 19
Parallelism versus concurrency p. 23
Dependencies and parallelism p. 25
Shared versus distributed memory p. 28
Concepts p. 29
Atomicity p. 30
Mutual exclusion and critical sections p. 34
Coherence and consistency p. 36
Thread safety p. 38
Exercises p. 40
Concurrency Control p. 43
Correctness p. 44
Race conditions p. 44
Deadlock p. 46
Liveness, starvation and fairness p. 49
Nondeterminism p. 51
Techniques p. 52
Synchronization p. 52
Locks p. 54
Semaphores p. 56
Monitors p. 57
Transactions p. 60
Exercises p. 62
The State of the Art p. 65
Limitations of libraries p. 66
Explicit techniques p. 69
Message passing p. 69
Explicitly controlled threads p. 75
Higher-level techniques p. 76
Transactional memory p. 77
Event-driven programs p. 78
The Actor model p. 79
The limits of explicit control p. 80
Pointers and aliasing p. 81
Concluding remarks p. 82
Exercises p. 83
High-level Language Constructs p. 85
Common high-level constructs p. 88
Expression p. 89
Control flow primitives p. 91
Abstract types and data structures p. 92
Using and evaluating language constructs p. 94
Cognitive dimensions p. 98
Working with the cognitive dimensions p. 101
Implications of concurrency p. 102
Sequential constructs and concurrency p. 103
Interpreted languages p. 104
Exercises p. 106
Historical Context and Evolution of Languages p. 109
Evolution of machines p. 111
Multiprogramming and interrupt-driven I/O p. 111
Cache-based memory hierarchies p. 112
Pipelining and vector processing p. 113
Dataflow p. 114
Massively parallel computers p. 115
Clusters and distributed memory systems p. 117
Integration p. 118
Flynn's taxonomy p. 118
Evolution of programming languages p. 120
In the beginning, there was FORTRAN p. 120
The ALGOL family p. 122
Coroutines p. 125
CSP and process algebras p. 125
Concurrency in Ada p. 128
Declarative and functional languages p. 131
Parallel languages p. 138
Modern languages p. 144
Limits to automatic parallelization p. 145
Exercises p. 147
Modern Languages and Concurrency Constructs p. 149
Array abstraction p. 150
Array notation p. 152
Shifts p. 155
Index sets and regions p. 157
Message passing p. 158
The Actor model p. 160
Channels p. 160
Co-arrays p. 161
Control flow p. 163
ALGOL collateral clauses p. 163
PAR, SEQ and ALT in occam p. 164
Parallel loops p. 166
Functional languages p. 168
Functional operators p. 169
Discussion of functional operators p. 171
Exercises p. 172
Performance Considerations and Modern Systems p. 175
Memory p. 176
Architectural solutions to the performance problem p. 177
Examining single threaded memory performance p. 178
Shared memory and cache coherence p. 180
Distributed memory as a deeper memory hierarchy p. 185
Amdahl's law, speedup, and efficiency p. 186
Locking p. 188
Serialization p. 188
Blocking p. 189
Wasted operations p. 190
Thread overhead p. 191
Exercises p. 194
Introduction to Parallel Algorithms p. 197
Designing parallel algorithms p. 198
Finding concurrency p. 199
Strategies for exploiting concurrency p. 200
Algorithm patterns p. 201
Patterns supporting parallel source code p. 203
Demonstrating parallel algorithm patterns p. 204
Exercises p. 205
Pattern: Task Parallelism p. 207
Supporting algorithm structures p. 208
The Master-worker pattern p. 209
Implementation mechanisms p. 210
Abstractions supporting task parallelism p. 212
Case study: Genetic algorithms p. 215
Population management p. 218
Individual expression and fitness evaluation p. 220
Discussion p. 221
Case study: Mandelbrot set computation p. 222
The problem p. 222
Identifying tasks and separating master from worker p. 223
Cilk implementation p. 226
OpenMP implementation p. 229
Discussion p. 230
Exercises p. 230
Pattern: Data Parallelism p. 233
Data parallel algorithms p. 233
Case study: Matrix multiplication p. 236
Case study: Cellular automaton p. 238
Limitations of SIMD data parallel programming p. 240
Beyond SIMD p. 242
Approximating data parallelism with tasks p. 243
Geometric Decomposition p. 244
Exercises p. 245
Pattern: Recursive Algorithms p. 247
Recursion concepts p. 248
Recursion and concurrency p. 252
Recursion and the divide and conquer pattern p. 253
Case study: Sorting p. 254
Case study: Sudoku p. 257
Exercises p. 261
Pattern: Pipelined Algorithms p. 263
Pipelining as a software design pattern p. 265
Language support for pipelining p. 266
Case study: Pipelining in Erlang p. 267
Pipeline construction p. 268
Pipeline stage structure p. 269
Discussion p. 270
Case study: Visual cortex p. 272
PetaVision code description p. 274
Exercises p. 276
OpenMP Quick Reference p. 279
OpenMP fundamentals p. 280
Creating threads and their implicit tasks p. 280
OpenMP data environment p. 282
Synchronization and the OpenMP memory model p. 285
Work Sharing p. 288
OpenMP runtime library and environment variables p. 291
Explicit tasks and OpenMP 3.0 p. 292
Erlang Quick Reference p. 295
Language basics p. 295
Execution and memory model p. 300
Message passing syntax p. 301
Cilk Quick Reference p. 305
Cilk keywords p. 306
Cilk model p. 310
Work and span metrics p. 310
Memory model p. 311
Cilk standard library p. 312
Further information p. 314
References p. 315
Index p. 323
Table of Contents provided by Ingram. All Rights Reserved.

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
