Paperback
"OpenMP is a widely used language for programming the nodes in a parallel computer. Those nodes are now heterogeneous, including a GPU alongside the traditional CPU"--
Other customers were also interested in
- Timothy G. Mattson (Intel Senior Research Scientist), The OpenMP Common Core, 45,99 €
- Jon Steinhart, The Secret Life of Programs, 31,99 €
- Thomas H. Cormen, Introduction to Algorithms, 135,99 €
- Matthias Felleisen (Northeastern University Trustee Professor), How to Design Programs, 67,99 €
- Patrick Cousot, Principles of Abstract Interpretation, 92,99 €
- Javier Esparza, Automata Theory, 87,99 €
- Programming Models for Parallel Computing, 66,99 €
Note: This item can only be shipped to a German delivery address.
Product details
- Publisher: MIT Press Ltd
- Number of pages: 336
- Publication date: November 7, 2023
- Language: English
- Dimensions: 267 mm x 176 mm x 16 mm
- Weight: 546 g
- ISBN-13: 9780262547536
- ISBN-10: 0262547538
- Item no.: 68431077
Tom Deakin is Lecturer in Advanced Computer Systems at the University of Bristol, researching the performance portability of massively parallel high-performance simulation codes. He has given tutorials and lecture series on parallel programming models including OpenMP, SYCL, and OpenCL.
Timothy G. Mattson is a senior principal engineer at Intel, where he has worked since 1993 on the first TFLOP computer; the creation of MPI, OpenMP, and OpenCL; HW/SW co-design of many-core processors; data management systems; and the GraphBLAS API for expressing graph algorithms as sparse linear algebra.
Series Foreword xiii
Preface xv
Acknowledgments xix
I Setting the Stage
1 Heterogeneity and the Future of Computing 5
1.1 The Basic Building Blocks of Modern Computing 7
1.1.1 The CPU 8
1.1.2 The SIMD Vector Unit 11
1.1.3 The GPU 15
1.2 OpenMP: A Single Code-Base for Heterogeneous Hardware 20
1.3 The Structure of This Book 21
1.4 Supplementary Materials 22
2 OpenMP Overview 23
2.1 Threads: Basic Concepts 23
2.2 OpenMP: Basic Syntax 27
2.3 The Fundamental Design Patterns of OpenMP 32
2.3.1 The SPMD Pattern 33
2.3.2 The Loop-Level Parallelism Pattern 37
2.3.3 The Divide-and-Conquer Pattern 42
2.3.3.1 Tasks in OpenMP 45
2.3.3.2 Parallelizing Divide-and-Conquer 48
2.4 Task Execution 49
2.5 Our Journey Ahead 51
II The GPU Common Core
3 Running Parallel Code on a GPU 59
3.1 Target Construct: Offloading Execution onto a Device 59
3.2 Moving Data between the Host and a Device 63
3.2.1 Scalar Variables 63
3.2.2 Arrays on the Stack 65
3.2.3 Derived Types 66
3.3 Parallel Execution on the Target Device 68
3.4 Concurrency and the Loop Construct 70
3.5 Example: Walking through Matrix Multiplication 72
4 Memory Movement 75
4.1 OpenMP Array Syntax 76
4.2 Sharing Data Explicitly with the Map Clause 78
4.2.1 The Map Clause 79
4.2.2 Example: Vector Add on the Heap 80
4.2.3 Example: Mapping Arrays in Matrix Multiplication 81
4.3 Reductions and Mapping the Result from the Device 82
4.4 Optimizing Data Movement 84
4.4.1 Target Data Construct 85
4.4.2 Target Update Directive 88
4.4.3 Target Enter/Exit Data 90
4.4.4 Pointer Swapping 91
4.5 Summary 99
5 Using the GPU Common Core 101
5.1 Recap of the GPU Common Core 101
5.2 The Eightfold Path to Performance 108
5.2.1 Portability 109
5.2.2 Libraries 110
5.2.3 The Right Algorithm 111
5.2.4 Occupancy 112
5.2.5 Converged Execution Flow 114
5.2.6 Data Movement 115
5.2.7 Memory Coalescence 119
5.2.8 Load Balance 121
5.3 Concluding the GPU Common Core 121
III Beyond the Common Core
6 Managing a GPU’s Hierarchical Parallelism 127
6.1 Parallel Threads 128
6.2 League of Teams of Threads 130
6.2.1 Controlling the Number of Teams and Threads 132
6.2.2 Distributing Work between Teams 135
6.3 Hierarchical Parallelism in Practice 139
6.3.1 Example: Batched Matrix Multiplication 140
6.3.2 Example: Batched Gaussian Elimination 142
6.4 Hierarchical Parallelism and the Loop Directive 143
6.4.1 Combined Constructs that Include Loop 145
6.4.2 Reductions and Combined Constructs 146
6.4.3 The Bind Clause 146
6.5 Summary 149
7 Revisiting Data Movement 151
7.1 Manipulating the Device Data Environment 151
7.1.1 Allocating and Deleting Variables 155
7.1.2 Map Type Modifiers 158
7.1.3 Changing the Default Mapping 160
7.2 Compiling External Functions and Static Variables for the Device 162
7.3 User-Defined Mappers 168
7.4 Team-Only Memory 173
7.5 Becoming a Cartographer: Mapping Device Memory by Hand 179
7.6 Unified Shared Memory for Productivity 185
7.7 Summary 189
8 Asynchronous Offload to Multiple GPUs 191
8.1 Device Discovery 193
8.2 Selecting a Default Device 194
8.3 Offload to Multiple Devices 196
8.3.1 Reverse Offload 198
8.4 Conditional Offload 200
8.5 Asynchronous Offload 201
8.5.1 Task Dependencies 202
8.5.2 Asynchronous Data Transfers 206
8.5.3 Task Reductions 208
8.6 Summary 210
9 Working with External Runtime Environments 213
9.1 Calling External Library Routines from OpenMP 213
9.2 Sharing OpenMP Data with Foreign Functions 217
9.2.1 The Need for Synchronization 221
9.2.2 Example: Sharing OpenMP Data with cuBLAS 222
9.3 Using Data from a Foreign Runtime with OpenMP 223
9.3.1 Example: Sharing cuBLAS Data with OpenMP 227
9.3.2 Avoiding Unportable Code 229
9.4 Direct Control of Foreign Runtimes 231
9.4.1 Query Properties of the Foreign Runtime 234
9.4.2 Using the Interop Construct to Correctly Synchronize with Foreign Functions 238
9.4.3 Non-blocking Synchronization with a Foreign Runtime 242
9.4.4 Example: Calling CUDA Kernels without Blocking 245
9.5 Enhanced Portability Using Variant Directives 248
9.5.1 Declaring Function Variants 250
9.5.1.1 OpenMP Context and the Match Clause 253
9.5.1.2 Modifying Variant Function Arguments 255
9.5.2 Controlling Variant Substitution with the Dispatch Construct 257
9.5.3 Putting It All Together 259
10 OpenMP and the Future of Heterogeneous Computing 263
Appendix: Reference Guide 265
A.1 Programming a CPU with OpenMP 266
A.2 Directives and Constructs for the GPU 268
A.2.1 Parallelism with Loop, Teams, and Worksharing Constructs 272
A.2.2 Constructs for Interoperability 275
A.2.3 Constructs for Device Data Environment Manipulation 278
A.3 Combined Constructs 281
A.4 Internal Control Variables, Environment Variables, and OpenMP API Functions 283
Glossary 287
References 301
Subject Index 305