  • Format: PDF


Product Description
The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety, or it aborts. In addition, isolation ensures that the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from the programmer to a compiler, a language runtime system, or hardware. The challenge for system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010.

Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / Conclusions
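To make the transactional programming model concrete, here is a minimal sketch using GHC Haskell's Control.Concurrent.STM library, one readily available software transactional memory implementation (not any particular system surveyed in the book; the account scenario is illustrative only). Two threads move money between two shared accounts; each transfer runs atomically and in isolation, so the combined balance is preserved at every point without explicit locks.

  import Control.Concurrent (forkIO)
  import Control.Concurrent.STM
  import Control.Monad (replicateM_)

  -- Move 'amount' from one shared account to the other. The composed
  -- updates either commit together or not at all; no other thread can
  -- observe the state in between.
  transfer :: TVar Int -> TVar Int -> Int -> STM ()
  transfer from to amount = do
    modifyTVar' from (subtract amount)
    modifyTVar' to (+ amount)

  main :: IO ()
  main = do
    a <- newTVarIO 100
    b <- newTVarIO 100
    -- Two threads transfer in opposite directions, 1000 times each.
    _ <- forkIO (replicateM_ 1000 (atomically (transfer a b 1)))
    replicateM_ 1000 (atomically (transfer b a 1))
    -- Atomicity and isolation keep the invariant a + b == 200 at all
    -- times, so this prints 200 no matter how the threads interleave.
    total <- atomically ((+) <$> readTVar a <*> readTVar b)
    print total

The point of the sketch is that neither transfer takes a lock: the STM runtime detects conflicting accesses and re-executes a transaction rather than letting it commit an inconsistent result, which is exactly the burden-shifting the description above refers to.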

For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.

About the Authors
Tim Harris is a Senior Researcher at MSR Cambridge, where he works on abstractions for using multi-core computers. He has worked on concurrent algorithms and transactional memory for over ten years, most recently focusing on the implementation of STM for multi-core computers and the design of programming language features based on it. Harris is currently working on the Barrelfish operating system and on architecture support for programming language runtime systems. He has a BA and a PhD in computer science from the Cambridge University Computer Laboratory. He was on the faculty at the Computer Laboratory from 2000 to 2004, where he led the department's research on concurrent data structures and contributed to the Xen virtual machine monitor project. He joined Microsoft Research in 2004.

James Larus is a Research Area Manager for programming languages and tools at Microsoft Research. He manages the Advanced Compiler Technology, Human Interaction in Programming, Runtime Analysis and Design, and Software Reliability Research groups and co-leads the Singularity research project. He joined Microsoft Research as a Senior Researcher in 1998 to start and, for five years, lead the Software Productivity Tools (SPT) group, one of the most innovative and productive groups in the areas of program analysis and programming tools. Before joining Microsoft, Larus was an Assistant and Associate Professor at the University of Wisconsin-Madison, where he co-led the Wisconsin Wind Tunnel research project with Professors Mark Hill and David Wood. This DARPA- and NSF-funded project investigated new approaches to simulating, building, and programming parallel shared-memory computers. In addition, Larus's research covered a number of areas: new and efficient techniques for measuring and recording executing programs' behavior, tools for analyzing and manipulating compiled and linked programs, programming languages, tools for verifying program correctness, compiler analysis and optimization, and custom cache coherence protocols. Larus received a PhD from the University of California at Berkeley in 1989 and an AB from Harvard in 1980.