32,99 €
incl. VAT
Free shipping*
Ready to ship in 6-10 days
  • Paperback

Product Description
Ever since the computer was invented, increasingly powerful software has been developed, from feature-rich PC applications to multithreaded programs running on the servers of major web sites. This software pushes hardware, and processor design in particular, to its limits. The processor cache is key to system performance. Unlike hard drives and DRAM, whose capacities have grown rapidly in recent years, processor caches remain at a few megabytes because of their high cost. Software generally suffers from the small cache size through a high cache miss rate. Data prefetching is a mechanism that efficiently reduces the cache miss rate and thereby improves system performance. This book gives a complete introduction to data prefetching for processors, presents a detailed analysis of the cache miss patterns of modern benchmarks, and describes innovative, advanced data prefetching designs.
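To illustrate the idea the description refers to, here is a minimal sketch of data prefetching done in software, assuming a GCC/Clang toolchain with the real __builtin_prefetch intrinsic; the book itself is concerned with hardware prefetcher designs, and the lookahead distance of 16 elements below is an arbitrary illustrative choice, not a value taken from the book.

#include <stddef.h>

/* Sum an array while hinting the cache a few iterations ahead.
 * __builtin_prefetch(addr, rw, locality): rw = 0 means the data
 * will only be read, locality = 3 means keep it in all cache
 * levels. If the hinted line arrives before a[i] is needed, the
 * load no longer stalls on a cache miss. */
long sum_with_prefetch(const long *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)                          /* hypothetical lookahead distance */
            __builtin_prefetch(&a[i + 16], 0, 3);
        sum += a[i];
    }
    return sum;
}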
About the Author
Gang Liu received his bachelor's degree from Tsinghua University in 2000, his master's degree from the Chinese Academy of Sciences in 2003, and his Ph.D. from the University of Florida in 2010. He now works for Google and lives in California, USA.