79,99 €
incl. VAT
Free shipping*
Ready to ship in 1-2 weeks
  • Hardcover

Product Description
Rapidly growing cadres of sophisticated computing users need to understand how to exploit computing performance and the computer architectures that give rise to it. With an accessible, principle-based approach, this book offers a high-level view of the four key pillars of performance. Ideal for computer, data, or social scientists and engineers.
Note: This item can only be shipped to a German delivery address.
About the Author
Andrew A. Chien is William Eckhardt Professor at the University of Chicago, Director of the CERES Center for Unstoppable Computing, and a Senior Scientist at Argonne National Laboratory. Since 2017, he has served as Editor-in-Chief of the Communications of the ACM. He is currently a member of the National Science Foundation's CISE Directorate Advisory Board. Chien is a global research leader in parallel computing, computer architecture, clusters, and cloud computing, and has received numerous awards for his research. In 1994 he was named a National Science Foundation Young Investigator. Dr. Chien served as Vice President of Research at Intel Corporation from 2005 to 2010, and has served on advisory boards for the National Science Foundation, the Department of Energy, Japan's RWCP, and distinguished universities such as Stanford, UC Berkeley, EPFL, and the University of Washington. From 1998 to 2005, he was SAIC Chair Professor at UCSD, and prior to that, a professor at the University of Illinois. Dr. Chien is a Fellow of the ACM, a Fellow of the IEEE, and a Fellow of the AAAS, and earned his PhD, MS, and BS from the Massachusetts Institute of Technology.