A comprehensive collection of benchmarks for measuring dependability in hardware-software systems. As computer systems have become more complex and mission-critical, it is imperative for systems engineers and researchers to have metrics for a system's dependability, reliability, availability, and serviceability. Dependability benchmarks are useful for guiding development efforts for system providers, acquisition choices of system purchasers, and evaluations of new concepts by researchers in academia and industry. This book gathers the dependability benchmarks developed to date by industry and academia and explains the principles and concepts of dependability benchmarking. It collects the expert knowledge of DBench, a research project funded by the European Union, and of the IFIP Special Interest Group on Dependability Benchmarking, to shed light on this important area. It also provides a broad panorama of examples and recommendations for defining dependability benchmarks. Dependability Benchmarking for Computer Systems includes contributions from a credible mix of industrial and academic sources: IBM, Intel, Microsoft, Sun Microsystems, Critical Software, Carnegie Mellon University, LAAS-CNRS, Technical University of Valencia, University of Coimbra, and University of Illinois. It is an invaluable resource for engineers, researchers, system vendors, system purchasers, computer industry consultants, and system integrators.
Karama Kanoun is Directeur de Recherche at LAAS-CNRS, France. Her research interests include the modeling and evaluation of computer system dependability. She was the principal investigator for the DBench (Dependability Benchmarking) European project, and has been a consultant for the European Space Agency, Ansaldo Trasporti, and the International Telecommunication Union. Kanoun is vice-chair of the IFIP WG 10.4 on Dependable Computing and Fault Tolerance and chairs its SIG on Dependability Benchmarking. She also chairs the French SEE Technical Committee on Trustworthy Computer Systems.

Lisa Spainhower is an IBM Distinguished Engineer in the System Design organization of the Systems and Technology Group (STG). STG designs and develops IBM's semiconductor technology, servers ranging from small x86-based systems to clusters of mainframes, operating systems, and storage subsystems. She is also a member of the IBM Academy of Technology, the IEEE, the IEEE Computer Society, and the Executive Committee of the Technical Committee on Fault-Tolerant Computing. Spainhower is vice-chair of the IFIP WG 10.4 SIG on Dependability Benchmarking.
Contents
Preface vii
Contributors xi
Prologue: Dependability Benchmarking: A Reality or a Dream? (Karama Kanoun, Phil Koopman, Henrique Madeira, and Lisa Spainhower) xiii
1. The Autonomic Computing Benchmark (Joyce Coleman, Tony Lau, Bhushan Lokhande, Peter Shum, Robert Wisniewski, and Mary Peterson Yost) 3
2. Analytical Reliability, Availability, and Serviceability Benchmarks (Richard Elling, Ira Pramanick, James Mauro, William Bryson, and Dong Tang) 23
3. System Recovery Benchmarks (Richard Elling, Ira Pramanick, James Mauro, William Bryson, and Dong Tang) 35
4. Dependability Benchmarking Using Environmental Test Tools (Cristian Constantinescu) 55
5. Dependability Benchmark for OLTP Systems (Marco Vieira, João Durães, and Henrique Madeira) 63
6. Dependability Benchmarking of Web Servers (João Durães, Marco Vieira, and Henrique Madeira) 91
7. Dependability Benchmark of Automotive Engine Control Systems (Juan-Carlos Ruiz, Pedro Gil, Pedro Yuste, and David de-Andrés) 111
8. Toward Evaluating the Dependability of Anomaly Detectors (Kymie M. C. Tan and Roy A. Maxion) 141
9. Vajra: Evaluating Byzantine-Fault-Tolerant Distributed Systems (Sonya J. Wierman and Priya Narasimhan) 163
10. User-Relevant Software Reliability Benchmarking (Mario R. Garzia) 185
11. Interface Robustness Testing: Experience and Lessons Learned from the Ballista Project (Philip Koopman, Kobey DeVale, and John DeVale) 201
12. Windows and Linux Robustness Benchmarks with Respect to Application Erroneous Behavior (Karama Kanoun, Yves Crouzet, Ali Kalakech, and Ana-Elena Rugina) 227
13. DeBERT: Dependability Benchmarking of Embedded Real-Time Off-the-Shelf Components for Space Applications (Diamantino Costa, Ricardo Barbosa, Ricardo Maia, and Francisco Moreira) 255
14. Benchmarking the Impact of Faulty Drivers: Application to the Linux Kernel (Arnaud Albinet, Jean Arlat, and Jean-Charles Fabre) 285
15. Benchmarking the Operating System against Faults Impacting Operating System Functions (Ravishankar Iyer, Zbigniew Kalbarczyk, and Weining Gu) 311
16. Neutron Soft Error Rate Characterization of Microprocessors (Cristian Constantinescu) 341
Index 351