67,99 €
incl. VAT
Free shipping*
Ready to ship in 1-2 weeks
  • Paperback

Product description
A deep dive into the key aspects and challenges of machine learning interpretability using a comprehensive toolkit, including SHAP, feature importance, and causal inference, to build fairer, safer, and more reliable models. Purchase of the print or Kindle book includes a free eBook in PDF format.

Key Features
  • Interpret real-world data, including cardiovascular disease data and the COMPAS recidivism scores
  • Build your interpretability toolkit with global, local, model-agnostic, and model-specific methods
  • Analyze and extract insights from complex models, from CNNs to BERT to time series models

Book Description
Interpretable Machine Learning with Python, Second Edition, brings to light the key concepts of interpreting machine learning models by analyzing real-world data, providing you with a wide range of skills and tools to decipher the results of even the most complex models. Build your interpretability toolkit with several use cases, from flight delay prediction to waste classification to COMPAS risk assessment scores. This book is full of useful techniques, introducing each one alongside the right use case. Learn methods ranging from traditional ones, such as feature importance and partial dependence plots, to advanced techniques such as integrated gradients for NLP interpretations and gradient-based attribution methods like saliency maps. In addition to the step-by-step code, you'll get hands-on with tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. By the end of the book, you'll be confident in tackling interpretability challenges with black-box models using tabular, language, image, and time series data.

What you will learn
  • Progress from basic to advanced techniques, such as causal inference and quantifying uncertainty
  • Build your skillset from analyzing linear and logistic models to complex ones, such as CatBoost, CNNs, and NLP transformers
  • Use monotonic and interaction constraints to make fairer and safer models
  • Understand how to mitigate the influence of bias in datasets
  • Leverage sensitivity analysis factor prioritization and factor fixing for any model
  • Discover how to make models more reliable with adversarial robustness

Who this book is for
This book is for data scientists, machine learning developers, machine learning engineers, MLOps engineers, and data stewards who have an increasingly critical responsibility to explain how the artificial intelligence systems they develop work, their impact on decision-making, and how they identify and manage bias. It's also a useful resource for self-taught ML enthusiasts and beginners who want to go deeper into the subject matter, though a good grasp of the Python programming language is needed to implement the examples.

Table of Contents
  • Interpretation, Interpretability, and Explainability; and Why Does It All Matter?
  • Key Concepts of Interpretability
  • Interpretation Challenges
  • Global Model-Agnostic Interpretation Methods
  • Local Model-Agnostic Interpretation Methods
  • Anchors and Counterfactual Explanations
  • Visualizing Convolutional Neural Networks
  • Interpreting NLP Transformers
  • Interpretation Methods for Multivariate Forecasting and Sensitivity Analysis
  • Feature Selection and Engineering for Interpretability
  • Bias Mitigation and Causal Inference Methods
  • Monotonic Constraints and Model Tuning for Interpretability
  • Adversarial Robustness
  • What's Next for Machine Learning Interpretability?
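To give a flavor of the kind of interpretability workflow the blurb describes, here is a minimal sketch (not taken from the book) of global and local explanations with the shap library. It assumes a scikit-learn toy dataset and an XGBoost classifier as stand-ins for the book's own datasets and models, such as the cardiovascular disease data:

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Stand-in dataset; the book works with datasets such as
# cardiovascular disease records and COMPAS recidivism scores.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Stand-in black-box model (the book also covers CatBoost, CNNs, and transformers).
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# Compute SHAP values with the unified Explainer API.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Global view: mean absolute SHAP value per feature (feature importance).
shap.plots.bar(shap_values)

# Local view: contribution of each feature to a single prediction.
shap.plots.waterfall(shap_values[0])
```

This is only an illustrative sketch of one tool; the book builds a broader toolkit that also covers partial dependence plots, anchors and counterfactuals, integrated gradients, saliency maps, monotonic constraints, and more.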
Note: This item can only be delivered to a German shipping address.
About the author
Serg Masís has been at the confluence of the internet, application development, and analytics for the last two decades. Currently, he's a climate and agronomic data scientist at Syngenta, a leading agribusiness company with a mission to improve global food security. Before that role, he co-founded a start-up, incubated by Harvard Innovation Labs, that combined the power of cloud computing and machine learning with principles in decision-making science to expose users to new places and events. Whether it pertains to leisure activities, plant diseases, or customer lifetime value, Serg is passionate about providing the often-missing link between data and decision-making, and machine learning interpretation helps bridge this gap robustly.