36,99 €
incl. VAT
Available immediately as a download
  • Format: PDF

Product Description
Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book utilizes a problem-solution approach to explaining machine learning models and their algorithms.
The book starts with model interpretation for supervised learning linear models, covering feature importance, partial dependency analysis, and influential data point analysis for both classification and regression models. Next, it explains supervised learning with non-linear models using state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time series models is covered using LIME and SHAP, as are natural language processing tasks such as text classification and sentiment analysis, using ELI5 and ALIBI. The book concludes with complex models for classification and regression, such as neural networks and deep learning models, using the Captum framework to show feature attribution, neuron attribution, and activation attribution.
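To give a flavor of the tabular, local-interpretation workflow described above, here is a minimal illustrative sketch using the SHAP and LIME libraries on a scikit-learn classifier; the dataset, model, and parameter choices are placeholder assumptions, not code taken from the book.

```python
# Illustrative only: explain a scikit-learn classifier with SHAP (global/local)
# and LIME (local). Dataset and model are placeholders.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: per-feature contribution scores for the tree ensemble's predictions
# (the exact return shape depends on the SHAP version and model type).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global feature-importance view

# LIME: a local surrogate explanation for one single prediction.
lime_explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())  # top features driving this particular prediction
```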
After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses.
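For the neural-network and deep learning explainability mentioned above, attribution with the Captum framework typically follows a pattern like the following minimal sketch; the toy model, inputs, and baseline are illustrative assumptions rather than examples from the book.

```python
# Illustrative only: feature attribution for a small PyTorch model with Captum.
# The network, inputs, and baseline below are toy placeholders.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

inputs = torch.randn(4, 10)           # a batch of 4 examples with 10 features each
baselines = torch.zeros_like(inputs)  # reference input the attribution is measured against

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines=baselines, target=0, return_convergence_delta=True
)
print(attributions.shape)  # (4, 10): one attribution score per feature of each example
```

Captum also provides classes such as NeuronConductance and LayerActivation for the neuron- and activation-level attributions the description mentions.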

What You Will Learn
  • Create code snippets and explain machine learning models using Python
  • Leverage deep learning models using the latest code with agile implementations
  • Build, train, and explain neural network models designed to scale
  • Understand the different variants of neural network models
Who This Book Is For
AI engineers, data scientists, and software developers interested in XAI

About the Author
Pradeepta Mishra is the Director of AI, Fosfor at L&T Infotech (LTI). He leads a large group of data scientists, computational linguistics experts, and machine learning and deep learning experts in building the next-generation product—Leni—which is the world’s first virtual data scientist. He has expertise across core branches of artificial intelligence, including autonomous ML and deep learning pipelines, ML ops, image processing, audio processing, natural language processing (NLP), natural language generation (NLG), design and implementation of expert systems, and personal digital assistants (PDAs). In 2019 and 2020, he was named one of "India's Top 40 Under 40 Data Scientists" by Analytics India magazine. Two of his books have been translated into Chinese and Spanish, based on popular demand.
Pradeepta delivered a keynote session at the Global Data Science Conference 2018, USA. He gave a TEDx talk on "Can Machines Think?", available on the official TEDx YouTube channel. He has mentored more than 2,000 data scientists globally and has delivered 200+ tech talks on data science, ML, DL, NLP, and AI at various universities, meetups, technical institutions, and community-arranged forums. He is a visiting faculty member at more than 10 universities, where he teaches deep learning and machine learning to professionals and mentors them in pursuing a rewarding career in artificial intelligence.