€74.89
incl. VAT
Available immediately via download
- Format: PDF
- Devices: PC
- No copy protection
- Size: 8.51 MB
- Upload possible
Other customers were also interested in
- Loveleen Gaur: Explainable Artificial Intelligence for Intelligent Transportation Systems (eBook, PDF), €160.49
- Pradeepta Mishra: Practical Explainable AI Using Python (eBook, PDF), €66.99
- Georg Dedikov: Explainable AI and User Experience. Prototyping and Evaluating an UX-Optimized XAI Interface in Computer Vision (eBook, PDF), €39.99
- Pradeepta Mishra: Explainable AI Recipes (eBook, PDF), €36.99
- Uday Kamath: Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning (eBook, PDF), €139.09
- Aarushi Kansal: Building Generative AI-Powered Apps (eBook, PDF), €52.99
- Anshik: AI for Healthcare with Keras and Tensorflow 2.0 (eBook, PDF), €62.99
This book provides a full presentation of the current concepts and available techniques for making machine learning systems more explainable. The approaches presented can be applied to almost all current machine learning models: linear and logistic regression, deep learning neural networks, natural language processing, and image recognition, among others.
Progress in machine learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (in healthcare, legal, and finance settings, among others). While the principles that guide the design of these agents are understood, most current deep learning models are "opaque" to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, quickly equipping the reader to work with the tools and code of Explainable AI.
Beginning with examples of what Explainable AI (XAI) is and why it is needed, the book details different approaches to XAI depending on the specific context and need. It then presents hands-on work on interpretable models, with specific Python examples showing how intrinsically interpretable models can be interpreted and how to produce "human-understandable" explanations. Model-agnostic methods for XAI are shown to produce explanations without relying on the internals of "opaque" ML models. Using examples from computer vision, the authors then look at explainable models for deep learning and at prospective methods for the future. Taking a practical perspective, the authors demonstrate how to use ML and XAI effectively in science. The final chapter explains adversarial machine learning and how to do XAI with adversarial examples.
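To illustrate the kind of model-agnostic workflow the book describes, here is a minimal sketch (not taken from the book's own code) using scikit-learn's permutation importance, which treats a fitted model as a black box and ranks features by how much the held-out score drops when each one is shuffled; the dataset and model choices are illustrative assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data and record the mean
# drop in accuracy: a model-agnostic, global measure of importance.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most important features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```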
Product details
- Publisher: Springer International Publishing
- Publication date: 28 April 2021
- Language: English
- ISBN-13: 9783030686406
- Item no.: 61667379
Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in cyber security as R&D Director at Cyber Guru. Before joining Cyber Guru he worked at IBM for 15 years in leading software development roles in IT Service Management (ITSM). He is the author of several publications in theoretical physics and computer science and is accredited as an IBM Master Inventor (15+ filings).
Antonio Di Cecco is a theoretical physicist with a strong mathematical background who is fully engaged in delivering AI/ML education at all levels, from beginners to experts, both in person and remotely. The main strength of his approach is deep-diving into the mathematical foundations of AI/ML models, which opens new angles for presenting AI/ML knowledge and room for improving the existing state of the art. Antonio also holds a master's degree in economics with a focus on innovation, along with teaching experience. He leads the School of AI in Italy, with chapters in Rome and Pescara.
Table of contents
1. The Landscape.- 1.1 Examples of what Explainable AI is.- 1.1.1 Learning Phase.- 1.1.2 Knowledge Discovery.- 1.1.3 Reliability and Robustness.- 1.1.4 What have we learnt from the 3 examples.- 1.2 Machine Learning and XAI.- 1.2.1 Machine Learning taxonomy.- 1.2.2 Common Myths.- 1.3 The need for Explainable AI.- 1.4 Explainability and Interpretability: different words to say the same thing or not?.- 1.4.1 From World to Humans.- 1.4.2 Correlation is not causation.- 1.4.3 So what is the difference between interpretability and explainability?.- 1.5 Making Machine Learning systems explainable.- 1.5.1 The XAI flow.- 1.5.2 The big picture.- 1.6 Do we really need to make Machine Learning models explainable?.- 1.7 Summary.- 1.8 References.
2. Explainable AI: needs, opportunities and challenges.- 2.1 Human in the loop.- 2.1.1 Centaur XAI systems.- 2.1.2 XAI evaluation from the "Human in the Loop" perspective.- 2.2 How to make Machine Learning models explainable.- 2.2.1 Intrinsic Explanations.- 2.2.2 Post-Hoc Explanations.- 2.2.3 Global or Local Explainability.- 2.3 Properties of Explanations.- 2.4 Summary.- 2.5 References.
3. Intrinsic Explainable Models.- 3.1 Loss Function.- 3.2 Linear Regression.- 3.3 Logistic Regression.- 3.4 Decision Trees.- 3.5 K-Nearest Neighbors (KNN).- 3.6 Summary.- 3.7 References.
4. Model-agnostic methods for XAI.- 4.1 Global Explanations: Permutation Importance and Partial Dependence Plot.- 4.1.1 Ranking features by Permutation Importance.- 4.1.2 Permutation Importance on the train set.- 4.1.3 Partial Dependence Plot.- 4.1.4 Properties of Explanations.- 4.2 Local Explanations: XAI with Shapley Additive Explanations.- 4.2.1 Shapley Values: a game-theoretical approach.- 4.2.2 The first use of SHAP.- 4.2.3 Properties of Explanations.- 4.3 The road to KernelSHAP.- 4.3.1 The Shapley formula.- 4.3.2 How to calculate Shapley values.- 4.3.3 Local Linear Surrogate Models (LIME).- 4.3.4 KernelSHAP is a unique form of LIME.- 4.4 KernelSHAP and interactions.- 4.4.1 The New York Cab scenario.- 4.4.2 Train the model with preliminary analysis.- 4.4.3 Making the model explainable with KernelSHAP.- 4.4.4 Interactions of features.- 4.5 A faster SHAP for boosted trees.- 4.5.1 Using TreeSHAP.- 4.5.2 Providing explanations.- 4.6 A naïve criticism of SHAP.- 4.7 Summary.- 4.8 References.
5. Explaining Deep Learning Models.- 5.1 Agnostic Approach.- 5.1.1 Adversarial Features.- 5.1.2 Augmentations.- 5.1.3 Occlusions as augmentations.- 5.1.4 Occlusions as an Agnostic XAI Method.- 5.2 Neural Networks.- 5.2.1 The neural network structure.- 5.2.2 Why is the neural network deep (vs. shallow)?.- 5.2.3 Rectified activations (and Batch Normalization).- 5.2.4 Saliency Maps.- 5.3 Opening Deep Networks.- 5.3.1 Different layer explanation.- 5.3.2 CAM (Class Activation Maps) and Grad-CAM.- 5.3.3 DeepSHAP / DeepLIFT.- 5.4 A critique of Saliency Methods.- 5.4.1 What the network sees.- 5.4.2 Explainability: batch normalizing layer by layer.- 5.5 Unsupervised Methods.- 5.5.1 Unsupervised Dimensional Reduction.- 5.5.2 Dimensional reduction of convolutional filters.- 5.5.3 Activation Atlases: how to tell a wok from a pan.- 5.6 Summary.- 5.7 References.
6. Making science with Machine Learning and XAI.- 6.1 The scientific method in the age of data.- 6.2 The Ladder of Causation.- 6.3 Discovering physics concepts with ML and XAI.- 6.3.1 The magic of autoencoders.- 6.3.2 Discovering the physics of a damped pendulum with ML and XAI.- 6.3.3 Climbing the ladder of causation.- 6.4 Science in the age of ML and XAI.- 6.5 Summary.- 6.6 References.
7. Adversarial Machine Learning and Explainability.- 7.1 Adversarial Examples (AE) crash course.- 7.1.2 Hands-on Adversarial Examples.- 7.2 Doing XAI with Adversarial Examples.- 7.3 Defending against Adversarial Attacks with XAI.- 7.4 Summary.- 7.5 References.
8. A proposal for a sustainable model of Explainable AI.- 8.1 The XAI "fil rouge".- 8.2 XAI and GDPR.- 8.2.1 FAST XAI.- 8.3 Conclusions.- 8.4 Summary.- 8.5 References.
Index.
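As a taste of the local-explanation material in Chapter 4, the following hypothetical sketch computes Shapley values for a boosted-tree regressor with the shap library (TreeExplainer being the fast tree-specific variant that section 4.5 refers to); the dataset and model are illustrative assumptions, not the book's own example:

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Illustrative data and model; the book's examples may differ.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree
# ensembles; each row attributes one prediction to individual features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Aggregate the local explanations into a global summary plot.
shap.summary_plot(shap_values, X.iloc[:100])
```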