Responsible Use of AI in Military Systems
Editor: Jan Maarten Schraagen
- Hardcover
The book provides the reader with a broad overview of all relevant aspects of the responsible development, deployment, and use of AI in military systems. It stresses both the advantages and the potential downsides of including AI in military systems.
Product Details
- Publisher: Taylor & Francis
- Pages: 374
- Publication date: 26 April 2024
- Language: English
- Dimensions: 234mm x 156mm x 22mm
- Weight: 717g
- ISBN-13: 9781032524306
- ISBN-10: 1032524308
- Item no.: 70146918
Jan Maarten Schraagen is Principal Scientist at TNO, The Netherlands. His research interests include human-autonomy teaming and responsible AI. He is the main editor of Cognitive Task Analysis (2000) and Naturalistic Decision Making and Macrocognition (2008) and co-editor of the Oxford Handbook of Expertise (2020). He is editor-in-chief of the Journal of Cognitive Engineering and Decision Making. Dr. Schraagen holds a PhD in Cognitive Psychology from the University of Amsterdam, The Netherlands.
Preface. Acknowledgements. Editor. Contributors.
1 Introduction to Responsible Use of AI in Military Systems.
SECTION I Implementing Military AI Responsibly: Models and Approaches.
2 A Socio-Technical Feedback Loop for Responsible Military AI Life Cycles from Governance to Operation.
3 How Can Responsible AI Be Implemented?
4 A Qualitative Risk Evaluation Model for AI-Enabled Military Systems.
5 Applying Responsible AI Principles into Military AI Products and Services: A Practical Approach.
6 Unreliable AIs for the Military.
SECTION II Liability and Accountability of Individuals and States.
7 Methods to Mitigate Risks Associated with the Use of AI in the Military Domain.
8 'Killer Pays': State Liability for the Use of Autonomous Weapons Systems in the Battlespace.
9 Military AI and Accountability of Individuals and States for War Crimes in the Ukraine.
10 Scapegoats!: Assessing the Liability of Programmers and Designers for Autonomous Weapons Systems.
SECTION III Human Control in Human-AI Military Teams.
11 Rethinking 'Meaningful Human Control'.
12 AlphaGo's Move 37 and Its Implications for AI-Supported Military Decision Making.
13 Bad, Mad, and Cooked: Moral Responsibility for Civilian Harms in Human-AI Military Teams.
14 Neglect Tolerance as a Measure for Responsible Human Delegation.
SECTION IV Policy Aspects.
15 Strategic Interactions: The Economic Complements of AI and the Political Context of War.
16 Promoting Responsible State Behavior on the Use of AI in the Military Domain: Lessons.
SECTION V Bounded Autonomy.
17 Bounded Autonomy.
Index.