This book offers a unique interdisciplinary perspective on the ethics of 'artificial intelligence': autonomous, intelligent, and connected systems (AISs). It applies principles of social cognition to understand the social and ethical issues associated with the creation, adoption, and implementation of AISs.
As humans become entangled in sociotechnical systems defined by human and artificial agents, there is a pressing need to understand how trust is created, used, and abused. Compounding the difficulty in answering these questions, stakeholders directly or indirectly affected by these systems differ in their motivations, understanding, and values. This volume provides a comprehensive resource to help stakeholders understand the ethical issues of designing and implementing AISs using an ethical sensemaking approach. Starting with the general technical affordances of AISs, Dr. Jordan Richard Schoenherr considers the features of system design relating to data integrity, the selection and interpretation of algorithms, and the evolutionary processes that drive AIS innovation as a sociotechnical system. The poles of technophobia (algorithmic aversion) and technophilia (algorithmic preference) in the public perception of AISs are then described and weighed against existing evidence, covering issues ranging from the displacement and re-education needs of the human workforce, to the impact of technology use on interpersonal accord, to surveillance and cybersecurity. Ethical frameworks that provide tools for evaluating the values and outcomes of AISs are then reviewed, and the book explores how they can be aligned with ethical sensemaking processes identified by psychological science. Finally, these disparate threads are brought together in a design framework.
Also including sections on policies and guidelines, gaming and social media, and Eastern philosophical frameworks, this is fascinating reading for students and academics in psychology, computer science, philosophy, and related areas, as well as professionals such as policy makers and those working with AI systems.