The inside story of the groundbreaking experiment that captured what people think about the life-and-death dilemmas posed by driverless cars.

Human drivers don't find themselves facing such moral dilemmas as "should I sacrifice myself by driving off a cliff if that could save the life of a little girl on the road?" Human brains aren't fast enough to make that kind of calculation; the car is over the cliff in a nanosecond. A self-driving car, on the other hand, can compute fast enough to make such a decision: to do whatever humans have programmed it to do. But what should that be? This book investigates how people want driverless cars to decide matters of life and death.

In The Car That Knew Too Much, psychologist Jean-François Bonnefon reports on a groundbreaking experiment that captured what people think cars should do in situations where not everyone can be saved. Sacrifice the passengers for pedestrians? Save children rather than adults? Kill one person so that many can live? Bonnefon and his collaborators Iyad Rahwan and Azim Shariff designed the largest experiment in moral psychology ever: the Moral Machine, an interactive website that has allowed people (eventually, millions of them, from 233 countries and territories) to make choices within detailed accident scenarios. Bonnefon discusses the responses (reporting, among other things, that babies, children, and pregnant women were most likely to be saved), the media frenzy over news of the experiment, and scholarly responses to it.

Boosters for driverless cars argue that they will be in fewer accidents than human-driven cars. It's up to humans to decide how many fatal accidents we will allow these cars to have.