Autonomous morals: Inferences of mind predict acceptance of AI behavior in sacrificial moral dilemmas

Publication date: November 2019
Source: Journal of Experimental Social Psychology, Volume 85
Author(s): April D. Young, Andrew E. Monroe

Abstract
Three studies compared people's judgments of autonomous vehicles (AVs) and humans faced with a moral dilemma. Study 1 showed that, for identical decisions, AVs were judged as more blameworthy, less moral, and less trustworthy than humans. However, perceiving AVs as having a human-like mind reduced this difference. Study 2 extended this finding by manipulating AV mindedness: describing AVs' decision-making capacity in mentalistic terms (relative to mechanistic terms) reduced blame and anger and fostered greater trust and perceptions of morality. Study 3 replicated these findings and demonstrated that perceived mindedness predicted judgments of trust, morality, and willingness to purchase or ride in an AV. These findings suggest that people's moral reservations about AVs may derive from doubting that AVs have the mental capacities necessary for moral judgment, and that one route for improving trust in AVs is to design them with a veneer of human-like mental qualities.