Front Psychol. 2023 Jan 4;13:1052729. doi: 10.3389/fpsyg.2022.1052729. eCollection 2022.

Humans, machines, and double standards? The moral evaluation of the actions of autonomous vehicles, anthropomorphized autonomous vehicles, and human drivers in road-accident dilemmas

Maike M Mayer et al. Front Psychol. 2023.

Abstract

A more critical evaluation of the actions of autonomous vehicles in comparison to those of human drivers in accident scenarios may complicate the introduction of autonomous vehicles into daily traffic. In two experiments, we tested whether the evaluation of actions in road-accident scenarios differs as a function of whether the actions were performed by human drivers or autonomous vehicles. Participants judged how morally adequate they found the actions of a non-anthropomorphized autonomous vehicle (Experiments 1 and 2), an anthropomorphized autonomous vehicle (Experiment 2), and a human driver (Experiments 1 and 2) in otherwise identical road-accident scenarios. The more lives were spared, the better the action was evaluated, irrespective of the agent. However, regardless of the specific action that was chosen, the actions of the human driver were always considered more morally justifiable than the corresponding actions of the autonomous vehicle. The differences in the moral evaluations between the human driver and the autonomous vehicle were reduced, albeit not completely eliminated, when the autonomous vehicle was anthropomorphized (Experiment 2). Anthropomorphizing autonomous vehicles may thus influence the processes underlying moral judgments about their actions such that the actions of anthropomorphized autonomous vehicles appear closer in moral justifiability to those of humans. The observed differences in the moral evaluation of the actions of human drivers and autonomous vehicles could provoke a more critical public response to accidents involving autonomous vehicles than to those involving human drivers, a response that might be mitigated by anthropomorphizing the autonomous vehicles.

Keywords: anthropomorphism; autonomous agents; autonomous vehicle; human driver; moral evaluation.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
Two examples of the illustrations of the road-accident scenarios employed in the experiment. The images depict the two available actions for a road-accident scenario with five pedestrians on the road. (A) The person inside the vehicle is sacrificed to save the five pedestrians. (B) The five pedestrians are sacrificed to save the person inside the vehicle. The scenarios were created using Microsoft PowerPoint® and Apple Keynote®.
Figure 2
The mean moral evaluation of the actions (sacrificing the person inside the vehicle [dashed lines], sacrificing the pedestrian/s [solid lines]) as a function of the number of pedestrians on the road (1, 2, and 5) and the agent (human driver, autonomous vehicle). The moral-evaluation scale ranged from “very reprehensible” (1) to “very justifiable” (6). The error bars represent standard errors of the means.
Figure 3
The mean moral evaluation of the actions (sacrificing the person inside the vehicle [dashed lines], sacrificing the pedestrian/s [solid lines]) as a function of the number of pedestrians on the road (1, 2, and 5) and the agent (human driver, anthropomorphized autonomous vehicle, autonomous vehicle). The moral-evaluation scale ranged from “very reprehensible” (1) to “very justifiable” (6). The error bars represent standard errors of the mean.
