Social behavior for autonomous vehicles

Wilko Schwarting et al.

Proc Natl Acad Sci U S A. 2019 Dec 10;116(50):24972-24978. doi: 10.1073/pnas.1820676116. Epub 2019 Nov 22.

Abstract

Deployment of autonomous vehicles on public roads promises increased efficiency and safety. It requires understanding the intent of human drivers and adapting to their driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. We integrate tools from social psychology into autonomous-vehicle decision making to quantify and predict the social behavior of other drivers and to behave in a socially compliant way. A key component is Social Value Orientation (SVO), which quantifies the degree of an agent's selfishness or altruism, allowing us to better predict how the agent will interact and cooperate with others. We model interactions between agents as a best-response game wherein each agent negotiates to maximize their own utility. We solve the dynamic game by finding the Nash equilibrium, yielding an online method of predicting multiagent interactions given their SVOs. This approach allows autonomous vehicles to observe human drivers, estimate their SVOs, and generate an autonomous control policy in real time. We demonstrate the capabilities and performance of our algorithm in challenging traffic scenarios: merging lanes and unprotected left turns. We validate our results in simulation and on human driving data from the NGSIM dataset. Our results illustrate how the algorithm's behavior adapts to social preferences of other drivers. By incorporating SVO, we improve autonomous performance and reduce errors in human trajectory predictions by 25%.
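
To make the decision model concrete, here is a minimal sketch, assuming the angular SVO weighting described in the abstract (an agent's own reward weighted by cos φ, the other agent's reward by sin φ) and a purely illustrative toy reward. The function names, reward terms, and candidate actions below are hypothetical, and the simple iterated best-response loop stands in for the paper's game solver; this is a hedged sketch, not the authors' implementation.

    import numpy as np

    # Hypothetical sketch: SVO-weighted utility plus iterated best response.
    # None of this reproduces the paper's actual reward or solver.

    def svo_utility(r_own, r_other, phi):
        # phi = 0: egoistic, phi = pi/4: prosocial, phi = pi/2: altruistic
        return np.cos(phi) * r_own + np.sin(phi) * r_other

    def toy_reward(a_self, a_other):
        # Illustrative reward: value own progress, penalize "conflict" when the
        # combined actions exceed what the merge gap allows.
        return a_self - 2.0 * max(0.0, a_self + a_other - 1.0)

    def best_response(actions, phi_self, a_other):
        # Pick the action maximizing the SVO-weighted utility, holding the
        # other agent's action fixed (one step of the best-response game).
        return max(actions,
                   key=lambda a: svo_utility(toy_reward(a, a_other),
                                             toy_reward(a_other, a), phi_self))

    actions = np.linspace(0.0, 1.0, 11)         # candidate actions, e.g. accelerations
    phis = [np.deg2rad(0.0), np.deg2rad(45.0)]  # agent 1 egoistic, agent 2 prosocial
    a = [0.5, 0.5]                              # initial guess
    for _ in range(20):                         # iterate toward a fixed point (Nash equilibrium)
        a = [best_response(actions, phis[0], a[1]),
             best_response(actions, phis[1], a[0])]
    print("approximate equilibrium actions:", a)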

Keywords: Social Value Orientation; autonomous driving; game theory; inverse reinforcement learning; social compliance.


Conflict of interest statement

Competing interest statement: W.S., A.P., J.A.-M., S.K., and D.R. are inventors on a provisional patent disclosure (filed by Massachusetts Institute of Technology) related to the social behavior for autonomous vehicles and uses thereof.

Figures

Fig. 1.
(A) Knowing a driver’s SVO helps predict their behavior. Here, the AV (blue) observes the trajectories of the other human driver (black). We can predict future motion of the black vehicle for candidate SVOs based on a utility-maximizing decision model (Driving as a Game in Mixed Human–Robot Systems). If the human driver is egoistic, they will not yield, and the AV must wait to turn. If the human driver is prosocial, they will yield, and the AV can safely turn. In both cases, the driver is utility-maximizing, but the utility function varies by SVO. An egoistic driver considers only its own reward in computing its utility. A prosocial driver weights its reward with the reward of the other car. The most likely SVO is the one that best matches a candidate trajectory to the actual observed trajectory (Measuring and Estimating SVO Online). The AV predicts future motion using the most likely SVO estimate. (B) SVO is represented as an angular preference φ that relates how individuals weight rewards in a social dilemma game. Here, we plot the estimated SVOs for drivers merging in the NGSIM dataset, explained in Methods and Results. (C) The distribution of mean SVO estimates during interactions. We find merging drivers (red) to be more competitive than nonmerging drivers (blue).
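
A minimal sketch of the SVO-estimation idea in panel A, assuming trajectory-matching error as the score: each candidate SVO angle is used to predict a trajectory, and the angle whose prediction best matches the observation is kept (equivalent to maximum likelihood under Gaussian observation noise). The predict_trajectory callable stands in for the utility-maximizing motion model; everything below is hypothetical, not the paper's code.

    import numpy as np

    def estimate_svo(observed_traj, predict_trajectory,
                     candidate_phis=np.deg2rad(np.arange(-15.0, 91.0, 15.0))):
        # Score each candidate SVO angle by how well the trajectory it predicts
        # matches the observed trajectory; the smallest squared error wins.
        errors = [np.sum((predict_trajectory(phi) - observed_traj) ** 2)
                  for phi in candidate_phis]
        return candidate_phis[int(np.argmin(errors))]

    # Toy usage: a "trajectory" here is just an array of lateral gaps, and we
    # pretend a more prosocial driver (larger phi) leaves a larger gap.
    toy_predict = lambda phi: np.linspace(0.0, np.sin(phi), 10)
    observed = np.linspace(0.0, 0.7, 10)    # what the other driver actually did
    print(np.rad2deg(estimate_svo(observed, toy_predict)))   # ~45 degrees, prosocial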
Fig. 2.
(Upper) Snapshot of the NGSIM dataset with 2 active cars (purple and green) and 50 obstacle cars (gray). Here, car 1 (purple) is attempting a merge and must interact with car 2 (green). The solid lines indicate the predicted trajectories from our algorithm. For the SVO estimate at each frame, the blue region represents the distribution and the red line indicates our estimate. (Lower) The solid line indicates the SVO estimate over time, with the shaded region representing the confidence bounds. Initially, car 2 does not cooperate with car 1 and does not allow it to merge. After a few seconds, car 2 becomes more prosocial, which corresponds to it “dropping back” and allowing the first car to merge.
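
A hedged sketch of how per-frame matching errors could be turned into the distribution and confidence band shown in this figure: normalize the errors into weights (a softmax over negative error is one simple choice, not necessarily the one used in the paper) and report the weighted mean and a 1-σ width. All names and numbers are illustrative.

    import numpy as np

    def svo_distribution(errors, candidate_phis, temperature=1.0):
        # Convert per-frame trajectory-matching errors into normalized weights
        # over candidate SVO angles, then summarize with a mean and 1-sigma width.
        w = np.exp(-np.asarray(errors, dtype=float) / temperature)
        w /= w.sum()
        mean = float(np.sum(w * candidate_phis))
        std = float(np.sqrt(np.sum(w * (candidate_phis - mean) ** 2)))
        return w, mean, std   # distribution, point estimate, confidence half-width

    phis = np.deg2rad(np.array([0.0, 45.0, 90.0]))
    print(svo_distribution([2.0, 0.5, 3.0], phis))   # weights peak at the prosocial angle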
Fig. 3.
(Left) Estimated distribution of the SVO preference of the blue car, shown as polar histograms in SVO circles before and during the merge. (Right) SVO estimates with 1-σ uncertainty bounds; the mean estimate is shown in red and the ground truth (80°, altruistic) in black. The gray area in both panels marks the region of strong interaction.
Fig. 4.
Unprotected left turn of an AV (red; i=1) with oncoming traffic. As the AV approaches the intersection, two egoistic cars (blue; i=2,3) continue and do not yield. A third altruistic car (magenta; i=4) yields by slowing down, allowing the AV to complete the turn in the gap.
