4.7 Article

Driving Behavior Modeling Using Naturalistic Human Driving Data With Inverse Reinforcement Learning


IEEE Transactions on Intelligent Transportation Systems, Volume 23, Issue 8, Pages 10239-10251


DOI: 10.1109/TITS.2021.3088935


Keywords: Trajectory; Vehicles; Entropy; Decision making; Predictive models; Hidden Markov models; Task analysis; Driving behavior modeling; inverse reinforcement learning; trajectory generation; interaction awareness


Funding:
  1. A*STAR, Singapore [SERC 1922500046, A2084c0156]
  2. Nanyang Technological University [M4082268.050]


This paper presents a driving model based on internal reward functions that mimics the human decision-making mechanism. Maximum entropy inverse reinforcement learning is used to infer the reward function parameters from naturalistic human driving data. The results demonstrate that the learned reward functions capture the preferences of individual drivers and improve modeling accuracy.
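
For reference, the maximum entropy IRL formulation the summary refers to is the standard one from the literature; the notation below (feature map f, reward weights \theta) is a textbook statement, not quoted from the paper. Trajectory likelihood is exponential in reward, and the gradient of the demonstration log-likelihood is the gap between empirical and expected features:

  P(\tau \mid \theta) = \frac{\exp(\theta^{\top} f(\tau))}{Z(\theta)}, \qquad Z(\theta) = \sum_{\tau'} \exp(\theta^{\top} f(\tau'))

  \nabla_{\theta} \log P(\tau_{\mathrm{demo}} \mid \theta) = f(\tau_{\mathrm{demo}}) - \mathbb{E}_{\tau \sim P(\cdot \mid \theta)}[f(\tau)]

The sum defining Z(\theta) is intractable over continuous trajectories, which is why the paper approximates it with a finite candidate set from a polynomial trajectory sampler.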
Driving behavior modeling is of great importance for designing safe, smart, and personalized autonomous driving systems. In this paper, an internal reward function-based driving model that emulates the human decision-making mechanism is utilized. To infer the reward function parameters from naturalistic human driving data, we propose a structural assumption about human driving behavior that focuses on discrete latent driving intentions. It converts the continuous behavior modeling problem into a discrete setting and thus makes maximum entropy inverse reinforcement learning (IRL) tractable for learning reward functions. Specifically, a polynomial trajectory sampler is adopted to generate candidate trajectories that reflect high-level intentions and to approximate the partition function in the maximum entropy IRL framework. An environment model that accounts for the interactive behaviors between the ego vehicle and surrounding vehicles is built to better evaluate the generated trajectories. The proposed method is applied to learn personalized reward functions for individual human drivers from the NGSIM highway driving dataset. The qualitative results demonstrate that the learned reward functions explicitly express the preferences of different drivers and interpret their decisions. The quantitative results reveal that the learned reward functions are robust: proximity to the human driving trajectories declines only marginally when the reward functions are applied under testing conditions. On the test data, the personalized modeling method outperforms the general modeling approach, significantly reducing the modeling errors measured by human likeness (a custom accuracy metric), and both methods deliver better results than the other baselines. Moreover, predicting the response actions of surrounding vehicles, including the decelerations the ego vehicle may induce, proves critical for evaluating the generated trajectories, and the accuracy of personalized planning with the learned reward functions depends on the accuracy of this forecasting model.
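
The abstract names a polynomial trajectory sampler driven by high-level intentions. As a rough illustration only: quintic (fifth-order) polynomials are a common choice for highway maneuver generation, so the sketch below assumes them; the boundary conditions, intention set, and horizon are placeholder values, not the paper's actual sampler.

# Sketch of an intention-conditioned polynomial trajectory sampler.
# Quintic polynomials are an assumption here, not a confirmed detail of the paper.
import numpy as np

def quintic(x0, v0, a0, xT, vT, aT, T, n=50):
    """Quintic polynomial matching position/velocity/acceleration at t=0 and t=T.
    Returns the position profile sampled at n time steps."""
    A = np.array([
        [0,        0,        0,      0,    0, 1],   # p(0)   = x0
        [0,        0,        0,      0,    1, 0],   # p'(0)  = v0
        [0,        0,        0,      2,    0, 0],   # p''(0) = a0
        [T**5,     T**4,     T**3,   T**2, T, 1],   # p(T)   = xT
        [5*T**4,   4*T**3,   3*T**2, 2*T,  1, 0],   # p'(T)  = vT
        [20*T**3,  12*T**2,  6*T,    2,    0, 0],   # p''(T) = aT
    ])
    coef = np.linalg.solve(A, np.array([x0, v0, a0, xT, vT, aT]))
    return np.polyval(coef, np.linspace(0.0, T, n))

def sample_candidates(state, lateral_offsets, T=5.0):
    """One candidate per high-level intention (keep lane, change left/right),
    encoded as a terminal lateral offset. state = (x, y, vx); illustrative only."""
    x, y, vx = state
    candidates = []
    for dy in lateral_offsets:
        lon = quintic(x, vx, 0.0, x + vx * T, vx, 0.0, T)   # constant-speed longitudinal motion
        lat = quintic(y, 0.0, 0.0, y + dy, 0.0, 0.0, T)     # smooth lateral transition
        candidates.append(np.stack([lon, lat], axis=1))
    return candidates

trajs = sample_candidates((0.0, 0.0, 25.0), lateral_offsets=[-3.5, 0.0, 3.5])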

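Given such a candidate set, the maximum entropy IRL update reduces to a softmax over candidate rewards, with the finite set standing in for the partition function. The sketch below shows that step; the feature definitions, learning rate, and random stand-in candidates are illustrative assumptions (in practice the candidates would come from a sampler like the one above), not the paper's components.

# Minimal sketch: maximum entropy IRL with a sampled candidate set
# approximating the partition function. All names and features are assumed.
import numpy as np

def features(trajectory):
    """Map a trajectory (T x 3 array of x, y, v) to a feature vector.
    Placeholder features: mean speed, mean |acceleration|, mean |lateral offset|."""
    speed = trajectory[:, 2]
    accel = np.diff(speed, prepend=speed[0])
    return np.array([speed.mean(), np.abs(accel).mean(), np.abs(trajectory[:, 1]).mean()])

def maxent_irl_step(theta, human_traj, sampled_trajs, lr=0.05):
    """One gradient-ascent step on the max-ent IRL log-likelihood, with the
    partition function approximated by the finite set `sampled_trajs`."""
    f_human = features(human_traj)
    F = np.stack([features(t) for t in sampled_trajs])   # (N, d) candidate features
    logits = F @ theta                                   # linear reward of each candidate
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                 # softmax ~ P(tau | theta) over candidates
    grad = f_human - probs @ F                           # empirical minus expected features
    return theta + lr * grad

# Usage with stand-in data: random perturbations replace sampler output
# to keep the example self-contained.
rng = np.random.default_rng(0)
theta = np.zeros(3)
human = rng.normal(size=(50, 3))
candidates = [human + rng.normal(scale=0.5, size=human.shape) for _ in range(20)]
for _ in range(100):
    theta = maxent_irl_step(theta, human, candidates)
print("learned reward weights:", theta)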
