Journal
TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES
Volume 126, Issue -, Pages -
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.trc.2021.102967
Keywords
Eco-driving; Deep reinforcement learning; Connected vehicles; Automated driving; Electric vehicles
Funding
- German Research Foundation
- German Council of Science and Humanities
- European Regional Development Fund [EFRE-0801698]
- RWTH Aachen University [rwth0477]
Urban settings are challenging environments in which to implement eco-driving strategies for automated vehicles. It is often assumed that sufficient information on the platoon of preceding vehicles is available to accurately predict the traffic situation. Because vehicle-to-vehicle communication was introduced only recently, this assumption will not hold until a sufficiently high penetration of the vehicle fleet has been reached. Thus, in the present study, we employed Reinforcement Learning (RL) to develop eco-driving strategies for cases in which little data on the traffic situation are available. An A-segment electric vehicle was simulated using detailed efficiency models to accurately determine its energy-saving potential. A probabilistic traffic environment featuring signalized urban roads and multiple preceding vehicles was integrated into the simulation model. Only information on the traffic light timing and minimal sensor data were provided to the control algorithm. A twin-delayed deep deterministic policy gradient (TD3) agent was implemented and trained to control the vehicle efficiently and safely in this environment. Energy savings of up to 19% compared with a simulated human driver and up to 11% compared with a fine-tuned Green Light Optimal Speed Advice (GLOSA) algorithm were determined in a probabilistic traffic scenario reflecting real-world conditions. Overall, the RL agents showed a better travel time and energy consumption trade-off than the GLOSA reference.
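The record does not reproduce the paper's implementation, but the two ingredients that distinguish the TD3 algorithm it names from plain DDPG can be sketched in isolation: a clipped double-Q target that takes the minimum of two target critics, and target policy smoothing via clipped Gaussian noise on the target action. The following is a minimal sketch under those general definitions, not the authors' code; all function names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def td3_target(reward, gamma, q1_next, q2_next):
    # Clipped double-Q target: use the minimum of the two target
    # critics' estimates to curb overestimation bias.
    return reward + gamma * np.minimum(q1_next, q2_next)

def smoothed_target_action(policy_action, noise_std=0.2, noise_clip=0.5,
                           action_low=-1.0, action_high=1.0):
    # Target policy smoothing: perturb the target policy's action with
    # clipped Gaussian noise, then clip back into the action bounds, so
    # the critic is trained to be smooth over nearby actions.
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(policy_action)),
                    -noise_clip, noise_clip)
    return np.clip(policy_action + noise, action_low, action_high)

# Example: reward 1.0, discount 0.99, twin critic estimates 2.0 and 1.5
# -> target uses the smaller estimate: 1.0 + 0.99 * 1.5 = 2.485
target = td3_target(1.0, 0.99, 2.0, 1.5)
```

The third TD3 ingredient, delayed actor and target-network updates (updating the policy only every few critic steps), is a training-loop schedule rather than a formula and is omitted here.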