Article

Energy management for a hybrid electric vehicle based on prioritized deep reinforcement learning framework

Journal

ENERGY
Volume 241

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.energy.2021.122523

Keywords

Energy management control; Series hybrid electric vehicle; Double deep Q-learning algorithm; Modified prioritized experience replay; Adaptive optimization method

Funding

  1. National Natural Science Foundation of China [51805030]
  2. National Natural Science Foundation of China [51861135301]


Abstract

A novel deep reinforcement learning (DRL) control framework for the energy management strategy of the series hybrid electric tracked vehicle (SHETV) is proposed in this paper. First, the powertrain model of the vehicle is established and the energy management problem is formulated. Then, an efficient DRL framework based on the double deep Q-learning (DDQL) algorithm is built to solve the optimization problem; it also incorporates a modified prioritized experience replay (MPER) scheme and an adaptive method for optimizing network weights called AMSGrad. The proposed framework is verified on a realistic driving cycle, then compared with the dynamic programming (DP) method and a previous deep reinforcement learning method. Simulation results show that the newly constructed framework achieves higher training efficiency and lower energy consumption than the previous DRL method, and its fuel economy is shown to approach the global optimum. In addition, its adaptability and robustness are validated on different driving schedules. (c) 2021 Elsevier Ltd. All rights reserved.
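The abstract names three generic building blocks: a double deep Q-learning target, prioritized experience replay, and the AMSGrad weight update. The sketch below illustrates each in minimal form, using tabular Q-value arrays instead of neural networks for brevity. It is a generic illustration of these standard techniques, not the paper's implementation; the paper's modified prioritized experience replay (MPER) differs from the plain proportional scheme shown here, and all names and parameters below are illustrative assumptions.

```python
import numpy as np


class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (plain variant,
    not the paper's modified MPER scheme)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha  # how strongly priorities skew sampling
        self.data = []
        self.priorities = []

    def add(self, transition, priority=1.0):
        # Evict the oldest transition once the buffer is full.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size, rng):
        # Sampling probability proportional to priority^alpha.
        p = np.asarray(self.priorities) ** self.alpha
        p = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # New priority = |TD error| + eps, so no transition starves.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + eps


def double_q_target(q_online, q_target, s_next, reward, done, gamma=0.99):
    """Double Q-learning target: the online table selects the next
    action, the target table evaluates it (decoupling reduces the
    overestimation bias of vanilla Q-learning)."""
    a_star = int(np.argmax(q_online[s_next]))
    bootstrap = 0.0 if done else gamma * q_target[s_next, a_star]
    return reward + bootstrap


def amsgrad_step(w, g, m, v, vhat, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update: like Adam, but divides by the running
    MAXIMUM of the second-moment estimate, which guarantees a
    non-increasing effective step size."""
    m = b1 * m + (1.0 - b1) * g            # first moment (momentum)
    v = b2 * v + (1.0 - b2) * g * g        # second moment
    vhat = np.maximum(vhat, v)             # the AMSGrad max trick
    w = w - lr * m / (np.sqrt(vhat) + eps)
    return w, m, v, vhat
```

In a full agent these pieces fit together per training step: sample a batch from the buffer, compute the double-Q targets, update the network weights with AMSGrad, then write the new TD errors back as priorities.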

