Journal
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Volume 19, Issue 7, Pages 2165-2174
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TITS.2018.2816935
Keywords
Vehicle-to-grid (V2G); electric vehicle (EV); microgrid; Markov decision process (MDP); reinforcement learning (RL)
Funding
- Korean Government (MSIP) through the National Research Foundation (NRF) of Korea [2017R1E1A1A01073742]
- Basic Science Research Program through the NRF of Korea - Ministry of Education [2017R1A6A3A03006846]
Abstract
In a vehicle-to-grid (V2G) system, electric vehicles (EVs) can be efficiently used as power consumers and suppliers to achieve microgrid (MG) autonomy. Since EVs can act as energy transporters among different regions (i.e., MGs), deciding where and when EVs are charged or discharged is an important issue for achieving optimal performance in a V2G system. In this paper, we propose a mobility-aware V2G control algorithm (MACA) that considers the mobility of EVs, the states of charge of EVs, and the estimated/actual demands of MGs, and then determines charging and discharging schedules for EVs. To optimize the performance of MACA, a Markov decision process problem is formulated and the optimal charging/discharging policy is obtained by a value iteration algorithm. Since the mobility of EVs and the estimated/actual demand profiles of MGs may not be easily obtained, a reinforcement learning approach is also introduced. Evaluation results demonstrate that MACA with the optimal and learning-based policies can effectively achieve MG autonomy and provide higher charging satisfaction.
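The value-iteration step mentioned in the abstract can be illustrated with a toy sketch. Note that the state space, actions, transition rule, and prices below are invented for illustration only and do not reproduce the paper's MDP formulation (which also models EV mobility and MG demand):

```python
# Toy value-iteration sketch for an EV charging/discharging MDP.
# States: EV state of charge (SoC) levels 0 (empty) .. 4 (full).
# Actions: charge, idle, discharge. All rewards/prices are assumed values.

GAMMA = 0.9                                 # discount factor
N_SOC = 5                                   # number of SoC levels
ACTIONS = {"charge": +1, "idle": 0, "discharge": -1}
CHARGE_COST, DISCHARGE_REVENUE = 1.0, 1.5   # illustrative prices

def step(soc, action):
    """Deterministic toy transition: returns (next_soc, reward)."""
    nxt = min(max(soc + ACTIONS[action], 0), N_SOC - 1)
    if action == "charge" and soc < N_SOC - 1:
        return nxt, -CHARGE_COST            # pay to charge
    if action == "discharge" and soc > 0:
        return nxt, DISCHARGE_REVENUE       # earn by supplying the grid
    return nxt, 0.0                         # idle, or infeasible action

def value_iteration(tol=1e-6):
    """Iterate the Bellman optimality backup until values converge."""
    V = [0.0] * N_SOC
    while True:
        V_new = [max(step(s, a)[1] + GAMMA * V[step(s, a)[0]]
                     for a in ACTIONS)
                 for s in range(N_SOC)]
        if max(abs(u - v) for u, v in zip(V, V_new)) < tol:
            break
        V = V_new
    # Greedy policy extraction from the converged value function.
    policy = [max(ACTIONS,
                  key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
              for s in range(N_SOC)]
    return V, policy

V, policy = value_iteration()
print(policy)
```

With these assumed prices the greedy policy charges at the empty state and discharges otherwise; the paper's actual policy additionally depends on EV mobility and MG demand, which this sketch omits.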