4.7 Article

Reinforced Deterministic and Probabilistic Load Forecasting via Q-Learning Dynamic Model Selection

Journal

IEEE Transactions on Smart Grid
Volume 11, Issue 2, Pages 1377-1386

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSG.2019.2937338

Keywords

Load forecasting; machine learning; reinforcement learning; probabilistic forecasting; dynamic model selection

Both deterministic and probabilistic load forecasting (DLF and PLF) are of critical importance to reliable and economical power system operations. However, most widely used statistical machine learning (ML) models are trained by optimizing global performance, without considering local behavior. This paper develops a two-step short-term load forecasting (STLF) model with Q-learning-based dynamic model selection (QMS), which provides reinforced deterministic and probabilistic load forecasts (DLFs and PLFs). First, a deterministic forecasting model pool (DMP) and a probabilistic forecasting model pool (PMP) are built from 10 state-of-the-art ML DLF models and 4 predictive distribution models. Then, at each time stamp, a Q-learning agent selects the locally best DLF model from the DMP to produce an enhanced DLF in the first step. Finally, that DLF is fed to the best PLF model, selected from the PMP by a second Q-learning agent, to perform PLF in the second step. Numerical simulations on two years of weather and smart meter data show that the developed STLF-QMS method improves DLF and PLF by 50% and 60%, respectively, compared to state-of-the-art benchmarks.
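To illustrate the idea of Q-learning-based dynamic model selection described in the abstract, the sketch below shows a minimal tabular Q-learning agent that, at each time stamp, picks one forecaster from a model pool. This is not the authors' implementation: the state definition (index of the previously best model), the reward (negative absolute error), the naive placeholder forecasters standing in for the paper's 10 ML models, and all names such as `QModelSelector` and `run_dlf_with_qms` are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-ins for the paper's deterministic model pool (DMP);
# the actual pool contains 10 trained ML forecasters.
MODEL_POOL = [
    lambda history: history[-1],             # persistence forecast
    lambda history: np.mean(history[-3:]),   # short moving average
    lambda history: np.mean(history[-24:]),  # daily-mean style baseline
]


class QModelSelector:
    """Tabular Q-learning agent that selects one model per time stamp.

    State  : index of the model that was best at the previous time stamp (assumed design).
    Action : index of the model to use for the current forecast.
    Reward : negative absolute forecast error of the chosen model (assumed design).
    """

    def __init__(self, n_models, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        self.q = np.zeros((n_models, n_models))  # Q[state, action]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng(seed)

    def select(self, state):
        # Epsilon-greedy choice of forecasting model.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state, action] += self.alpha * (target - self.q[state, action])


def run_dlf_with_qms(load_series, warmup=24):
    """Roll through a load series, letting the agent pick the locally best model."""
    agent = QModelSelector(len(MODEL_POOL))
    state, forecasts = 0, []
    for t in range(warmup, len(load_series) - 1):
        history, actual = load_series[:t], load_series[t]
        action = agent.select(state)
        forecasts.append(MODEL_POOL[action](history))

        # Smaller error -> larger (less negative) reward.
        errors = [abs(m(history) - actual) for m in MODEL_POOL]
        agent.update(state, action, -errors[action], int(np.argmin(errors)))
        state = int(np.argmin(errors))  # best model becomes the next state
    return np.array(forecasts)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hours = np.arange(24 * 60)
    synthetic_load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
    preds = run_dlf_with_qms(synthetic_load)
    print("Produced", preds.size, "one-step-ahead forecasts")
```

In the paper's two-step scheme, a second agent of the same kind would then choose among the 4 predictive distribution models in the PMP, taking the selected deterministic forecast as input to produce the probabilistic forecast.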
