Volume 60

Optimal Dispatch of Integrated Energy Systems Based on Deep Reinforcement Learning

Daqing Kuang, Yingjun Ruan, Hua Meng, Tingting Xu, Yuting Yao, Chaoliang Wang, Wei Liu

https://doi.org/10.46855/energy-proceedings-11922

Abstract

Integrated Energy Systems (IES) play a crucial role in promoting multi-energy complementarity and enhancing overall energy utilization efficiency. However, the intermittency of renewable energy sources and the stochastic nature of load demand pose significant challenges to system dispatch. To address these issues, this study proposes an economic dispatch method for renewable-integrated IES based on Deep Reinforcement Learning (DRL). The system components, including internal combustion engines, absorption chillers, and battery energy storage systems, are modeled, and the dispatch problem is formulated as a Markov Decision Process (MDP). Two DRL algorithms, Deep Q-Network (DQN) and Twin Delayed Deep Deterministic Policy Gradient (TD3), are employed to train optimal control strategies. Experimental results demonstrate that both algorithms achieve efficient system dispatch, with TD3 exhibiting superior convergence speed and overall performance. The proposed approach does not rely on explicit uncertainty modeling and can adaptively respond to fluctuations in renewable generation and load demand, thereby improving the economic efficiency and robustness of system operation.
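To make the MDP formulation described in the abstract concrete, the sketch below shows one way an IES dispatch environment and a TD3 agent could be wired together, assuming the gymnasium and stable-baselines3 libraries. Every component model, capacity, bound, and price signal in the code is an illustrative assumption chosen for readability, not a value or design taken from the paper.

```python
# Minimal sketch: IES economic dispatch cast as a gymnasium environment and
# trained with TD3. All component models, bounds, and prices are illustrative
# assumptions, not parameters from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class IESDispatchEnv(gym.Env):
    """Toy integrated-energy-system dispatch environment (hypothetical)."""

    def __init__(self, horizon: int = 24):
        super().__init__()
        self.horizon = horizon
        # State: [hour, renewable output (kW), electric load (kW), battery SOC]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([24.0, 500.0, 800.0, 1.0], dtype=np.float32),
        )
        # Action: [engine setpoint in [-1, 1], battery charge(-)/discharge(+) in [-1, 1]]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.t = 0
        self.soc = 0.5

    def _obs(self):
        # Simple sinusoidal stand-ins for renewable generation and load profiles.
        renewable = 250.0 + 150.0 * np.sin(2 * np.pi * self.t / self.horizon)
        load = 500.0 + 200.0 * np.sin(2 * np.pi * (self.t - 6) / self.horizon)
        return np.array([self.t, max(renewable, 0.0), max(load, 0.0), self.soc],
                        dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.soc = 0, 0.5
        return self._obs(), {}

    def step(self, action):
        hour, renewable, load, _ = self._obs()
        engine_kw = 400.0 * (float(action[0]) + 1.0) / 2.0   # internal combustion engine output
        battery_kw = 100.0 * float(action[1])                # + discharge, - charge
        self.soc = float(np.clip(self.soc - battery_kw / 400.0, 0.0, 1.0))
        grid_kw = load - renewable - engine_kw - battery_kw  # residual bought from the grid
        price = 0.12 if 8 <= hour <= 20 else 0.06            # $/kWh, illustrative tariff
        cost = 0.10 * engine_kw + price * max(grid_kw, 0.0)  # fuel cost + grid purchase
        reward = -cost                                       # minimize operating cost
        self.t += 1
        terminated = self.t >= self.horizon
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    env = IESDispatchEnv()
    model = TD3("MlpPolicy", env, learning_rate=1e-3, verbose=0)
    model.learn(total_timesteps=10_000)  # short run, for illustration only
```

Note that DQN would require discretizing the two continuous setpoints into a finite action set, whereas TD3 acts directly on the continuous actions shown here, which is one reason the two algorithms are compared in the paper.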

Keywords: integrated energy system, DQN, TD3, economic dispatch

Copyright © Energy Proceedings