Abstract
The optimal control of chilled water (CHW) plants is critical for enhancing the energy efficiency of data centers. The effectiveness of traditional rule-based control (RBC) largely depends on the expertise of control engineers, while the application of model predictive control (MPC) is constrained by its dependence on extensive historical data and highly accurate models. Recently, reinforcement learning (RL)-based control has attracted increasing research attention owing to its adaptive control capability. However, RL-based control studies in the HVAC domain have focused mainly on air-side systems, with relatively limited exploration of CHW plants. To address this gap, this paper evaluates the application of the Deep Q-Network (DQN) algorithm for optimal control of the cooling water system in a CHW plant. A detailed simulation environment was developed using real operational data from a data center CHW plant, and the DQN algorithm was then assessed within this environment. The results show that, compared with RBC, DQN-based control achieved 15.4% monthly energy savings, closely approaching the 15.9% savings attained by MPC. Moreover, the control actions generated by DQN-based control converged toward patterns similar to those of MPC. These findings suggest that DQN-based optimal control holds strong potential to improve the energy performance of CHW plants that lack sufficient historical data.
Keywords: deep Q-network, optimal control, chilled water plant, energy saving
Copyright © Energy Proceedings