Volume 63

Energy Consumption Analysis of Deep Reinforcement Learning

Zhenghao Yang, Xu Wan, Mingyang Sun

https://doi.org/10.46855/energy-proceedings-12183

Abstract

The widespread success of deep reinforcement learning (DRL) is increasingly challenged by its substantial energy footprint, which poses a significant obstacle to the development of green artificial intelligence (AI). While prior research has focused primarily on the energy efficiency of deep neural networks (DNNs), the distinctive energy consumption characteristics arising from DRL's interactive learning paradigm have been largely overlooked. This paper bridges this critical gap by introducing EnergyDRL, a fine-grained energy analysis framework that enables precise attribution of energy consumption across the entire DRL workflow by decomposing the training loop into functionally distinct stages. Energy monitoring modules based on EnergyDRL are deployed in comprehensive experiments on benchmark tasks, yielding a crucial but frequently overlooked insight: agent-environment interaction, rather than neural network updates, constitutes the primary energy bottleneck. Quantitative investigations further reveal that hyperparameters, model size, and the balance between sampling and updating each distinctly shape the energy consumption profile, providing the empirical foundation for the proposed strategies for developing energy-efficient DRL algorithms.
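The stage-wise decomposition described in the abstract can be illustrated with a minimal sketch. This is not the authors' EnergyDRL implementation; it is a hypothetical per-stage monitor that accumulates wall-clock time for the interaction and update stages of a DRL loop (a real framework would sample device power, e.g. via NVML, and integrate it over each stage's duration to obtain energy rather than time):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class StageMonitor:
    """Accumulates wall-clock time per training stage as a proxy for
    per-stage energy attribution. Stage names are illustrative."""

    def __init__(self):
        self.totals = defaultdict(float)

    @contextmanager
    def stage(self, name):
        # Time the enclosed block and attribute it to `name`.
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start

def train_step(monitor):
    # Hypothetical DRL training step decomposed into two stages.
    with monitor.stage("interaction"):
        time.sleep(0.002)  # placeholder for env.step / action selection
    with monitor.stage("update"):
        time.sleep(0.001)  # placeholder for a gradient update

monitor = StageMonitor()
for _ in range(10):
    train_step(monitor)

# Per-stage totals reveal where the loop spends its budget.
print({k: round(v, 3) for k, v in monitor.totals.items()})
```

Under this decomposition, comparing the per-stage totals is what surfaces findings such as interaction dominating the update stage.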

Keywords: deep reinforcement learning, energy consumption analysis, energy efficiency

Copyright © Energy Proceedings