Volume 47

Hierarchical Eco-Driving Based on Safety Off-line Reinforcement Learning for P2-P3 Hybrid Electric Truck

Yahui Zhao, Zhong Wang, Yahui Zhang, Yunfeng Song



With the rapid evolution of intelligent transportation systems (ITS) and network technology, vehicles have access to richer traffic data, paving the way for more efficient driving control. A novel hierarchical eco-driving strategy is proposed, tailored to hybrid electric trucks navigating complex multi-intersection scenarios. First, a simulation scene is designed to reproduce realistic truck-following behavior. Next, an upper-layer truck-following strategy is devised using the safe off-line deep deterministic policy gradient (SDDPG) algorithm, which makes full use of information from the leading vehicle and traffic signals. Specifically, a logical judgment module that enforces safety constraints is integrated into the training process to minimize collision risk, and a safety-oriented reward function is designed to direct the agent toward safer actions. In the lower layer, an energy management strategy is developed using deep reinforcement learning (DRL), with a dedicated reward-shaping function introduced to guide the learning process effectively. Simulation results show that the proposed method achieves 97.46% of the fuel-saving performance of the dynamic programming (DP) benchmark.
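As a rough illustration of the safety logical-judgment idea in the upper layer (not the authors' implementation, which is not detailed in the abstract), one common pattern is a filter that overrides the learned acceleration whenever the predicted gap to the leading vehicle would fall below a safe following distance. All parameter names, values, and the constant-headway safety rule below are illustrative assumptions.

```python
def safe_follow_action(a_rl, gap, v_ego, v_lead,
                       t_headway=1.5, d_min=5.0, a_brake=-3.0, dt=1.0):
    """Override the RL acceleration if the one-step-ahead gap is unsafe.

    a_rl:          acceleration proposed by the DDPG actor (m/s^2)
    gap:           current distance to the leading vehicle (m)
    v_ego, v_lead: ego-truck / leading-vehicle speeds (m/s)
    t_headway, d_min, a_brake, dt: illustrative safety parameters
    """
    # Constant-headway safe distance: grows with ego speed
    d_safe = d_min + t_headway * v_ego
    # Predict the gap after one control step under the proposed action,
    # assuming the leader holds its current speed
    gap_next = gap + (v_lead - v_ego) * dt - 0.5 * a_rl * dt ** 2
    if gap_next < d_safe:
        return a_brake  # unsafe: fall back to a fixed braking action
    return a_rl         # safe: pass the learned action through
```

A filter of this kind can be applied both during training (so the replay buffer only contains safe transitions) and at deployment, which is one way a collision-avoidance constraint can be embedded into the learning loop.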

Keywords: hybrid electric truck, truck-following, SDDPG, energy management strategy

Copyright © Energy Proceedings