Abstract
Various methods have been developed for the co-simulation of occupant behavior and thermal comfort. However, these approaches often present a steep learning curve for beginners due to the complex and labor-intensive coding required for real-time data exchange between multiple simulation tools (e.g., EnergyPlus, MATLAB, and CFD). Recent advances in large language models (LLMs) have demonstrated promising code generation capabilities, yet their potential in domain-specific co-simulation tasks remains underexplored. To address this gap, this study proposes an LLM-based code generation framework to facilitate the coupling of occupant behavior modeling with thermal comfort simulation. Given the collaborative nature of such coding tasks, a Generative Pre-trained Transformer (GPT) assumes multiple roles, including those of an "Analyst", a "Coder", and a "Debugger", each responsible for specific subtasks. A comprehensive co-simulation experiment was conducted to evaluate LLMs in terms of "learning and understanding", "imitating and coding", and "testing and debugging". A general workflow is also provided to guide prompt engineering in efficiently tackling such coding tasks. Results indicate that LLMs can accomplish most coding tasks after several rounds of prompt iteration. Simple coding tasks are completed with minimal guidance, while more advanced tasks require collaboration between experts and LLMs to save time. This study also discusses the advantages and limitations of applying LLMs to building co-simulation and highlights their potential in transforming conventional workflows.
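The role division described above can be illustrated with a minimal sketch. The role names ("Analyst", "Coder", "Debugger") follow the abstract; the prompt templates, function name, and task wording below are hypothetical placeholders, not the framework's actual prompts or API.

```python
# Hypothetical sketch of the Analyst/Coder/Debugger task division.
# Role names come from the abstract; all templates and names are assumed.

ROLE_PROMPTS = {
    "Analyst": "Break this co-simulation task into subtasks: {task}",
    "Coder": "Write code implementing this subtask: {subtask}",
    "Debugger": "Fix this code given the error log:\n{code}\n{error}",
}

def build_prompt(role: str, **fields: str) -> str:
    """Fill the role-specific template before sending it to an LLM."""
    return ROLE_PROMPTS[role].format(**fields)

# Example: an Analyst prompt for an EnergyPlus-MATLAB data exchange task.
prompt = build_prompt(
    "Analyst",
    task="exchange zone temperature between EnergyPlus and MATLAB each timestep",
)
```

In a real workflow each prompt would be sent to an LLM in turn, with the Coder consuming the Analyst's subtasks and the Debugger consuming the Coder's output plus runtime errors, iterating until the co-simulation runs.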
Keywords Large Language Models, task division, code generation, GPT, building co-simulation, prompt engineering
Copyright © Energy Proceedings