GRID RESILIENCE

Bringing GPT to the Grid

By Leah Burrows

Published 20 June 2024

Much has been discussed about the promise and limitations of large language models (LLMs) in industries such as education, healthcare and even manufacturing. But what about energy? Could LLMs, like those that power ChatGPT, help run and maintain the energy grid?

New research, co-authored by Na Li, Winokur Family Professor of Electrical Engineering and Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), suggests that LLMs could play an important role in co-managing some aspects of the grid, including emergency and outage response, crew assignments, and wildfire preparedness and prevention. But security and safety concerns need to be addressed before LLMs can be deployed in the field.

“There is so much hype with large-language models, it’s important for us to ask what LLMs can do well and, perhaps more importantly, what they can’t do well, at least not yet, in the power sector,” said Le Xie, Professor of Electrical & Computer Engineering at Texas A&M University and corresponding author of the study.  “The best way to describe the potential of LLMs in this sector is as a co-pilot. It’s not a pilot yet — but it can provide advice, a second opinion, and very timely responses with very few training data samples, which is really beneficial to human decision making.”

The research is published in Joule.

The research team, which included engineers from Houston-based energy provider CenterPoint Energy and grid operator Midcontinent Independent System Operator, used GPT models to explore the capabilities of LLMs in the energy sector, and identified both strengths and weaknesses.

The strengths of LLMs, such as their ability to generate logical responses from prompts, learn from limited data, delegate tasks to embedded tools, and work with non-text data such as images, could be leveraged for tasks such as detecting broken equipment, forecasting electricity load in real time, and analyzing wildfire patterns for risk assessment.
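To make the "co-pilot" idea concrete, here is a minimal, hypothetical sketch of how a GPT model might be prompted for short-term load forecasting from only a handful of historical samples. The model choice, data values, and prompt wording are illustrative assumptions, not the prompts or methods used in the study.

```python
# Hypothetical sketch: few-shot load forecasting with a GPT model via the
# OpenAI chat completions API. The readings and prompt are illustrative;
# they do not reproduce the study's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A handful of (hour, load in MW) samples standing in for real grid telemetry.
history = [(10, 6120), (11, 6340), (12, 6510), (13, 6575), (14, 6620)]

prompt = (
    "You are assisting a grid operator. Given the recent hourly load "
    "readings below, forecast the load (in MW) for the next hour and state "
    "your confidence as a percentage. If you are unsure, say so.\n\n"
    + "\n".join(f"Hour {h}:00 -> {mw} MW" for h, mw in history)
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice of model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```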

But there are significant challenges to implementing LLMs in the energy sector, not the least of which is the lack of grid-specific data to train the models. For obvious security reasons, crucial data about the U.S. power system is not publicly available and cannot be made public. Another issue is the lack of safety guardrails: the power grid, like autonomous vehicles, needs to prioritize safety and incorporate large safety margins when making real-time decisions. LLMs also need to get better at providing reliable solutions and being transparent about their uncertainties, said Li.

“We want foundational LLMs to be able to say ‘I don’t know’ or ‘I only have 50% certainty about this response’, rather than give us an answer that might be wrong,” said Li. “We need to be able to count on these models to provide us with reliable solutions that meet specified standards for safety and resiliency.”

All of these challenges give engineers a roadmap for future work. 

“As engineers, we want to highlight these limitations because we want to see how we can improve them,” said Li. “Power system engineers can help improve security and safety guarantees by either fine-tuning the foundational LLM or developing our own foundational model for power systems. One exciting part of this research is that it is a snapshot in time. Next year or even sooner, we can go back and revisit all these challenges and see if there has been any improvement.”

Leah Burrows is Assistant Director of Communications at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). This article was originally posted on the SEAS website.