Grid Resilience: New Generation of Grid Emergency Control Technology

Published 2 December 2021

Grid operators face big challenges and big opportunities when it comes to managing through emergency conditions that disrupt power service. The increasing number of power outages in the United States cost an estimated $30-50 billion and affect millions of customers each year. A real-time adaptive system can safeguard the grid against costly disruptions.

The challenge and the opportunity both lie in optimizing power system responses when the unexpected happens. Optimization can minimize the effects of these events.

Researchers at Pacific Northwest National Laboratory (PNNL) are collaborating with partners at Google Research, PacifiCorp, and V&R Energy to develop a real-time adaptive emergency control system to safeguard the grid against costly disturbances from extreme weather and other disruptive events. The technology significantly improves on existing methods, which require grid operators to rely on offline studies to determine appropriate system responses during real events.

However, these events do not always unfold as we expect, and grid conditions can change in fractions of a second. Some online tools that current standards consider to operate in “real time” can trail actual events happening in the system by 5 to 15 minutes.

The scalable High-Performance Adaptive Deep-Reinforcement-Learning-based Real-Time Emergency Control (HADREC) platform—being further developed and tested under a three-year investment from the Department of Energy’s Advanced Research Projects Agency–Energy (ARPA-E)—uses a type of artificial intelligence (AI) called deep reinforcement learning, alongside high-performance computing, to automate decision-making and system responses within seconds of a disturbance.

Deep reinforcement learning improves on conventional reinforcement learning in its ability to scale and to quickly and effectively apply learned patterns to a real event’s unforeseen problems across thousands of system assets. Initial results show the HADREC technology will help reduce system reaction time 60-fold and improve system recovery time by at least 10%. This helps prevent cascading disruptions, allowing more efficient and resilient grid operation.
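To make the idea concrete, the sketch below shows the basic shape of a deep-reinforcement-learning control loop: a trained neural network maps grid observations (voltages, frequency deviation) to an emergency control action in milliseconds, rather than waiting on an offline study. Everything here is a hypothetical toy, with random untrained weights standing in for a trained policy; it is not the HADREC implementation, and the action set, observation features, and network shape are illustrative assumptions only.

```python
import numpy as np

# Hypothetical discrete emergency actions a policy might choose among.
ACTIONS = ["no_op", "shed_5pct_load", "shed_10pct_load"]

rng = np.random.default_rng(0)
# A tiny two-layer network standing in for a trained deep Q-network.
# In a real system these weights would come from training, not random init.
W1 = rng.normal(scale=0.1, size=(4, 16))
W2 = rng.normal(scale=0.1, size=(16, len(ACTIONS)))

def q_values(obs):
    """Forward pass: observation vector -> estimated value of each action."""
    hidden = np.tanh(obs @ W1)
    return hidden @ W2

def select_action(obs):
    """Greedy policy: take the action with the highest estimated value."""
    return ACTIONS[int(np.argmax(q_values(obs)))]

# Illustrative observation:
# [min bus voltage (p.u.), mean voltage (p.u.), freq deviation (Hz), load level]
obs = np.array([0.92, 0.98, -0.3, 1.05])
print(select_action(obs))
```

The point of the sketch is the latency model: once trained, choosing an action is a single cheap forward pass, which is what allows decisions within seconds of a disturbance instead of minutes.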

A Three-Year Plan Toward Real-World System Demonstration
The project’s collaborators are realizing the benefits of combining diverse perspectives and expertise from all angles of the problem while working efficiently toward a solution. During year one, the team established performance methods and benchmarks for the HADREC algorithms and began testing them on a mock system the size of the Texas grid. Once satisfied with algorithm performance, they moved testing to a larger, more realistic system.