Can Flexible Data Centers Fix the AI Energy Crisis? A New Santa Clara Pilot Aims to Find Out


A flexible data center is a computing facility that can dynamically adjust its power consumption in response to real-time electricity grid conditions, all without disrupting the critical AI workloads running inside it. Rather than drawing a constant, heavy load from the grid around the clock, these facilities use intelligent software to scale their energy use up or down during periods of peak demand or grid stress. It’s a concept that re-frames data centers not as rigid power consumers, but as responsive, grid-supportive assets.

In this article, we’ll discuss the newly announced pilot program between Silicon Valley Power (SVP) and Emerald AI, which aims to demonstrate exactly how flexible data centers can work at commercial scale in Santa Clara, California. We’ll explore the technology behind the pilot, including NVIDIA’s DSX Flex capability, the broader problem of surging AI power demand that makes this kind of innovation so urgent, and what the partnership could mean for the future of both AI infrastructure and energy grid management across the country.


TL;DR Snapshot

Silicon Valley Power, the municipally owned electric utility serving Santa Clara, has teamed up with Emerald AI to launch a pilot program that will test whether data centers can actively support the power grid by adjusting their energy consumption during peak periods. The pilot will run at a commercial, multi-megawatt data center where NVIDIA operates advanced AI workloads, and it leverages Emerald AI’s Conductor platform alongside NVIDIA DSX Flex software to orchestrate power flexibility without sacrificing computing performance.

Key takeaways include…

  • Silicon Valley Power and Emerald AI are piloting grid-responsive data center technology using NVIDIA’s cutting-edge AI infrastructure.
  • The Emerald AI Conductor platform can intelligently shift, delay, or redistribute AI workloads to reduce a facility’s power draw during grid stress events, while still protecting the performance of priority computing tasks.
  • If successful, the pilot could help unlock additional power capacity across SVP’s network, offering a practical blueprint for utilities and data center operators nationwide who are grappling with surging AI-driven electricity demand.

Who should read this: Data center operators, energy and utility professionals, AI infrastructure leaders, and sustainability-focused technologists.


The Problem: AI’s Insatiable Appetite for Power

The explosive growth of artificial intelligence is creating an energy challenge that few saw coming at this speed. According to the Belfer Center at Harvard, the Lawrence Berkeley National Laboratory predicts that U.S. data center electricity demand could grow from 176 terawatt-hours in 2023 to as much as 580 terawatt-hours by 2028. That’s a potential jump from roughly 4.4% to 12% of all U.S. electricity consumption in just five years.

And the strain is already being felt at the local level. As Fortune recently reported, Emerald AI CEO Varun Sivaram noted that data centers historically accounted for less than 5% of grid demand but are now on a trajectory toward 25% of American power supply within a decade. Grid interconnection queues are another bottleneck: according to an ITIF analysis, projects approved in 2025 in some regions had been waiting in the queue for eight years. That timeline simply doesn’t match the pace at which companies want to deploy AI.

The core tension here is straightforward: AI can scale rapidly, but the grid can’t. New power plants and transmission lines take years (or decades) to build. In the meantime, communities risk higher electricity bills, strained infrastructure, and potential reliability issues. That tension is precisely what the SVP and Emerald AI pilot is designed to address.

How the Pilot Works: Software-Driven Flexibility

At the heart of this partnership is Emerald AI’s Conductor platform, a software system that acts as an intelligent interface between the power grid and AI computing infrastructure. According to the City of Santa Clara’s official announcement, SVP will deploy Emerald AI software to manage and dispatch participating flexible data centers during limited periods of grid need. The first pilot site will operate at a commercial, multi-megawatt data center where NVIDIA runs cutting-edge AI workloads on advanced GPUs.

[Illustration: an electric transmission tower and a data center connected by curved green arrows, symbolizing the dynamic energy exchange between power grids and AI data centers.]

The Conductor platform works by orchestrating thousands of individual AI workloads across a data center. Some AI tasks, such as model training or batch inference jobs, can tolerate being briefly paused, slowed, or shifted to another time or location without meaningful impact on the end user. The platform identifies these opportunities in real time and uses them to reduce the facility’s power draw precisely when the grid needs relief, such as during a summer peak load event or an unexpected supply shortfall.
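The selection logic described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not Emerald AI’s actual Conductor implementation; it assumes jobs are tagged with a power draw, a priority, and a deferrable flag, and that the grid operator has requested a kilowatt reduction target:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    power_kw: float   # current power draw of the job
    deferrable: bool  # can the job be paused or shifted without user impact?
    priority: int     # lower values = more critical to keep running

def plan_curtailment(workloads, target_reduction_kw):
    """Greedily pause the least-critical deferrable jobs until the
    requested power reduction is met."""
    candidates = sorted(
        (w for w in workloads if w.deferrable),
        key=lambda w: -w.priority,  # least critical first
    )
    to_pause, shed_kw = [], 0.0
    for w in candidates:
        if shed_kw >= target_reduction_kw:
            break
        to_pause.append(w)
        shed_kw += w.power_kw
    return to_pause, shed_kw

# Hypothetical job mix: training and batch inference can wait,
# real-time inference must keep running.
jobs = [
    Workload("llm-training-batch", 400.0, deferrable=True, priority=5),
    Workload("nightly-batch-inference", 250.0, deferrable=True, priority=4),
    Workload("realtime-chat-inference", 300.0, deferrable=False, priority=1),
]
paused, shed = plan_curtailment(jobs, target_reduction_kw=500.0)
print([w.name for w in paused], shed)  # both deferrable jobs pause, shedding 650.0 kW
```

A production system would throttle, checkpoint, or migrate jobs rather than simply pause them, and would restore workloads automatically once the grid event clears.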

This approach is tightly integrated with NVIDIA’s DSX Flex capability, a software library built into NVIDIA’s data center reference architecture that enables AI factories to connect to power-grid services. Together, the Emerald AI and NVIDIA technologies allow the data center to respond to SVP’s signals with precision, reducing consumption during critical hours while ensuring that priority workloads keep running without degradation.

Emerald AI has already demonstrated this concept in practice. According to the company’s launch announcement, a previous commercial demonstration showed that an AI compute cluster could reduce power consumption by 25% for three hours during a simulated grid stress event while maintaining acceptable performance. More recent demonstrations in London showed electricity demand dropping by more than a third in under a minute, according to reporting from Pulse2.
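To put the demonstration figures above in context, here is some back-of-the-envelope energy arithmetic for a 25% reduction sustained over three hours. The 10 MW facility size is an assumption for illustration; the sources say only “multi-megawatt”:

```python
# Rough energy arithmetic for the demonstration figures cited above.
facility_mw = 10.0          # assumed cluster draw (hypothetical)
reduction_fraction = 0.25   # 25% reduction, as reported
duration_hours = 3.0        # over a three-hour stress event

shed_mw = facility_mw * reduction_fraction
energy_saved_mwh = shed_mw * duration_hours
print(shed_mw, energy_saved_mwh)  # 2.5 MW shed, 7.5 MWh of peak demand avoided
```

Even at this modest assumed scale, that is several megawatt-hours of peak-hour relief from a single facility, which is exactly the kind of headroom utilities value most.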

Why Santa Clara Is the Ideal Testing Ground

Santa Clara isn’t a random choice for this pilot. The city sits at the geographic and economic center of Silicon Valley and is home to major tech companies like NVIDIA, Intel, Applied Materials, and AMD. Its municipal utility, Silicon Valley Power, has been serving customers since 1896 and is the only full-service, vertically integrated publicly owned utility in Silicon Valley, owning its own generation, transmission, and distribution assets, according to the city’s press release.

This vertically integrated structure gives SVP an unusual degree of control and agility compared to investor-owned utilities. It can move faster on pilot programs, set its own rate structures, and make decisions that prioritize local community benefit. SVP already provides power at rates 33 to 57 percent below neighboring communities and serves over 60,000 customers, making affordability a core part of its mission.

The pilot also reflects a broader strategic vision for the city. As Santa Clara City Manager Jovan Grogan noted in the official announcement, the effort should help strengthen the city’s power system for the future. For a city that’s already a magnet for AI investment, demonstrating that its grid can accommodate growing data center demand without compromising reliability or affordability is a significant competitive advantage.

What This Means for the Future of AI Infrastructure

[Illustration: a city skyline with a data center beneath a connecting power plug and socket, symbolizing new energy capacity unlocked through data center and grid partnership.]

The SVP and Emerald AI pilot is part of a much larger movement to re-imagine the relationship between data centers and the power grid. At CERAWeek 2026 in Houston, NVIDIA and Emerald AI announced partnerships with major energy companies including AES, Constellation, NextEra Energy, Invenergy, and Vistra to develop a new class of flexible AI factories, as reported by Axios. These facilities are designed to connect to the grid faster, generate valuable AI output, and operate as flexible energy assets that can support grid reliability.

The long-term potential is significant. If flexible data centers can reliably reduce their power draw during the relatively few hours of peak grid demand each year, they could unlock substantial additional capacity from the existing grid without requiring expensive new infrastructure. According to Fortune, the longer-term goal is for power-flexible AI factories to unlock up to 100 gigawatts of extra grid capacity from the existing U.S. power grid, enough to power roughly 75 million homes.
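A quick sanity check on those Fortune figures, using the rough rule of thumb that an average U.S. home draws a little over 1 kW of continuous power:

```python
# Sanity check: 100 GW of unlocked capacity vs. 75 million homes.
unlocked_gw = 100.0
homes = 75_000_000

kw_per_home = unlocked_gw * 1_000_000 / homes  # convert GW to kW, divide by homes
print(round(kw_per_home, 2))  # roughly 1.33 kW average draw per home
```

That works out to about 1.33 kW per home, which is consistent with typical average U.S. household consumption, so the two figures hang together.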

For utilities, this represents a paradigm shift. Rather than viewing data centers as threats to grid stability, they can become partners. As Emerald AI’s Sivaram put it in an interview with Data Center Dynamics, flexible data centers could eventually benefit from faster grid interconnection, jumping ahead in queues in much the same way that battery storage systems earn priority access for their role in stabilizing the grid. If the Santa Clara pilot succeeds, it could become a model that other municipalities and utilities across the country look to replicate.


Frequently Asked Questions

What is Silicon Valley Power?

Silicon Valley Power (SVP) is a not-for-profit municipal electric utility owned and operated by the City of Santa Clara, California. Established in 1896, it’s the only full-service, vertically integrated publicly owned utility in Silicon Valley, meaning it owns its own power generation, transmission, and distribution infrastructure. SVP serves over 60,000 residential and business customers, including major technology companies like NVIDIA, Intel, and Applied Materials, at rates significantly below those of neighboring communities.

What is Emerald AI?

Emerald AI is an energy technology startup founded by Dr. Varun Sivaram that specializes in data center flexibility management. The company’s flagship product, the Emerald Conductor platform, uses AI-driven software to orchestrate computing workloads inside data centers so they can dynamically adjust their power consumption in response to grid conditions. Emerald AI has raised approximately $68 million in total funding, with investors including NVIDIA’s venture capital arm (NVentures), Energy Impact Partners, Radical Ventures, Salesforce Ventures, and Samsung, among others.

What is NVIDIA’s role in the pilot?

NVIDIA is a global technology company and the world’s leading designer of graphics processing units (GPUs), which have become the essential hardware for powering AI workloads. Originally known for its graphics chips used in gaming, NVIDIA has evolved into the dominant supplier of the accelerated computing platforms that train and run AI models at data centers around the world. The company is headquartered in Santa Clara, California, and is one of Silicon Valley Power’s major customers. In this pilot, NVIDIA’s AI workloads running on advanced GPUs serve as the first test site, and its DSX Flex software provides the technical capability that allows the data center to respond to grid signals without disrupting computing performance.

What is NVIDIA DSX Flex?

NVIDIA DSX Flex is a software library included in NVIDIA’s Vera Rubin DSX AI Factory reference design. It enables AI data centers (or “AI factories”) to connect to power-grid services and adjust their energy usage in real time based on grid conditions. When paired with Emerald AI’s Conductor platform, DSX Flex allows a data center to precisely reduce its power consumption during periods of peak grid stress while protecting the performance of priority AI workloads.

What is a flexible data center?

A flexible data center is a computing facility that can dynamically scale its electricity consumption up or down in response to signals from the power grid. Instead of operating as a constant, high-demand load, a flexible data center uses software to identify which workloads can be briefly delayed, slowed, or redistributed, and it adjusts its power draw accordingly. This capability is especially valuable during the few hours each year when electricity demand peaks, helping to avoid grid strain without building expensive new power infrastructure.

What is grid interconnection?

Grid interconnection is the process by which a new electricity-consuming facility (like a data center) or a power generator connects to the existing electrical grid. It involves engineering studies, regulatory approvals, and infrastructure upgrades to ensure the grid can handle the additional load safely and reliably. In many parts of the United States, the interconnection queue has become a major bottleneck, with wait times stretching to several years or more, which significantly delays the deployment of new AI infrastructure.

What is a municipal utility?

A municipal utility is an electric utility that’s owned and operated by a local government (such as a city or town) rather than a private, investor-owned corporation. Municipal utilities are typically run on a not-for-profit basis, which often allows them to offer lower electricity rates to their customers. Because they answer to local elected officials rather than shareholders, municipal utilities can also be more responsive to community priorities around affordability, reliability, and sustainability.


Other Enterprise AI Articles You May Be Interested In

What 3DIC Is and Why It Matters for AI Chips: Alchip’s New Platform Explained

OpenAI’s GPT-Rosalind: A New AI Model Purpose-Built for Life Sciences Research

Claude Opus 4.7: Everything You Need to Know About Anthropic’s Latest AI Model

What Is Composable AI Decisioning? GrowthLoop’s New Platform Explained

Adobe’s Firefly AI Assistant: A New Era of Agentic Creativity