The Smart Energy Race for AI
Aligning compute capacity, energy availability, and carbon trajectory has become a weekly discipline. Three strong signals — Ericsson/Forschungszentrum Jülich, Crusoe/Redwood/Form Energy, and Google/Sunraycer — show that organizations can no longer treat AI performance and energy strategy as separate tracks. They must be orchestrated together.
What changed this week
The core question is no longer “how many GPUs can we buy?” but “what energy can we reliably mobilize to power them at scale?”
For infrastructure and cloud operations leaders, priorities are now clear:
- secure a sovereign HPC backbone for strategic workloads,
- industrialize modular microgrid blocks for local flexibility,
- lock in power volume and price through large-scale PPAs.
Jupiter: Europe’s sovereign HPC signal
Ericsson’s MoU with Forschungszentrum Jülich around Jupiter, Europe’s first exascale system, is more than a research headline. It signals that critical AI workloads (6G RAN models, network optimization, advanced inference) need sovereign, schedulable compute capacity.
Operationally, teams should now define:
- which workloads stay in public cloud,
- which should move to EuroHPC-type capacity,
- how to orchestrate hybrid execution without energy cost drift.
Crusoe + Redwood/Form Energy: microgrids moving beyond pilots
Crusoe’s 12 MW / 63 MWh pilot with Redwood Materials (99.2% availability over seven months), followed by a scale-up, validates a key point: repurposed-battery microgrids are becoming operational infrastructure, not just experiments.
This approach can provide:
- local buffers to smooth AI demand peaks,
- stronger resilience under grid constraints,
- faster ramp-up options for selected sites.
It is not a full grid replacement, but a major continuity amplifier.
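To make the availability figure concrete: a minimal sketch of what 99.2% availability over seven months implies in cumulative downtime. The month length is an assumption (an average of 30.44 days); the function name is illustrative, not from any cited source.

```python
def downtime_hours(availability: float, period_hours: float) -> float:
    """Cumulative downtime implied by an availability ratio over a period."""
    return (1.0 - availability) * period_hours

# Assumption: seven months at an average of 30.44 days per month.
seven_months = 7 * 30.44 * 24  # ~5,114 hours
print(round(downtime_hours(0.992, seven_months), 1))
```

At that availability level, the pilot would have seen roughly 41 hours of downtime across the whole period, which is the kind of figure worth comparing against each site’s continuity requirements.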
Google + Sunraycer: the return of large-scale PPAs
The new large solar agreements confirm that competitive advantage is increasingly negotiated in power contracts, not only in IT architecture.
For energy-intensive AI programs, PPAs are becoming governance tools for:
- price visibility,
- volume security,
- tighter carbon trajectory control,
- better capacity planning confidence.
Actionable playbook for infra/ops teams
1) Segment AI workloads by energy criticality
Separate tolerant workloads (batch, deferred training) from sensitive ones (near-real-time inference, business-critical SLAs).
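The segmentation above can be sketched as a simple classification rule. This is an illustrative model, not a real scheduler API: the `Workload` fields, thresholds, and tier names are all assumptions to show the shape of the decision.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    name: str
    latency_slo_ms: Optional[float]  # None = no near-real-time SLO
    deferrable_hours: float          # how long execution can be postponed

def energy_tier(w: Workload) -> str:
    """Classify a workload by how much energy flexibility it offers."""
    if w.latency_slo_ms is not None and w.latency_slo_ms < 1000:
        return "firm"         # near-real-time inference: needs firm power
    if w.deferrable_hours >= 4:
        return "flexible"     # batch/deferred training: can follow supply
    return "intermediate"

jobs = [
    Workload("chat-inference", latency_slo_ms=200, deferrable_hours=0),
    Workload("nightly-training", latency_slo_ms=None, deferrable_hours=12),
]
for j in jobs:
    print(j.name, energy_tier(j))
```

The useful output of this exercise is not the code itself but the inventory: once every workload carries an explicit tier, capacity and energy decisions can be arbitrated per tier instead of per team.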
2) Design a 3-layer energy architecture
- primary grid layer,
- local flexibility layer (microgrids/storage),
- long-term contracting layer (PPAs).
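The first two layers can be illustrated with a toy dispatch calculation: serve demand from the grid first, cover the residual from local storage, and surface any shortfall. All parameters and the function name are hypothetical; a real dispatch model would also account for round-trip efficiency, ramp rates, and tariffs.

```python
def dispatch(demand_mw: float, grid_cap_mw: float,
             storage_mw: float, storage_soc_mwh: float,
             hours: float = 1.0) -> dict:
    """Split one interval of demand across grid and local storage (sketch)."""
    from_grid = min(demand_mw, grid_cap_mw)
    residual = demand_mw - from_grid
    # Storage discharge is limited by both power rating and state of charge.
    from_storage = min(residual, storage_mw, storage_soc_mwh / hours)
    shortfall = residual - from_storage
    return {"grid": from_grid, "storage": from_storage, "shortfall": shortfall}

# Example with the Crusoe-scale ratings: 12 MW power, 63 MWh capacity.
print(dispatch(demand_mw=15, grid_cap_mw=10, storage_mw=12, storage_soc_mwh=63))
```

Even this crude version makes the design point visible: the local flexibility layer only has to cover the gap between peak demand and firm grid capacity, not the full load.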
3) Set shared KPIs across IT and energy
Track together: effective MW availability, marginal GPU-hour cost, operational carbon intensity, and energy-constraint incidents.
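Two of these KPIs reduce to short formulas that IT and energy teams can compute from the same inputs. The figures below (GPU draw, PUE, price, grid carbon intensity) are placeholder assumptions for illustration only.

```python
def gpu_hour_energy_cost(gpu_kw: float, pue: float,
                         price_eur_per_mwh: float) -> float:
    """Marginal energy cost of one GPU-hour (EUR), ignoring amortization."""
    kwh = gpu_kw * pue  # facility-level energy per GPU-hour
    return kwh * price_eur_per_mwh / 1000.0

def gpu_hour_carbon(gpu_kw: float, pue: float,
                    grid_gco2_per_kwh: float) -> float:
    """Operational carbon per GPU-hour, in grams of CO2e."""
    return gpu_kw * pue * grid_gco2_per_kwh

# Placeholder inputs: 0.7 kW per GPU, PUE 1.2, 80 EUR/MWh, 300 gCO2/kWh.
print(gpu_hour_energy_cost(0.7, 1.2, 80))
print(gpu_hour_carbon(0.7, 1.2, 300))
```

Publishing these two numbers weekly, per site, is often enough to make energy constraints visible in the same dashboards as GPU utilization.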
4) Run weekly cross-functional governance
Install a short weekly cadence across infra, ops, finance, and energy leads to arbitrate capacity, workload priority, and grid risk.
What to avoid
- Running AI scaling without dedicated energy steering.
- Stacking cloud commitments without an electricity sourcing plan.
- Keeping IT decisions disconnected from energy and finance decisions.
Energy debt quickly becomes product debt.
Conclusion
The smart energy race is no longer theoretical. It is already an operational reality.
Organizations that combine sovereign HPC, modular microgrids, and structured PPAs today will stay ahead of the AI power curve. Others may discover too late that their bottleneck was not compute, but energy.