May 7 at 6:57 PM
AI infrastructure constraints are shifting from GPU availability to physical deployment limits, with power becoming the primary bottleneck. Data center utilization is tightening rapidly: occupancy is expected to exceed ~95% by year-end, leaving too little spare power allocation to deploy new GPU racks.
Three core constraints are converging:
Power: U.S. data centers already use ~4% of electricity, trending toward ~6% by year-end, with demand projected +165% by 2030 (Goldman Sachs).
Cooling: Air cooling caps near ~35kW per rack, while $NVDA Blackwell systems require ~100–140kW, accelerating liquid cooling adoption.
Interconnects: Scaling hyperscale clusters increases reliance on high-bandwidth optical networks across facilities.
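The cooling gap above can be made concrete with a back-of-envelope calculation. This is a sketch using the post's approximate figures (35kW air-cooled ceiling, 140kW at the top of the cited Blackwell range), not vendor specifications:

```python
# Illustrative figures from the post, not vendor specs.
AIR_COOLING_CAP_KW = 35    # approx. practical ceiling for an air-cooled rack
BLACKWELL_RACK_KW = 140    # upper end of the ~100-140kW range cited above

def racks_per_megawatt(rack_kw: float, budget_mw: float = 1.0) -> int:
    """Whole racks that fit within a power budget of `budget_mw` megawatts."""
    return int(budget_mw * 1000 // rack_kw)

# Same 1MW budget, very different rack counts:
air_racks = racks_per_megawatt(AIR_COOLING_CAP_KW)       # 28 racks at air-cooled density
blackwell_racks = racks_per_megawatt(BLACKWELL_RACK_KW)  # 7 racks at full Blackwell draw
print(air_racks, blackwell_racks)
```

The point: a facility sized for air-cooled densities supports roughly 4x fewer Blackwell-class racks per megawatt, which is why liquid cooling retrofits, not floor space, gate deployment.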
Key exposed names include $BE, $NEE, $VRT, and $ANET, each positioned in a different layer of the infrastructure stack.