Middlegame Weekly
AI’s competitive frontier moved another layer down the stack this week. The market kept talking about models, agents, and accelerator roadmaps, but the cleaner signal was industrial: electricity, nuclear fuel, memory, cooling, land, construction, copper, helium, substations, classified networks. The physical world is no longer the background condition for AI deployment. It is the trade.
That changes the shape of the opportunity. The obvious winners (NVIDIA, TSMC, the hyperscalers, the large data center platforms) are still powerful, but the week's more useful map ran through the suppliers and bottleneck owners that make AI capacity real. GE Vernova, Vertiv, Equinix, Digital Realty, Micron, Marvell, Broadcom, Cameco, X-Energy, Oklo, Centrus, Prysmian, nVent, Quanta, Supermicro. Not all of them are clean AI stories. That is partly the point. The AI capex cycle is now large enough that it shows up in the earnings, order books, and financing narratives of companies built for power systems, cooling loops, metals, fuel, and dirt.
Power became the board-level constraint
The week’s center of gravity was power procurement. Meta’s reported plan to secure up to 6.6 gigawatts of nuclear power by 2035 made the strategic logic plain: the next large AI campuses do not work if electricity is intermittent, delayed, or repriced after the fact. A one-gigawatt supercluster is a utility-scale load with a model business attached.
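The "utility-scale load" framing can be made concrete with back-of-envelope arithmetic. A minimal sketch, where the PUE and utilization figures are illustrative assumptions rather than numbers from any of the week's reporting:

```python
# Back-of-envelope: annual energy draw of a 1 GW AI campus.
# PUE and utilization below are illustrative assumptions, not reported figures.
HOURS_PER_YEAR = 8760

def annual_energy_twh(it_load_gw: float, pue: float = 1.2, utilization: float = 0.9) -> float:
    """Total facility energy in TWh for a given IT load, PUE, and utilization."""
    return it_load_gw * pue * utilization * HOURS_PER_YEAR / 1000

print(round(annual_energy_twh(1.0), 2))  # ~9.46 TWh/year under these assumptions
```

Roughly nine terawatt-hours a year under these assumptions, which is why a one-gigawatt campus negotiates like a utility customer, not like a tenant.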
The nuclear market responded like a capital market that has found a new underwriting story. X-Energy’s $1.02 billion IPO priced above range, was reportedly 15 times oversubscribed, and gave the company a roughly $12 billion market cap. The contrast with its failed SPAC attempt in 2023 says more than the valuation itself. AI load has changed how investors price nuclear optionality. Amazon’s 5 GW commitment by 2039, Meta-linked campus discussions, NVIDIA’s Oklo and Los Alamos partnership, and TerraPower’s Wyoming construction push all point in the same direction: baseload power is being pulled into AI infrastructure planning, not kept as a separate energy-transition category.
The fuel chain matters too. Centrus Energy’s $900 million HALEU enrichment award and Cameco’s 22-million-pound uranium supply deal with India are reminders that reactor announcements are only the visible tip. If nuclear is going to function as AI’s clean baseload answer, uranium mining, conversion, enrichment, licensing, and fabrication become part of the compute supply chain.
The hard part is timing. Nuclear procurement is becoming credible earlier than nuclear deployment. That gap is where a lot of speculative heat can build. SMRs and microreactors are moving into AI pitch decks because grid queues are too long and clusters need 50–200 MW blocks of continuous power, but licensing and construction do not compress just because model demand does. For investors, the signal is not that every nuclear developer becomes a winner. It is that any credible path to firm power is gaining strategic value.
The grid is where AI ambition meets public reality
The more immediate bottleneck is still the grid. In Seattle, four proposed data centers, including Equinix-linked projects, could require 369 MW of combined electricity demand, roughly one-third of the city’s daily consumption. That is enough load to turn a procurement decision into a municipal politics problem. If households see data centers as the reason rates rise, permitting risk becomes infrastructure risk.
The UK offered the harsher version. Grid connection waits have stretched to 12–15 years after a 460% jump in demand during the first half of 2025, with requests for 96 GW of high-voltage capacity exceeding Britain’s 72 GW of total generation capacity. That is the software-to-infrastructure mismatch in one statistic: AI demand arrives as a step function, while grid systems move through planning, equipment supply, rights-of-way, and permitting cycles measured in years.
So buyers are building around the bottleneck. Data center operators are pairing batteries with fossil generation because average grid connection waits can stretch to four years. Startups are flashing “bragawatts,” gigawatt-scale power pledges meant to secure bulk PPAs at $40–60/MWh rather than paying spot rates above $200. The cleanest read is not ideological. Reliability is becoming financial strategy. If power is 30–50% of operating expense, credible energy access starts to look like a moat.
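The PPA-versus-spot gap is large enough to see in simple arithmetic. A rough sketch using the quoted $40–60/MWh PPA range against $200/MWh spot; the 100 MW block size and continuous load factor are illustrative assumptions:

```python
# Rough annual power-cost comparison for a continuous 100 MW block,
# using the quoted $40-60/MWh PPA range vs $200/MWh spot.
# Block size and 100% load factor are illustrative assumptions.
HOURS_PER_YEAR = 8760
BLOCK_MW = 100

def annual_cost_musd(price_per_mwh: float, mw: float = BLOCK_MW) -> float:
    """Annual power cost in $ millions at a given price per MWh."""
    return price_per_mwh * mw * HOURS_PER_YEAR / 1e6

print(annual_cost_musd(40))   # 35.04  ($M/year at the low end of the PPA range)
print(annual_cost_musd(60))   # 52.56  ($M/year at the high end)
print(annual_cost_musd(200))  # 175.2  ($M/year at spot)
```

Under these assumptions, a bulk PPA saves on the order of $120–140 million a year per 100 MW block relative to $200 spot, which is the financial logic behind gigawatt-scale power pledges.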
That is why the picks-and-shovels earnings mattered. GE Vernova booked $2.4 billion in data-center orders in Q1 alone, more than its entire 2025 total in the category. nVent raised full-year organic sales guidance to 21–23%, with infrastructure now more than 55% of sales. Vertiv reported 30% revenue growth and kept showing up wherever high-density cooling and power management were discussed. These are not distant TAM slides. These are order books.
Compute demand fragmented beyond the GPU headline
The compute story did not disappear. It became more specialized.
NVIDIA remains the gravity well, but its reported $36 billion run of supply-chain commitments showed how the largest platform companies are beginning to behave like industrial allocators. The reported commitments to OpenAI, Lumentum, Coherent, and Marvell read less like ordinary strategic investments and more like pre-purchases of bottlenecks. If the components around accelerators become scarce, the accelerator vendor has to secure the surrounding ecosystem before demand consumes it.
Hyperscalers are doing their own version. Broadcom’s multi-year Meta partnership around next-generation MTIA chips and Ethernet, Marvell’s Google-linked custom chip momentum, and Qualcomm’s planned data center chip for a large hyperscaler point to a compute market separating by workload, customer economics, and power envelope. Training still favors the densest accelerator clusters, but inference opens more room for CPUs, Arm systems, ASICs, optics, and customer-specific silicon.
Memory may be the next pressure point. Micron’s Q2 revenue nearly tripled year over year as AI orders soaked up high-margin capacity, while RAM prices reportedly rose sharply as capacity shifted away from consumer markets. SK Hynix being sold out for 2026 is the kind of detail that matters more than another model benchmark. Inference scale needs bandwidth, HBM, DRAM, packaging, and enough upstream equipment to keep the memory stack from rationing deployment.
There was also a useful reminder that scarcity cuts both ways. Poet Technologies fell 47% after Marvell canceled orders through Celestial AI. In a concentrated supply chain, being adjacent to the right customer can re-rate a company quickly. Losing that customer can do the same in reverse.
Data centers became the evidence layer
The market needed proof that AI demand is translating into committed infrastructure rather than remaining trapped in capex guidance and narrative. Data centers supplied it.
Digital Realty posted $707 million in Q1 leasing, with a $1.8 billion backlog, a $16.5 billion development pipeline, and 1.2 GW under construction. Equinix reported its largest quarter of leasing activity, with inference workloads accounting for 60% of new deployments. That last number is important. Training clusters created the first wave of urgency, but enterprise inference is what turns AI from episodic capex into durable occupancy, power draw, and cooling demand.
The footprint is widening at both ends. Edge data center forecasts varied with source and market definition, from $27.3 billion in 2025 to $72.7 billion by 2033 in one estimate and from $18.08 billion to $62.18 billion by 2031 in another, but the direction is consistent. Inference pushes compute closer to users, factories, devices, and regulated environments. At the other end, SoftBank’s reported $100 billion Roze data center construction venture suggests construction capacity itself is becoming a strategic asset.
Even stranded or adjacent power assets are being reclassified. Riot Platforms delivered its first 5 MW to AMD, with a potential path to 200 MW of lease capacity. Coatue launched Next Frontier to buy AI data center land. The infrastructure game is moving upstream from rack demand into site control, interconnection, local politics, and the industrial choreography required to deliver megawatts on schedule.
Commodities entered the model stack
The quieter story, and maybe the one with the longest tail, was commodities. AI infrastructure is making old-economy inputs newly legible to technology investors.
Copper was the cleanest case. Teck Resources reported record quarterly sales of 140,000 tons, up 32% year over year, and $2.1 billion of adjusted EBITDA, up 125%. Data centers are not the only copper buyer, but electrification, grid reinforcement, transmission, and high-power commercial sites all push on the same demand curve. Aluminum, steel, lithium, subsea cable, antimony, rare earths, and iron ore all showed up for similar reasons: the AI buildout is arriving into an already crowded capital cycle for electrification materials.
Helium was the more surgical risk. Attacks on Qatar’s Ras Laffan facility reportedly kept 30% of global helium supply offline, a problem because helium is critical for semiconductor manufacturing and South Korea imports a large share from Qatar. A commodity disruption in the Gulf can hit Samsung and SK Hynix, then memory supply, then servers, then deployment timelines. That is the kind of dependency AI investors are not trained to price until it breaks something.
Critical minerals took on the same strategic tone. Kazakhstan’s estimated $46 trillion in critical mineral deposits and China’s role in Central Asian exports put geopolitics directly into the supply-chain map. The U.S. Export-Import Bank’s $10 billion Project Vault approval fits the same policy response. Strategic inventory is back in fashion, not because AI alone consumes every mineral, but because AI demand is being layered on top of electrification, defense, grid expansion, and national-security procurement.
Defense access raised the institutional stakes
The week ended with a sharper institutional signal: the Pentagon expanded classified AI network access through agreements with OpenAI, Google, SpaceX, Microsoft, AWS, NVIDIA, and Reflection AI. Anthropic’s reported exclusion over a supply-chain risk designation was the important edge. Frontier model capability is no longer sufficient for certain customers. Defense access depends on trust, procurement posture, infrastructure relationships, political resilience, and the perceived cleanliness of the supply chain.
That widens the AI moat from product quality to institutional acceptability. The companies that win classified, regulated, or national-security workloads will likely be the ones that can bundle models with cloud infrastructure, secure compute, power availability, compliance, and durable government relationships. Another physical layer, just dressed in procurement language.
What to watch
Power contracts are becoming the closest thing to a forward indicator for AI capacity. Watch whether gigawatt-scale procurement turns into deliverable megawatts, and whether local rate politics force hyperscalers to absorb more grid cost directly.
Memory deserves the next level of attention. If HBM and DRAM remain rationed toward AI customers, the bottleneck may migrate from accelerator access to bandwidth and packaging, with Micron, SK Hynix, Samsung, and the equipment chain taking more of the economics.
Inference is the proof point. Equinix’s 60% disclosure matters because sustained enterprise inference demand can justify the broader buildout beyond frontier training clusters. If that number keeps rising, power, cooling, edge, and data center leasing remain live earnings stories.
Commodities are now part of the stack. Uranium, copper, helium, lithium, rare earths, steel, and cable will not all trade as pure AI proxies, but one of them can become the next constraint at any time. The farther AI moves into the industrial base, the more old supply-chain shocks can show up as model deployment risk.
The week’s message was blunt enough. AI is still a software revolution, but the scarce asset is becoming physical capacity: power that clears, chips that ship, memory that is allocated, land that is entitled, cooling that works, metals that arrive, and institutional channels that trust the supplier. The companies that control those inputs may own the most durable part of the next leg.
