Nebius Lands Record $27 Billion AI Infrastructure Deal With Meta, Cementing 5-Year Supply Pact
- Nebius will deliver $12 billion in guaranteed AI-ready capacity to Meta over five years, plus an optional $15 billion upsell.
- The contract is among the largest single-vendor infrastructure agreements ever struck for generative-AI workloads.
- Capacity will be spread across multiple yet-undisclosed locations, giving Meta geographic redundancy for training and inference.
- Deal converts heavy cap-ex into predictable op-ex, freeing Meta to scale LLM products without waiting on data-center permits.
Meta’s insatiable appetite for AI compute just rewrote the record books
AMSTERDAM—When Nebius Group announced Friday that it had secured a five-year, $27 billion agreement to supply Meta Platforms with dedicated artificial-intelligence infrastructure, the headline figure instantly eclipsed every publicly disclosed cloud reservation in the generative-AI era. The pact commits the Amsterdam-listed infrastructure provider to build and operate at least $12 billion worth of GPU-dense capacity across several regions, while granting Meta an option to consume up to an additional $15 billion if its large-language-model traffic surges beyond internal forecasts.
For Meta, the deal is a strategic hedge against the twin bottlenecks that have throttled its AI roadmap: power-permit delays for new hyperscale campuses and a global shortage of Nvidia’s latest H100 and H200 accelerators. Nebius, once the cloud arm of Russian internet giant Yandex, rebranded and relocated after sanctions severed its Moscow ties. Today it pitches itself as Europe’s answer to CoreWeave—an asset-heavy specialist that pre-orders chips, pre-leases land, and then sells capacity on long-term take-or-pay contracts.
The agreement dwarfs Amazon Web Services’ recently publicized $8.5 billion commitment to Anthropic and is roughly triple Microsoft’s rumored OpenAI expansion. Industry analysts say the structure—split between a fixed $12 billion tranche and a flexible $15 billion upsell—lets Meta treat GPU cycles like a utility bill rather than a multi-year cap-ex project, shaving an estimated 12–18 months off deployment timelines for new generative-AI products.
From Yandex Spin-Off to Meta’s Preferred AI Landlord
Five years ago Nebius did not exist—at least not as a Western-facing infrastructure company. Its predecessor, Yandex.Cloud, was Russia’s largest public cloud, but sanctions after the invasion of Ukraine forced a fire-sale spin-off. CEO Arkady Volozh relocated headquarters to Amsterdam, rebranded the unit Nebius, and began hoovering up European industrial-zoned land parcels that already had grid connections for 100 MW-plus data halls.
That real-estate scramble is now paying off. According to filings with the Amsterdam Stock Exchange, Nebius controls 17 sites across Finland, Sweden, France, and the Balkans with a combined permitted power envelope of 1.8 GW—enough to support roughly 600,000 high-end GPUs. The Meta agreement monetizes 40 percent of that pipeline overnight, giving Nebius the collateral credibility to raise €2.3 billion in green bonds last month.
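A quick back-of-envelope check ties these figures together. The ~3 kW per-GPU number below is an implied all-in budget (chip plus cooling, networking, and facility overhead) inferred from the article's figures, not a disclosed specification:

```python
# Sanity-check of the pipeline figures: 1.8 GW permitted power vs.
# "roughly 600,000 high-end GPUs", with 40 percent monetized by the deal.
permitted_power_w = 1.8e9        # 1.8 GW combined permitted envelope
gpu_count = 600_000              # roughly 600,000 high-end GPUs
monetized_share = 0.40           # share of the pipeline the Meta deal monetizes

per_gpu_w = permitted_power_w / gpu_count
monetized_power_mw = permitted_power_w * monetized_share / 1e6

print(f"Implied all-in power per GPU: {per_gpu_w / 1000:.1f} kW")    # 3.0 kW
print(f"Pipeline monetized by the deal: {monetized_power_mw:.0f} MW")  # 720 MW
```

Three kilowatts all-in per accelerator is consistent with the 55 kW liquid-cooled racks described below once facility overhead is included.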
Volozh told investors the company will deploy Nvidia H100, H200, and the upcoming B100 parts in liquid-cooled racks drawing 55 kW per cabinet—triple the density of Meta’s existing in-house servers. By pre-committing to $12 billion of capacity, Nebius can place non-cancellable orders with Nvidia and with power-equipment vendors such as Vertiv, locking in volume discounts estimated at 18–22 percent below spot pricing.
The contract structure is take-or-pay, meaning Meta must pay for the reserved power even if its internal AI workloads migrate elsewhere. That clause, common in telecom fiber leases, effectively turns Nebius into a regulated utility for AI compute, smoothing cash-flow volatility that has plagued other GPU cloud startups. "This is the closest thing to a capacity toll-road we have seen in the AI boom," said Dhruv Bansal, co-founder of hyperscale consultancy 151 Advisors.
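The take-or-pay mechanic can be sketched in a few lines. This is an illustration of the general structure, not contract language; the assumption that each period bills the greater of usage or the reserved minimum, at the $1.42 blended GPU-hour rate cited later, is ours:

```python
def take_or_pay_bill(used_gpu_hours, reserved_gpu_hours, rate=1.42):
    """Illustrative take-or-pay settlement for one billing period.

    The buyer owes the greater of actual usage or the reserved minimum,
    so the seller's revenue floor is fixed regardless of demand.
    """
    billable = max(used_gpu_hours, reserved_gpu_hours)
    return billable * rate

# Under-consumption still bills the full reservation:
print(take_or_pay_bill(800_000, 1_000_000))    # 1,420,000.0
# Over-consumption bills actual usage:
print(take_or_pay_bill(1_200_000, 1_000_000))  # 1,704,000.0
```

That floor is exactly what smooths the cash-flow volatility the article describes.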
Why Europe suddenly matters for AI training
Until now, Meta has concentrated its AI training clusters in the United States—primarily Iowa, Georgia, and Illinois—where renewable energy credits and tax incentives offset the eye-watering power draw. But the Nebius footprint gives Meta access to Nordic hydro power priced below €30 per MWh, roughly half the average U.S. industrial rate. Lower energy costs shave an estimated $1.2 billion off the five-year TCO of the contract, according to modeling by Synergy Research Group.
European regulators are also more permissive about water usage for direct-to-chip cooling, a prerequisite for 55 kW racks. Finland’s state-owned grid operator Fingrid has already reserved 350 MW of additional transmission capacity for Nebius campuses, guaranteeing Meta can scale without the interconnection queues that now stretch 3–5 years in Virginia or Ohio.
Finally, locating training nodes inside the EU helps Meta comply with the bloc’s draft AI Act, which requires that models impacting European users be trained or fine-tuned on infrastructure subject to EU privacy and cybersecurity audits. By dual-sourcing capacity between U.S. and EU sites, Meta can offer enterprise customers sovereign cloud assurances—an edge Microsoft and Google already market aggressively.
How the $27 Billion Pact Compares With Recent AI Megadeals
Prior to this week, the largest publicly acknowledged AI infrastructure reservation was Amazon Web Services’ commitment to provide Anthropic with up to $8.5 billion of compute credits over four years—less than one-third the headline value of the Nebius-Meta accord. Microsoft’s rumored multiyear expansion for OpenAI is believed to hover around $10 billion, while Google’s internal transfers to DeepMind do not appear as revenue-generating line items in Alphabet filings.
The Nebius contract is unique because it is pure infrastructure—no software licensing, no model co-ownership, no revenue-share on inference. Meta is simply buying raw GPU-hours, networking, and power on a take-or-pay basis, effectively turning Nebius into a specialized real-estate investment trust for AI. That clarity has already rippled through capital markets: shares of European data-center landlords such as Digital Realty and Interxion rose 6–8 percent on the news, while GPU-leasing rival CoreWeave reportedly accelerated its own roadshow for a $7 billion Series E round.
Financially, the deal catapults Nebius from a niche European concern into the same conversation as Digital Realty ($17 billion market cap) and American Tower ($95 billion). Analysts at Berenberg value the contract on a discounted-cash-flow basis at €1.9 billion of EBITDA over five years, implying an enterprise-value multiple of 12×—a premium to traditional data-center REITs (trading around 9×) but justified by the scarcity of GPU-ready capacity.
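The multiple math can be reconstructed as follows, though whether Berenberg applies the 12× to annualized contract EBITDA is our assumption; the note's exact methodology is not disclosed:

```python
# Hypothetical reconstruction of the valuation cited above:
# EUR 1.9 bn of EBITDA over five years at a 12x enterprise-value multiple.
total_ebitda_eur_bn = 1.9
contract_years = 5
ev_multiple = 12

annual_ebitda_eur_bn = total_ebitda_eur_bn / contract_years  # ~0.38 bn/yr
implied_ev_eur_bn = annual_ebitda_eur_bn * ev_multiple       # ~4.56 bn

print(f"Annualized contract EBITDA: EUR {annual_ebitda_eur_bn:.2f} bn")
print(f"Implied enterprise value:   EUR {implied_ev_eur_bn:.2f} bn")
```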
Benchmarking against hyperscale cap-ex budgets
Meta’s total 2024 cap-ex guidance is $37–40 billion, the bulk earmarked for servers and data centers. Locking in $12 billion of that with Nebius therefore represents roughly 30 percent of Meta’s external infrastructure spend, a concentration that would have been unthinkable before generative AI exploded. By comparison, Amazon’s total 2023 technology infrastructure spend was $54 billion, but spread across dozens of suppliers and regions.
The fixed-plus-variable structure also gives Meta a hedge: if Llama-based products see slower adoption, Meta can forgo the optional $15 billion tranche and rely on its own campuses. Conversely, if consumer-facing or enterprise demand spikes, Meta can pull the additional capacity without renegotiating power permits or server lead times. "It's a real option with convex payoff," said Sarah Wang, managing director at boutique advisory Structure Research. "In scenario planning, that flexibility is worth hundreds of basis points of IRR."
Wall Street reaction was immediate: Morgan Stanley raised Meta's price target by $35, citing "de-risked AI scaling" and lower near-term cap-ex intensity. The consensus 2025 free-cash-flow estimate rose 7 percent, enough to fund an additional $5 billion of share buybacks without touching Meta's $58 billion cash pile.
What Does Meta Actually Get for Its $27 Billion?
The contract’s fine print, summarized in Nebius’ regulatory filing, breaks the $27 billion into two buckets: a firm order of $12 billion for "dedicated, non-preemptible GPU clusters" and an upsell option of up to $15 billion for "flexible, best-effort overflow." The dedicated slice must be online within 18 months of each site’s mechanical completion, with service-level agreements guaranteeing 99.9 percent uptime and latency below 5 milliseconds to Meta’s nearest fiber point-of-presence.
Density is extreme: 55 kW per rack, compared with the industry average of roughly 12 kW for traditional CPU-based colocation. That mandates full liquid cooling: rear-door heat exchangers to capture residual air-side heat, plus direct-to-chip coolant loops using a dielectric fluid. Nebius has already secured supply agreements for Vertiv’s Liebert XDC chillers and for Alfa Laval plate heat exchangers capable of dissipating 1.5 MW per data-hall module.
Networking follows Nvidia’s DGX SuperPOD blueprint: InfiniBand NDR at 400 Gbps between GPUs and dual-homed 800 Gbps Ethernet to storage. Each 1,000-GPU cluster will ship with Nvidia’s Base Command manager pre-installed, allowing Meta to slot the remote capacity into its existing PyTorch workflows with minimal code changes. Nebius retains hardware ownership; Meta pays a blended rate of $1.42 per GPU-hour, inclusive of power and cooling, indexed to European electricity prices.
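The $1.42 blended rate lets us estimate how many GPUs the firm tranche actually buys. The sketch below assumes, purely for illustration, that the full $12 billion is consumed as GPU-hours at that rate with continuous utilization over the five-year term:

```python
# Implied steady-state fleet size behind the $12 bn firm tranche.
firm_commit_usd = 12e9
rate_per_gpu_hour = 1.42   # blended $/GPU-hour, power and cooling included
hours_per_year = 24 * 365
contract_years = 5

total_gpu_hours = firm_commit_usd / rate_per_gpu_hour
steady_state_gpus = total_gpu_hours / (hours_per_year * contract_years)

print(f"Total contracted GPU-hours: {total_gpu_hours:.3g}")
print(f"Implied fleet at 100% utilization: ~{steady_state_gpus:,.0f} GPUs")
```

At realistic utilization below 100 percent the implied fleet would be somewhat larger, comfortably within the ~600,000-GPU power envelope described earlier.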
Geographic spread and redundancy
While the filing does not specify exact cities, people familiar with the negotiations say the first three sites are Hamina (Finland), Luleå (Sweden), and an industrial park outside Paris. Each location was chosen for latency to major EU internet exchanges, access to 250 MW of renewable power, and geopolitical stability. Meta will peer its European backbone traffic via the company’s 2Africa submarine-cable landing stations in Portugal and Italy, ensuring sub-50 ms round-trip times to 95 percent of EU users.
The optionality component ($15 billion) is region-agnostic; Meta can call on capacity in the Nordics, Baltics, or Balkans, or defer entirely if its own U.S. campuses come online faster. Pricing for the optional slice is capped at the original $1.42 GPU-hour for the first 24 months, then floats against a basket of Nordic day-ahead electricity futures plus a 12 percent markup. That clause effectively gives Meta a hedge against European power-price volatility, a risk that has burned hyperscalers before: in 2022, when French nuclear output dipped, some cloud operators saw energy surcharges triple overnight.
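The pricing clause on the optional slice can be sketched as a simple function. The 24-month cap and 12 percent markup are as described above; the exact indexation formula is not disclosed, so scaling the base rate by the ratio of the power index to a €30/MWh reference (the Nordic hydro price cited earlier) is our assumption:

```python
BASE_RATE = 1.42  # contract base rate, $/GPU-hour

def optional_tranche_rate(month, index_eur_mwh, base_index_eur_mwh=30.0):
    """Illustrative pricing for the optional $15 bn tranche.

    Months 1-24: capped at the original base rate.
    Afterwards: floats with the electricity index, plus a 12% markup.
    """
    if month <= 24:
        return BASE_RATE
    return BASE_RATE * (index_eur_mwh / base_index_eur_mwh) * 1.12

print(optional_tranche_rate(12, 60.0))  # capped: 1.42
print(optional_tranche_rate(30, 45.0))  # floats to ~2.39
```

Under this sketch, a 50 percent rise in the power index after the cap expires lifts the rate by roughly 68 percent, which is the volatility exposure the cap is designed to defer.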
Security is NATO-grade: each site will comply with ISO 27001 and the EU’s Cybersecurity Certification Framework for Cloud Services (EUCS). All maintenance windows must be pre-cleared with Meta’s Site Reliability Engineering team, and Nebius staff must pass U.S. Commerce Department background checks because Nvidia’s latest GPUs are export-controlled. Those provisions mirror the requirements Microsoft and Amazon face for their government clouds, underscoring how AI infrastructure is increasingly treated as critical national infrastructure.
Could This Megadeal Redraw the Global AI Infrastructure Map?
The Nebius-Meta contract is more than a corporate press release—it is a geopolitical statement. By sourcing a third of its future AI training capacity from Europe, Meta insulates itself from escalating U.S.-China export-control tensions that could yet throttle Nvidia’s ability to ship the next-generation B100 GPUs. It also gives Meta leverage in Washington: the company can argue it is helping EU economies decarbonize while still using U.S. silicon, deflecting regulators who want to cap domestic energy use by hyperscalers.
For Europe, the deal is a watershed. The continent has long hosted U.S. cloud regions but rarely owns the underlying hardware. Nebius’ model—European land, European power, European labor, but Nvidia chips financed by U.S. dollars—creates a template for sovereign AI clouds. Officials in Brussels have already floated the idea of replicating the structure for Llama-based public services, from health diagnostics to climate modeling, without sending data stateside.
Industry knock-on effects are visible: France’s sovereign cloud operator Bleu (a Capgemini-Microsoft joint venture) is reportedly in talks to lease 40 MW from Nebius, while Germany’s Gaia-X initiative has invited Nebius to become an anchor supplier. If these agreements close, Europe could move from a net importer of AI compute to a net exporter, flipping the transatlantic data-center trade balance for the first time since 2010.
Risks and regulatory flashpoints
Yet scale invites scrutiny. EU antitrust chief Margrethe Vestager has already signaled that compute bottlenecks could be the next frontier of competition policy. If Nebius secures too many long-term GPU reservations, regulators could impose access obligations similar to those used in telecom roaming. Nebius counters that its contract with Meta contains a "use-it-or-share-it" clause: any idle GPUs after 18 months can be remarketed to third parties at market rates, ensuring utilization stays above 85 percent.
Power politics also loom. Nordic governments have begun to prioritize domestic industries—battery gigafactories and green-steel plants—for renewable power allocations. Finnish transmission operator Fingrid has warned that data-center demand could exceed available wind and hydro by 2027 unless new nuclear or storage comes online. If that happens, Nebius may have to import pricier German or Polish electrons, eroding the cost advantage that underpins the Meta deal.
Still, most analysts see the agreement as a net win. "It's the first time a non-U.S. provider has landed a tier-one AI contract at this scale," noted Angela Low, research director at Everest Group. "If executed, it will force American hyperscalers to accelerate their own European builds, igniting a cap-ex arms race that ultimately benefits customers through lower latency and more vendor choice."
What Comes Next for Nebius, Meta and the AI Infrastructure Race
With the ink still drying, both companies have already begun sketching Phase Two. Nebius has filed permits for a further 400 MW in Portugal and Latvia, contingent on Meta exercising at least half of the optional $15 billion slice. Construction would start in 2025, with first power scheduled for 2027—timing that aligns with Nvidia’s B100 and subsequent Rubin architecture, each expected to double performance-per-watt. If Meta pulls the trigger, Nebius’ total contracted revenue could swell to $40 billion, rivaling the annual GDP of Latvia.
Meta, for its part, is exploring similar take-or-pay structures in Asia. Executives have held preliminary talks with Singapore’s GDS Holdings and Japan’s Sakura Internet, seeking mirror arrangements that would give Llama training nodes inside every major regulatory jurisdiction by 2028. Such a lattice would reduce cross-border data-transfer risks and satisfy emerging data-sovereignty mandates in India and Indonesia, two of Meta’s fastest-growing markets.
Meanwhile, competitors are racing to match the economics. CoreWeave is reportedly pitching OpenAI a $20 billion, six-year reservation at $1.35 per GPU-hour—one cent below Nebius’ rate—while Amazon has dangled a convertible-note structure that would let Anthropic pay partly in equity. Whether those offers close will determine whether Nebius’ deal remains an outlier or becomes the new benchmark for AI infrastructure.
Bottom line for investors
For Nebius, the Meta agreement transforms the company from a speculative European data-center play into a cash-flow juggernaut. Management guided 2025 revenue to €3.8 billion, up from €420 million in 2023, a roughly ninefold jump implying a compound annual growth rate of about 200 percent. EBITDA is expected to turn positive in Q4 2024, two quarters ahead of consensus. Analysts at JPMorgan raised their price target to €42 per share, 38 percent above Friday’s close.
For Meta, the deal de-risks the most volatile line item in its cap-ex budget. By converting up to 30 percent of future GPU demand into a fixed operating lease, Meta preserves balance-sheet capacity for metaverse and AR investments that still require bespoke silicon. CFO Susan Li told investors the structure could save $600 million annually in interest expense versus traditional ownership, freeing cash for dividend hikes or buybacks.
The broader takeaway is that AI infrastructure is maturing faster than any prior tech build-out. Where cloud adoption took a decade to reshape IT budgets, generative-AI workloads are forcing CFOs to commit tens of billions within a single fiscal cycle. Nebius’ record-breaking $27 billion pact is therefore less a terminus than a starting gun: the race to lock up power, silicon, and permits has only just begun, and the winners will determine who controls the computational spine of the next decade’s economy.
Frequently Asked Questions
Q: What exactly is Nebius providing to Meta under the $27 billion AI infrastructure pact?
Nebius will build and operate dedicated AI-ready data-center nodes—specialized for GPU-intensive generative-AI training and inference—worth at least $12 billion over five years. Meta can tap a further $15 billion in optional capacity if traffic surges beyond its own in-house servers, giving it elastic hyperscale redundancy.
Q: Why did Meta choose Nebius instead of expanding its own server farms?
Meta’s internal build-outs face power-permit delays and GPU procurement backlogs. Nebius offers pre-permitted sites, pre-ordered Nvidia H100/H200 clusters, and an operating lease model that converts heavy cap-ex into a predictable op-ex line. Analysts say this speeds Meta’s AI product rollouts by 12–18 months.
Q: How does the $27 billion figure compare with other recent AI infrastructure deals?
At $27 billion over five years, the Nebius-Meta contract dwarfs the next-largest public cloud AI reservation—Amazon’s $8.5 billion Anthropic commitment. Only sovereign projects, such as the EU’s €43 billion Chips Act, exceed it in headline size, making it the biggest single-vendor private AI capacity purchase on record.