Oracle to spend $40B on Nvidia chips for OpenAI data center in Texas

News
May 26, 2025 | 6 mins

Move signals OpenAI's break from Microsoft exclusivity as enterprise AI infrastructure costs surge to unprecedented levels.

Oracle logo on building
Credit: Shutterstock

Oracle is reportedly spending about $40 billion on Nvidia's high-performance computer chips to power OpenAI's new data center in Texas, marking a pivotal shift in the AI infrastructure landscape that has significant implications for enterprise IT strategies.

Oracle will purchase approximately 400,000 of Nvidia's GB200 GPUs and lease the computing power to OpenAI under a 15-year agreement for the Abilene, Texas facility, the Financial Times reported, citing several people familiar with the matter.

The site will serve as the first US location for the Stargate project, the $500 billion data center initiative spearheaded by OpenAI and SoftBank.

The transaction exceeds Oracle's entire 2024 cloud services and license support revenue of $39.4 billion, underscoring just how much companies are now willing to invest in AI infrastructure.

For enterprise IT leaders watching their own AI budgets balloon, this deal offers a stark reminder of where the market is heading.

Breaking the Microsoft dependency creates enterprise ripple effects

The deal represents a crucial step in OpenAI's strategy to reduce its dependence on Microsoft, a move that could reshape how enterprises access and deploy AI services. The $300 billion "startup" previously relied exclusively on Microsoft for computing power, with a significant portion of Microsoft's nearly $13 billion investment in OpenAI coming through cloud computing credits, according to the report.

OpenAI and Microsoft terminated their exclusivity agreement earlier this year after OpenAI became frustrated that its computational demands exceeded Microsoft's supply capacity. The two companies are still negotiating how long Microsoft will retain licensing rights to OpenAI's models.

Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, said OpenAI's decision to partner with Oracle represents "a conscious uncoupling from Microsoft's backend monopoly" that gives the AI company strategic flexibility as it scales.

"As AI models scale, so does infrastructure complexity, and vendor neutrality is becoming a resilience imperative," Gogia said. "This move gives OpenAI strategic optionality, mitigating the risks of co-dependence with Microsoft, particularly as both firms increasingly diverge in go-to-market strategies."

Neil Shah, VP for research and partner at Counterpoint Research, said Microsoft's vertical integration creates potential conflicts of interest with other OpenAI customers.

"Diversifying beyond Microsoft for compute resources and infrastructure opens up new partnerships, verticals and customer rolodex for OpenAI," he said. The move also supports OpenAI's potential IPO ambitions by providing "independence and necessary diversification instead of exposure from just one investor or customer."

Infrastructure scale reveals cost pressures

The Abilene facility will provide 1.2 gigawatts of power when completed in mid-2026, making it one of the world's largest data centers. The site spans eight buildings and required $15 billion in financing from its owners, Crusoe and Blue Owl Capital.

At roughly $100,000 per GB200 chip, based on the reported figures, Gogia said the pricing reflects a "brutal new reality" in which AI infrastructure is becoming a luxury-tier investment.
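That per-chip estimate follows directly from the reported deal terms. A quick back-of-the-envelope check, using the rounded figures cited in the report rather than any official Nvidia pricing:

```python
# Back-of-the-envelope check of the implied per-chip price.
# Both inputs are the rounded figures from the report, not official pricing.
total_spend = 40_000_000_000  # ~$40 billion reported Nvidia chip purchase
gpu_count = 400_000           # ~400,000 GB200 GPUs reported

price_per_chip = total_spend / gpu_count
print(f"${price_per_chip:,.0f} per chip")  # prints: $100,000 per chip
```

Since both inputs are approximations, the $100,000 figure is an order-of-magnitude estimate rather than a quoted unit price.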

"This pricing level affirms that the AI infrastructure market is no longer democratizing; it's consolidating," he said. "Access to frontier compute has become a defining moat."

Oracle's competitive leap forward

Oracle's investment positions the company to compete more directly with Amazon Web Services, Microsoft Azure, and Google Cloud in the AI infrastructure market. According to Gogia, the deal represents a significant shift for Oracle from "AI follower to infrastructure architect, a role traditionally dominated by AWS, Azure, and Google."

Shah said Oracle Cloud Infrastructure has been "lagging behind the big hyperscalers in the cloud and AI race," but the Stargate partnership "gives a significant impetus to OCI in this AI Infrastructure-as-a-Service race," noting that Oracle has already been seeing "triple-digit GPU consumption demand for AI training from its customers."

The facility's scale rivals Elon Musk's plans to expand his "Colossus" data center in Memphis to house about 1 million Nvidia chips. Amazon is also building a data center in northern Virginia with more than 1 gigawatt of capacity, showing how the AI infrastructure arms race is heating up across the industry.

Stargate's global ambitions

The Abilene project fits into Stargate's broader plan to raise $100 billion for data center projects, potentially scaling to $500 billion over four years. OpenAI and SoftBank have each committed $18 billion to Stargate, with Oracle and Abu Dhabi's MGX sovereign wealth fund contributing $7 billion each, the report added.

OpenAI has also expanded Stargate internationally, with plans for a UAE data center announced during Trump's recent Gulf tour. The Abu Dhabi facility is planned as a 10-square-mile campus with 5 gigawatts of power.

Gogia said OpenAI's selection of Oracle "is not just about raw compute, but about access to geographically distributed, enterprise-grade infrastructure that complements its ambition to serve diverse regulatory environments and availability zones."

Power demands create infrastructure dilemma

The facility's power requirements raise serious questions about AI's sustainability. Gogia noted that the 1.2-gigawatt demand, "on par with a nuclear facility," highlights "the energy unsustainability of today's hyperscale AI ambitions."

Shah warned that the power envelope keeps expanding. "As AI scales up, the necessary compute infrastructure grows exponentially, and the power envelope is also consistently rising," he said. "The key question is: how much is enough? Today it's 1.2GW; tomorrow it would need even more."

This escalating demand could burden Texas's infrastructure, potentially requiring billions in new power grid investments that "will eventually put burden on the tax-paying residents," Shah noted. Alternatively, projects like Stargate may need to "build their own separate scalable power plant."

What this means for enterprises

The scale of these facilities explains why many organizations are shifting toward leased AI computing rather than building their own capabilities. The capital requirements and operational complexity are beyond what most enterprises can handle independently.

For IT leaders, the AI infrastructure game has become a battle of giants, with entry costs that dwarf traditional enterprise IT investments. Success will increasingly depend on choosing the right partners and specializing where smaller players can still compete effectively. Oracle and OpenAI did not respond to requests for comment on the development.