Nvidia has announced plans to invest up to $100 billion in OpenAI, marking one of the largest infrastructure partnerships in the artificial intelligence industry to date. The collaboration will focus on deploying 10 gigawatts of advanced Nvidia systems to power OpenAI’s next-generation data centers, supporting the rapid growth and scaling of AI applications. The investment will be staged over several years and is tied to the phased deployment of AI infrastructure, beginning in 2026.

OpenAI will use Nvidia’s GPU-based computing platforms as part of its broader effort to expand capacity for training and operating large-scale AI models. The partnership designates Nvidia as OpenAI’s preferred provider of compute and networking technology for this expansion. The first installations under the agreement are expected in the second half of 2026 and will utilize Nvidia’s upcoming Vera Rubin platform. This hardware architecture is designed to support exascale AI computing, featuring high-performance GPUs tailored for AI training workloads and inference at scale.
The initial phase will involve one gigawatt of infrastructure, with additional capacity rolled out in subsequent phases to reach the 10-gigawatt target. OpenAI and Nvidia will coordinate closely on hardware and software development to ensure compatibility and performance across systems. This includes aligning Nvidia’s product roadmap with OpenAI’s requirements for training large language models and other advanced AI systems.
OpenAI to scale data centers with Nvidia’s GPU technology
The companies plan to jointly optimize system architecture, networking, and power usage to deliver scalable and energy-efficient AI data centers. Nvidia’s role in the deal includes supplying the full stack of AI computing technology, encompassing GPUs, networking components, and software tools. The systems will be integrated into purpose-built data centers designed to support OpenAI’s workloads. The project will involve extensive planning around infrastructure siting, power sourcing, and compliance with regulatory frameworks across multiple jurisdictions.
OpenAI CEO Sam Altman stated that compute infrastructure is central to advancing artificial intelligence capabilities and underpins the development of transformative technologies. Nvidia CEO Jensen Huang said the partnership represents a significant step forward in industrial-scale AI deployment, enabled by the next generation of accelerated computing systems. The financial structure of the investment was not disclosed in detail, but the companies confirmed that the full $100 billion figure is tied to the deployment and operation of the planned 10-gigawatt capacity.
Nvidia to deliver end-to-end compute and network stack
The agreement is currently based on a letter of intent, with a definitive contract expected to follow after further planning and review. News of the partnership lifted Nvidia’s stock price, which gained more than 3 percent in trading following the announcement. The company remains the dominant supplier of AI chips globally and continues to expand its footprint across major AI infrastructure projects. The announcement comes amid rising demand for high-performance computing resources to support generative AI, large language models, and enterprise AI platforms.
Both companies are working to secure long-term infrastructure capacity as part of their operational scale-up. Final details of the agreement, including deployment timelines, data center locations, and operational logistics, will be determined over the coming months. The companies are expected to engage with industry stakeholders, regulators, and infrastructure providers as the project advances. Planning will prioritize compute efficiency, scalability, and strategic site placement worldwide. – By Content Syndication Services.
