OpenAI's Custom Chips Spark AI Hardware Race

OpenAI's partnership with Broadcom to build custom AI chips signals a shift in how AI scales, balancing efficiency, cost, and power demands for future growth.

OpenAI builds custom AI chips to boost efficiency for massive user demand. Image: TechReviewer

Last Updated: October 13, 2025

Written by Xin Green

A New Era for AI Hardware

OpenAI's recent move to design its own AI chips with Broadcom marks a pivotal shift. Announced on October 13, 2025, the multibillion-dollar deal aims to deploy 10 gigawatts of custom accelerators, with the first racks slated for the second half of 2026 and the rollout running through 2029. Ten gigawatts is roughly the electricity demand of more than eight million homes. The goal? Create specialized processors, dubbed XPUs, tailored for the heavy lifting of ChatGPT's transformer models. Unlike off-the-shelf chips from Nvidia or AMD, these custom designs promise to squeeze more performance out of every watt, tackling skyrocketing demand from hundreds of millions of weekly users generating billions of queries daily.

This isn't just about keeping up with demand. OpenAI's strategy reflects a broader realization: generic chips leave efficiency on the table. By crafting processors specifically for AI tasks like matrix multiplication and attention, OpenAI aims to cut costs and boost speed. It's like trading a gas-guzzling SUV for an electric car engineered for one specific route. The chips, built on Taiwan Semiconductor Manufacturing Company's 3-nanometer process, integrate high-bandwidth memory and networking to handle massive AI workloads with less energy.
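What does that workload actually look like? Below is a minimal sketch of scaled dot-product attention, the core operation of transformer models, in plain NumPy. The shapes and names are illustrative rather than OpenAI's code, but the structure is the point: nearly all the work is large matrix multiplications, exactly what dedicated accelerators are built to stream through their multiply units.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention head: two big matmuls wrapped around a softmax.

    Q, K, V: (seq_len, d_head) arrays. Sizes are illustrative;
    production models batch this across many heads and sequences.
    """
    d_head = Q.shape[-1]
    # Matmul 1: similarity scores between every query and every key.
    scores = Q @ K.T / np.sqrt(d_head)            # (seq_len, seq_len)
    # Softmax turns each row of scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Matmul 2: blend the value vectors by those weights.
    return weights @ V                            # (seq_len, d_head)

# Toy sizes; a GPT-class model runs thousands of these per token.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 64)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (128, 64)
```

Because those two matmuls dominate the cost, a chip that keeps its multiply units constantly fed beats a general-purpose design on performance per watt, which is the whole argument for XPUs.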

Learning From Google and Amazon

OpenAI isn't the first to take this path. Google's Tensor Processing Units (TPUs), launched in 2016, set the stage for custom AI hardware. TPUs excel at matrix operations, reaching over 90 percent utilization on some matrix-heavy workloads, versus the roughly 30 percent typical of general-purpose GPUs. Google's success lies in tight hardware-software integration, letting it tune models, compilers, and silicon together. But it wasn't easy. Building a software ecosystem to rival Nvidia's CUDA took years, and OpenAI will face similar hurdles as it develops tools for its XPUs.
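Those utilization figures come from a standard back-of-the-envelope measure: model FLOPs utilization, the share of a chip's peak arithmetic throughput a model actually achieves. A hedged sketch, using hypothetical numbers rather than published TPU or GPU benchmarks:

```python
def flops_utilization(params_billion, tokens_per_sec, peak_tflops, training=True):
    """Estimate model FLOPs utilization (MFU).

    Standard approximation: a transformer costs about 6 FLOPs per
    parameter per token to train, about 2 for inference. All inputs
    here are hypothetical, chosen for illustration only.
    """
    flops_per_token = (6 if training else 2) * params_billion * 1e9
    achieved_flops = tokens_per_sec * flops_per_token
    return achieved_flops / (peak_tflops * 1e12)

# Hypothetical: a 70B-parameter model training at 350 tokens/s per chip
# on an accelerator with 400 peak TFLOP/s.
print(f"MFU: {flops_utilization(70, 350, 400):.0%}")  # ~37%
```

Hardware-software co-design is what pushes this number up: when the compiler knows the chip's memory layout, it can fuse operations and keep the arithmetic units busy instead of waiting on data.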

Amazon's Inferentia and Trainium chips offer another lesson. Designed for AWS's AI workloads, they've scaled to availability across AWS regions, outpacing some rivals' custom-silicon efforts. Amazon's focus on inference, like OpenAI's, shows custom chips can slash costs for high-volume tasks. Yet Amazon struggled with software compatibility early on, forcing developers to rewrite code. OpenAI will need to navigate the same trap; its effort is led by Richard Ho, hired in November 2023 after serving as senior director of engineering on Google's TPU program. A misstep in software could delay deployment or limit adoption, even with stellar hardware.

Why Efficiency Matters Now

The push for custom chips comes as AI's energy appetite grows. Each ChatGPT query on an advanced model like GPT-5 uses an estimated 18 watt-hours; at roughly 2.5 billion queries a day, that adds up to about 45 gigawatt-hours daily for OpenAI's operations, the continuous output of nearly two large nuclear reactors. Global data-center electricity demand could roughly double by 2030 to 945 terawatt-hours, with AI driving much of that growth. Custom chips, with a claimed 40 to 50 percent better price-performance ratio, could ease the strain. For users, this might mean faster responses or more affordable access to AI tools.
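The arithmetic behind those figures is worth checking. A quick sanity check in Python, taking the per-query estimate and daily totals above at face value:

```python
# Sanity-check the energy math using the figures in this article.
wh_per_query = 18            # estimated energy per GPT-5-class query
queries_per_day = 2.5e9      # roughly 2.5 billion daily queries

daily_gwh = wh_per_query * queries_per_day / 1e9   # Wh -> GWh
avg_power_gw = daily_gwh / 24                      # continuous draw

print(f"Daily energy:  {daily_gwh:.0f} GWh")       # ~45 GWh
print(f"Average power: {avg_power_gw:.2f} GW")     # ~1.9 GW
# A large nuclear reactor produces roughly 1 GW, hence
# "two reactors running nonstop."
```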

But it's not all smooth sailing. Building chips is risky: industry-wide, only about 14 percent of first silicon designs succeed without costly respins. OpenAI's team, while bolstered by Google veterans, is smaller than those at established players. If transformer models evolve or new AI paradigms emerge, these chips could become obsolete. And the 10-gigawatt rollout demands massive power infrastructure, raising concerns about carbon emissions and grid stress; Goldman Sachs estimates 60 percent of new data center energy will come from fossil fuels, adding roughly 220 million tons of carbon dioxide a year.
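To see why the grid concerns are first-order rather than a footnote, here is a rough sketch of a 10-gigawatt fleet's annual footprint; the capacity factor and carbon intensity are assumptions chosen for illustration, not reported figures for this deployment:

```python
# Rough scale of a 10 GW fleet's annual footprint. The capacity
# factor and carbon intensity are assumptions, not reported figures.
capacity_gw = 10
capacity_factor = 0.8        # assumed fraction of time at full load
tons_co2_per_mwh = 0.4       # assumed grid-average carbon intensity

annual_twh = capacity_gw * capacity_factor * 8760 / 1000
annual_mt_co2 = annual_twh * 1e6 * tons_co2_per_mwh / 1e6  # TWh -> MWh

print(f"Annual energy: {annual_twh:.0f} TWh")   # ~70 TWh
print(f"Annual CO2:    {annual_mt_co2:.0f} Mt") # ~28 Mt
```

Under these assumptions, one company's fleet alone would draw roughly 7 percent of the projected 945 terawatt-hour global data-center total, which is why power sourcing and siting dominate the planning.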

Shifting the Competitive Landscape

OpenAI's chip strategy shakes up the AI hardware race. Nvidia, holding an estimated 65 to 80 percent of the AI accelerator market, faces pressure as marquee customers like OpenAI seek alternatives. Nvidia's CUDA ecosystem remains a powerhouse, but custom silicon could chip away at its dominance; some analysts see its share slipping to around 67 percent by 2030. Broadcom, already designing chips for Google and others, lands a massive win with OpenAI's reported $10 billion order. Meanwhile, AMD benefits from OpenAI's separate deal for its chips, a sign that diversification is the point.

The massive investment, hundreds of billions of dollars across OpenAI's deals with Nvidia, AMD, and Broadcom, raises questions. Will the efficiency savings ever reach users, or will the costs be passed along? And with data centers drawing city-scale power, environmental trade-offs loom large. OpenAI's bet on custom chips could redefine AI's future, but only if it balances efficiency, execution, and sustainability.