Broadcom’s Thor Ultra Challenges Nvidia in AI Networking Race

Broadcom’s Thor Ultra chip scales AI clusters, challenging Nvidia and powering OpenAI’s massive deployments with open standards and blazing speeds.

Broadcom's Thor Ultra chip enables massive AI scaling with open Ethernet standards.

Last Updated: October 14, 2025

Written by Chloe Silva

Scaling AI’s Backbone

Broadcom’s Thor Ultra chip, unveiled on October 14, 2025, is a serious contender in the race to power AI’s future. Designed to connect hundreds of thousands of AI accelerators in data centers, this 800 gigabit-per-second networking chip tackles the bottlenecks that slow down massive AI training clusters. Think trillion-parameter models like GPT-5, which demand unprecedented scale. Unlike traditional networking solutions, Thor Ultra leverages the Ultra Ethernet Consortium’s open standards, finalized in June 2025, to deliver blazing speed and flexibility. This isn’t just about faster connections; it’s about enabling AI systems to grow bigger and smarter without hitting a wall.

The chip’s debut comes hot on the heels of Broadcom’s October 13 announcement of a 10-gigawatt custom chip deal with OpenAI. That partnership, set to roll out between 2026 and 2029, underscores the stakes. As AI models balloon in complexity, the networks linking their accelerators become the linchpin. Broadcom is betting that open, interoperable solutions will outpace proprietary systems, and Thor Ultra is its flagship in that fight.

OpenAI’s Massive Leap

OpenAI’s decision to partner with Broadcom for a 10-gigawatt deployment offers a real-world test case. This isn’t a small experiment; it’s a commitment to building AI infrastructure on a scale equivalent to powering 8 million homes. OpenAI’s engineers say designing custom chips and networks lets them bake in lessons from frontier models, pushing capabilities beyond what off-the-shelf hardware can do. Thor Ultra’s features, like multipathing and out-of-order packet delivery, make this possible by ensuring data flows smoothly across sprawling clusters, even when packets take different routes or arrive out of sequence.
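To make the out-of-order idea concrete, here is a minimal, purely illustrative sketch of receiver-side packet reordering: when multipathing sends packets down different routes, the receiver buffers whatever arrives early and releases data only in sequence order. The `ReorderBuffer` class and its structure are hypothetical teaching code, not Broadcom's implementation.

```python
# Illustrative sketch of receiver-side packet reordering when multipathing
# delivers packets out of sequence. Hypothetical code, not Thor Ultra's design.
import heapq

class ReorderBuffer:
    """Buffers out-of-order packets and releases them in sequence order."""

    def __init__(self):
        self.next_seq = 0   # next sequence number the application expects
        self.heap = []      # min-heap of (seq, payload) packets waiting

    def receive(self, seq, payload):
        """Accept one packet (possibly out of order); return the payloads
        that are now deliverable in contiguous sequence order."""
        heapq.heappush(self.heap, (seq, payload))
        delivered = []
        # Drain the heap as long as the smallest buffered packet is the
        # one we are waiting for.
        while self.heap and self.heap[0][0] == self.next_seq:
            _, p = heapq.heappop(self.heap)
            delivered.append(p)
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
print(buf.receive(1, "b"))   # arrives early: buffered, nothing delivered -> []
print(buf.receive(0, "a"))   # fills the gap -> ['a', 'b']
print(buf.receive(2, "c"))   # already in order -> ['c']
```

The payoff is that a sender can spray packets across every available path without stalling the whole flow waiting for strict in-order arrival, which is exactly why out-of-order delivery matters at cluster scale.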

Meta’s approach offers another angle. Meta has leaned into Ethernet-based networking for its AI workloads, serving billions of daily inference requests using MTIA chips developed in partnership with Broadcom. By adopting RDMA over Converged Ethernet (RoCE), Meta’s data centers achieve performance close to Nvidia’s InfiniBand, with independent tests showing Ethernet edging out InfiniBand in some benchmarks, like BERT-Large training, by tiny margins. Both cases highlight a shift: open standards are closing the gap with proprietary systems, giving companies like OpenAI and Meta flexibility to scale without being locked into one vendor.

Nvidia’s Shadow Looms Large

Nvidia isn’t sitting still. On October 13, 2025, the same day as Broadcom’s OpenAI deal, Nvidia announced that Meta and Oracle would use its Spectrum-X Ethernet switches for giga-scale AI supercomputers. Nvidia’s strength lies in its end-to-end ecosystem, where GPUs, networking, and software like NCCL are tightly integrated. This gives Nvidia an edge in performance predictability, especially for customers already invested in its InfiniBand-based systems. But Broadcom’s Thor Ultra counters with advantages in interoperability and cost at scale, appealing to hyperscalers like Google and Amazon who want to avoid vendor lock-in.

The competition isn’t just about speed. Nvidia’s InfiniBand still boasts lower latency, around 5 microseconds, compared to traditional Ethernet implementations at about 10 microseconds, though optimized Ethernet solutions like DriveNets Network Cloud-AI are closing that gap to 7 microseconds. Broadcom’s chip also introduces scalable congestion control to prevent bottlenecks as clusters grow, a critical feature when you’re linking hundreds of thousands of accelerators. Still, transitioning to Ethernet requires new expertise, and some operators worry about configuration complexity compared to Nvidia’s plug-and-play approach.

Energy Costs of AI’s Ambition

The race to scale AI comes with a catch: power consumption. OpenAI’s 10-gigawatt deployment alone matches the energy needs of 8 million households, and data centers could eat up 12% of U.S. electricity by 2028, a 44% jump from 2024. Broadcom’s Thor Ultra tackles this with co-packaged optics, cutting optical interconnect power use by about 70% compared to older tech. That’s a big deal when every watt counts. But the broader trend is sobering. Goldman Sachs predicts global data center power demand could surge 165% by 2030, driven by AI’s hunger for compute.
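The 70% figure is easier to feel with back-of-envelope arithmetic. The per-link wattage and link count below are hypothetical placeholders chosen for illustration, not published Thor Ultra numbers; only the ~70% saving comes from the article.

```python
# Back-of-envelope arithmetic for the ~70% optical-interconnect power saving.
# links and watts_per_link are hypothetical, not published figures.

def optics_power_mw(links, watts_per_link, saving=0.70):
    """Total optics power in megawatts before and after co-packaged optics."""
    before = links * watts_per_link / 1e6   # watts -> megawatts
    after = before * (1 - saving)
    return before, after

before, after = optics_power_mw(links=500_000, watts_per_link=15)
print(f"{before:.1f} MW -> {after:.2f} MW")  # 7.5 MW -> 2.25 MW
```

Even with made-up per-link numbers, the shape of the result holds: at cluster scale, a 70% cut on the optics line item alone is measured in megawatts.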

Local communities near these mega-data centers are raising valid concerns about strained grids and water used for cooling. The environmental toll forces a tough question: can efficiency gains keep up with AI’s exponential growth? Broadcom’s push for open standards could help by fostering competition and innovation, but the industry needs broader collaboration with renewable energy providers to avoid unsustainable trade-offs.

A New Playing Field for AI

Broadcom’s Thor Ultra isn’t just a chip; it’s a signal that open standards could reshape AI’s future. By aligning with the Ultra Ethernet Consortium, Broadcom enables smaller players, from startups to research labs, to access cutting-edge infrastructure without bowing to proprietary giants. The AI chip market, valued at $52.92 billion in 2024, is projected to hit $295.56 billion by 2030, and Broadcom’s projected $60–90 billion slice by 2027 depends on hyperscalers like OpenAI, Google, and Meta. But the real win might be for end users, who’ll see smarter AI applications, from better language models to sharper video generation like OpenAI’s Sora.

The flip side? Concentration risks. With so much power in a few hyperscalers, smaller companies might struggle to secure compute resources. And while Thor Ultra’s interoperability promises flexibility, some experts question if vendor-specific tweaks could fragment the ecosystem. The lessons from OpenAI and Meta suggest open standards can deliver performance and scale, but only if the industry commits to true collaboration over competition.