Hyperscalers have been seeking alternatives to costly and supply-constrained graphics processing units (GPUs) from NVIDIA Corporation (NASDAQ:NVDA) and Advanced Micro Devices Inc (NASDAQ:AMD) as they race to power data centers.
The recent collaboration between Meta Platforms Inc. (NASDAQ:META) and Broadcom Inc. (NASDAQ:AVGO) aims to address this constraint, the latest in a string of Big Tech moves to bring artificial intelligence (AI) hardware in-house. Under the extended agreement, Broadcom will develop several generations of custom AI chips tailored to Meta's needs through 2029.
The obvious conclusion from the partnership announcement on April 14, 2026, is that the move targets Meta's dependence on NVIDIA. Beyond that, however, it represents a fundamental change in how AI technology is developed, financed, and made efficient.
Key Takeaways
- Big Tech is actively reducing reliance on third-party GPUs by building custom AI chips, but NVIDIA remains central for high-end training workloads.
- Meta's partnership with Broadcom focuses on optimizing repetitive AI tasks, including recommendations and feeds, where efficiency and cost savings matter most.
- Hyperscalers are splitting workloads across NVIDIA, Broadcom, and others, creating a layered market where multiple chipmakers can grow simultaneously.
The Rationale for the Deal
Meta's objective is to optimize the most expensive part of its AI operations. AI infrastructure is expensive, and relying solely on third-party GPUs is not sustainable at hyperscale.
The Broadcom partnership allows Meta to design application-specific integrated circuits optimized for its own workloads, especially inference tasks like ranking, feeds, and chatbot responses.
This matters because inference is quickly becoming the dominant workload in AI. Training large models is still critical, but once deployed, those models must serve billions of users in real time. Custom chips can perform these repetitive tasks more efficiently and at lower cost than general-purpose GPUs.
What the Deal Actually Covers
This deal involves an initial deployment exceeding 1 gigawatt of computing capacity, part of Meta’s broader, massive AI hardware push. The company has projected capital expenditure of $115 billion to $135 billion on AI infrastructure in 2026 alone.
The partnership is built on Broadcom’s XPU platform, which is designed for creating custom AI accelerators. Broadcom will work with Meta across chip design, advanced packaging, and networking to help build out a massive computing foundation for real-time AI experiences at scale.
Specifically, the Meta Training and Inference Accelerator (MTIA) is optimized for inference and recommendation at scale, powering AI across all of Meta’s apps and services. It is customized for ranking content, recommending posts and ads, and running its growing family of generative AI models across Facebook, Instagram, and WhatsApp.
Why This Is Not a Direct Threat to NVIDIA
Despite its custom chip efforts, Meta still requires continuous investment in NVIDIA hardware, even as it pursues better long-term scalability and seeks to avoid the volatile pricing premiums that come with a constrained external GPU supply chain.
Meta is, for instance, committing to deploy six gigawatts of AMD's GPUs and millions of chips from NVIDIA, whose GPUs remain the industry standard thanks to their performance, software ecosystem, and developer adoption.
There are three reasons NVIDIA remains difficult to displace:
1. Training Remains a Task for GPUs
Specialized hardware performs best in focused, repetitive operations. Training next-generation AI models demands versatility, high memory bandwidth, and sophisticated software support, all of which are NVIDIA's strong suits.
2. CUDA and Software Lock-in
NVIDIA's CUDA ecosystem is a major competitive advantage. Developers build AI systems around it, which raises switching costs.
3. Demands for Scale Are Skyrocketing
AI spending is not a fixed pie. Hyperscalers are projected to invest more than $600 billion in AI infrastructure in 2026 alone, leaving room for multiple suppliers to grow at once.
The Widespread Industry Shift
The Meta-Broadcom deal revealed that AI infrastructure is evolving closer to cloud computing, where multiple specialized components work together. This trend is already visible across the industry. Google has long relied on tensor processing units, Amazon (NASDAQ:AMZN) uses Trainium, and even OpenAI has explored custom chips to reduce dependency on NVIDIA.
Recent deals, including large-scale partnerships involving custom silicon, show that companies are diversifying rather than consolidating around a single vendor.
Earlier this month, Broadcom also signed agreements with Alphabet’s Google (NASDAQ:GOOG) and Anthropic to develop the next generation of custom AI processors, confirming its role as the dominant independent design and packaging partner for frontier AI silicon.
A pattern has now emerged: hyperscalers go to NVIDIA for leading-edge training capacity and to Broadcom for custom chips optimized for specific inference workloads at volume. This positions Broadcom as a key beneficiary of the next phase of the AI infrastructure buildout, which is less about training frontier models and more about deploying AI to billions of users as cheaply and efficiently as possible.
What This Means for Investors
In frontier model training, NVIDIA's dominance is unlikely to be contested anytime soon. But the rise of custom silicon will reduce its total available market in the long run, especially for inference. Each gigawatt of MTIA chips that Meta consumes is one gigawatt less for NVIDIA.
Geopolitical tensions and market volatility have weighed on both Broadcom and Meta, which now trade at 34 and 22 times forward earnings, respectively. Both have come off their highs, presenting an opportunity for investors seeking exposure to the custom-silicon trend without paying a premium price.
Bottom Line
The Meta-Broadcom partnership indicates that AI infrastructure is maturing into a multi-layered ecosystem.
Broadcom is emerging as the leading custom chip partner for hyperscalers, having also struck deals with Google and Anthropic, whereas Meta continues buying NVIDIA and AMD GPUs for model training.
Although the Broadcom partnership reduces Meta's dependence on third-party hardware for high-volume, repetitive workloads, NVIDIA continues to power the most demanding workloads and innovation cycles.
Feature image credit: Meta News Release
Benzinga Disclaimer: This article is from an unpaid external contributor. It does not represent Benzinga’s reporting and has not been edited for content or accuracy.