By: Sean O'Loughlin, Joshua Buchalter, Joe Giordano, Lannie Trieu
Oct. 30, 2025 - 4 minutes
Overview:
- Datacenter and artificial intelligence (AI) connectivity merits its own category of coverage, but with an effort that remains unified across networking and compute.
- We initiate on datacenter connectivity as a distinct category within our broader semis and AI coverage.
- This coverage is informed by the respective roadmaps across accelerators and hyperscalers.
- Our investment framework for connectivity asserts that low-level tech differentiation and project diversification are key.
The TD Cowen Insight
We are initiating on datacenter connectivity and networking infrastructure based on our view that connectivity is a fundamental aspect of datacenter and AI that merits its own category of coverage in the context of a unified effort across networking and compute. We develop a bottom-up, port-level connectivity model illustrating a potential over US$75 billion networking silicon total addressable market (TAM) by 2030.
Our Thesis: Connectivity Worthy of Its Own Focus
As AI workloads have scaled to clusters now exceeding 100,000 accelerators, the speed, reach and power draw of the connections between these accelerators are becoming nearly as important as the accelerators themselves. To enable the next generation of AI scaling, investors see optical links and co-packaged optics as required. However, in our view the market for both copper cables (active and passive) and pluggable optics will grow strongly well into the next decade, and investors are underappreciating the scale of the opportunity that scale-up networking represents. Our conclusions are underpinned by our proprietary, bottom-up forecast that aligns our views on accelerator units with networking speeds and connectivity media, with important implications for key connectivity stocks.
Amdahl's Law implies that connectivity can bottleneck entire AI workloads: the portion of a job spent on communication is not sped up by faster accelerators and therefore limits overall speedup, making connectivity integral to full-stack optimization of both training and inference. Further, a fundamental property of all-to-all connected networks (such as a backend AI network) is that the number of edges (connections) grows quadratically with a linear increase in the number of nodes (accelerators).
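As a concrete illustration of that scaling property, the number of point-to-point links in a full mesh of n nodes is n(n-1)/2, so link count grows with the square of node count:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in an all-to-all (full mesh) fabric of n nodes."""
    return n * (n - 1) // 2

# Link count explodes as clusters scale toward 100,000+ accelerators
for n in [8, 64, 1024, 100_000]:
    print(f"{n:>7} accelerators -> {full_mesh_links(n):,} links")
```

At 100,000 accelerators a literal full mesh would need roughly five billion links, which is why real backend networks use switched topologies; the quadratic pressure on connectivity remains either way.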
The era of Generative AI (GenAI) all-to-all scaling, constrained by the realities of Amdahl's Law, is driving the most significant acceleration of demand for networking technology since the 1990s. We are initiating on datacenter connectivity and networking infrastructure due to our belief that this demand is durable, and connectivity in the age of AI is most effectively covered with a unified effort across networking and compute.
What Is Proprietary?
To analyze the puts and takes across what we identify as the key debates for networking and connectivity, we created a proprietary model that aligns our existing assumptions around AI accelerator growth with the associated networking infrastructure such a buildout would require. Against these unit volumes and speed estimates, we then estimate relative share of physical media (copper vs. optics) at each speed, then map market share assumptions to specific company revenue builds.
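A stylized version of such a bottom-up build multiplies accelerator units by ports per accelerator, splits those ports by physical medium, and applies a per-port content value. All figures below are hypothetical placeholders for illustration only, not estimates from the report:

```python
# Illustrative port-level TAM sketch; every number here is a placeholder
# assumption, not a TD Cowen estimate.
accel_units = 10_000_000                          # accelerators shipped in a year
ports_per_accel = 9                               # scale-up + scale-out ports each
media_share = {"copper": 0.55, "optics": 0.45}    # share of ports by medium
asp_per_port = {"copper": 40.0, "optics": 250.0}  # connectivity content per port, US$

total_ports = accel_units * ports_per_accel
tam = sum(total_ports * share * asp_per_port[medium]
          for medium, share in media_share.items())
print(f"Illustrative connectivity TAM: US${tam / 1e9:.1f}B")
```

Layering switching silicon, speed transitions and company-level share assumptions onto a build of this shape is what turns unit forecasts into the revenue implications discussed below.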
Financial & Industry Model Implications: A Material TAM Developing
Our model illustrates the potential for an approximately US$90 billion networking infrastructure total addressable market (TAM) to develop by 2030 across physical layer connectivity and switching, >US$75 billion of which is silicon (rather than systems). We believe this is much larger than investors contemplate. Further, we note that networking silicon will likely exceed the size of the entire server central processing unit (CPU) market by 2026 and rival the size of the entire automotive semis market by 2030.
This has clear implications for the small and mid-cap (SMID) companies that are the focus of this report, but it is also impactful for traditional "networking" companies as well as traditional "compute" companies.
What To Watch: Scale-Up Domain Size
We make many assumptions in constructing our model (and make no claims to being the arbiter of truth on any of them), but perhaps most impactful are our assumptions regarding the number of scale-up switches required per eXecution Processor Unit (XPU) and overall cluster size, both of which we expect to continue trending upward (thereby increasing the connector/XPU attach rate).
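The switch-count sensitivity can be sketched with a minimal single-tier model in which every scale-up link from an XPU consumes one switch port. The function and all example numbers are hypothetical assumptions for illustration; real topologies add tiers, rails and oversubscription choices:

```python
import math

def scale_up_switches(xpus: int, links_per_xpu: int, switch_radix: int) -> int:
    """Minimum single-tier switch count when each XPU scale-up link
    occupies one switch port (illustrative sketch only)."""
    return math.ceil(xpus * links_per_xpu / switch_radix)

# Hypothetical domain: 72 XPUs, 18 scale-up links each, 144-port switches
print(scale_up_switches(72, 18, 144))  # -> 9
```

Growing either the domain size or the links per XPU raises the required switch and connector count roughly linearly, which is why these two assumptions dominate the model's output.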
More qualitatively, we believe the dual optimization points of power draw and signal integrity for pluggable optics will be the determining factors in co-packaged optics (CPO) adoption, rather than any technical specs of co-packaged solutions themselves. If some combination of linear optics (fully linear or receive-only) and lower-power full digital signal processors (DSPs) can deliver 1.6 terabit (1.6T) links within an acceptable power envelope, that is likely to materially slow the pace of CPO deployment.
Subscribing clients can read the full report, The Cable(s) Guy: Initiating On Datacenter Connectivity - Ahead Of The Curve, on the TD One Portal