TrendForce reports that Google is taking a major step forward in AI data-center networking with a new high-speed interconnect architecture built around its next-generation Ironwood TPU. The design combines a 3D Torus network topology with the Apollo optical circuit switch (OCS) all-optical network, a shift aimed at handling the explosive compute and bandwidth demands created by large-scale AI workloads. According to the research firm, this approach will significantly reshape how AI clusters are interconnected at scale.
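The wiring pattern of a 3D torus can be illustrated with a short sketch. In this topology each node connects to its immediate neighbor along each of three axes, and the last node on an axis wraps back around to the first. The dimensions below are illustrative only, not Google's actual cluster configuration:

```python
def torus_neighbors(x, y, z, dims=(4, 4, 4)):
    """Return the six wraparound neighbors of node (x, y, z) in a 3D torus.

    Each node links to the adjacent node along the +/- direction of each
    axis; the modulo gives the wraparound at the edges. The dims here are
    illustrative, not a real TPU pod size.
    """
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]
```

The wraparound links are what distinguish a torus from a plain 3D mesh: every node has exactly six neighbors, so traffic never has to route around a "dead end" at the edge of the fabric.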
As a result of these architectural changes, TrendForce forecasts a sharp rise in demand for ultra-high-speed optics. The global shipment share of 800G and above optical transceiver modules is projected to surge from 19.5% in 2024 to more than 60% by 2026, effectively making them standard in AI-centric data centers. With Google expected to ship nearly 4 million TPUs in 2026, demand for 800G and 1.6T optical modules tied to its infrastructure alone could surpass 6 million units.
Within this OCS-enabled framework, Ironwood TPUs use high-speed copper links for short-distance connections, while the all-optical network manages inter-rack data traffic. This hybrid approach allows AI clusters to be designed from the outset with large volumes of 800G and 1.6T optical modules, ensuring high throughput while maintaining architectural flexibility as bandwidth requirements grow.
Energy efficiency and long-term cost control are key advantages of the Apollo OCS system. TrendForce highlights that the switch relies on micro-electromechanical systems (MEMS) micro-mirrors to create direct fiber-to-fiber connections, avoiding repeated optical-electrical-optical conversions that add latency and power draw. A single OCS switch consumes about 100 watts, roughly 97% less than traditional electrical switches that can draw around 3,000 watts. Future bandwidth upgrades, such as moving from 800G to 1.6T, can be achieved by replacing optical modules rather than overhauling entire systems, lowering upgrade costs.
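The power comparison is a one-line calculation using the figures cited above (100 W for an OCS versus roughly 3,000 W for an electrical switch):

```python
def power_reduction(ocs_watts=100.0, electrical_watts=3000.0):
    """Fractional power reduction of an OCS versus an electrical switch.

    Default figures are the ones cited in the article; actual draw
    varies by model and configuration.
    """
    return 1.0 - ocs_watts / electrical_watts

# 1 - 100/3000 = 0.9666..., i.e. a reduction of roughly 97%.
```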
On the supply side, TrendForce expects Innolight—working closely with Google on silicon photonics and 1.6T platforms—along with Eoptolink, to capture nearly 80% of Google’s 800G-plus optical module orders. Lumentum is positioned as a key supplier of OCS systems and MEMS components, with its production capacity likely to influence how quickly Apollo OCS deployments scale.
TrendForce concludes that as compute density rises, data traffic between racks and across clusters will continue to grow rapidly. In this environment, progress in high-speed optical modules, lasers, and related components will be just as critical as advances in GPUs and memory in determining the speed and economics of future AI infrastructure expansion.

