Global cloud service providers (CSPs) are accelerating investment in AI servers and infrastructure to support expanding AI workloads, according to the latest TrendForce analysis of the AI server market. Combined capital expenditures by the world’s eight leading CSPs—Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu—are projected to exceed $710 billion in 2026, representing roughly 61% year-over-year growth.
In addition to continued procurement of NVIDIA and AMD GPU platforms, CSPs are increasingly investing in ASICs to optimize AI workloads and improve the cost efficiency of their data centers. Alphabet, the parent company of Google, is projected to see 2026 capital expenditure surpass $178.3 billion, up 95% YoY. Google’s early development of in-house ASICs, including a TPU roadmap advancing to the next-generation v8 platform, positions it ahead of peers. Driven by Google Cloud Platform and Gemini AI applications, TPUs are expected to account for nearly 78% of AI servers shipped to Google in 2026, making it the only CSP with more ASIC-based servers than GPU-based systems.
Amazon is scaling procurement of NVIDIA GB300 and VR200 rack-scale GPU systems to support AI training and inference workloads. GPUs are expected to represent nearly 60% of AWS’s AI server build-out in 2026. On the ASIC front, Amazon’s next-generation Trainium 3 will ramp starting 2Q26, following Trainium 2/2.5 deployments, with shipment momentum likely stronger in the second half of the year as software and system validation mature.
Meta’s 2026 CapEx is projected to exceed $124.5 billion, up 77% YoY, with AI server deployments relying primarily on NVIDIA and AMD GPUs, which are expected to account for over 80% of its build-out. While Meta aims to advance its in-house MTIA ASIC platform to reduce unit compute costs and supplier dependence, software-hardware tuning challenges may limit shipment volumes relative to initial targets.
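As a rough scale check, the prior-year spending implied by these projections can be backed out from each 2026 figure and its YoY growth rate. The sketch below is illustrative only; the 2025 baselines are computed from the cited figures, not taken from the source.

```python
# Back-of-the-envelope check: implied 2025 base = 2026 projection / (1 + YoY growth).
# Figures are the 2026 projections and growth rates cited above (in $ billions).
projections_2026 = {
    "Eight leading CSPs combined": (710.0, 0.61),
    "Alphabet (Google)": (178.3, 0.95),
    "Meta": (124.5, 0.77),
}

for name, (capex_2026_bn, yoy_growth) in projections_2026.items():
    implied_2025_bn = capex_2026_bn / (1 + yoy_growth)
    print(f"{name}: ~${implied_2025_bn:.0f}B implied 2025 base")
```

On these assumptions, the combined 2026 figure implies roughly $440 billion of spending across the eight CSPs in 2025.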
Microsoft remains focused on long-term demand for large-scale AI model training and inference, continuing procurement of NVIDIA rack-scale systems while introducing its in-house Maia 200 chip for high-efficiency AI inference. Oracle is expanding GPU rack-scale deployments to support AI data center projects related to initiatives like Stargate and OpenAI integration.
In China, ByteDance is estimated to allocate more than half of its 2026 capital expenditure to AI chip procurement, with NVIDIA’s H200 expected to play a key role alongside expanding adoption of domestic AI chips such as those from Cambricon. Tencent continues procuring NVIDIA GPUs for cloud and generative AI services while collaborating with local partners to develop in-house ASICs for networking, data centers, and AI applications.
Alibaba and Baidu are advancing proprietary ASIC development to support large-scale AI workloads. Alibaba, through T-Head and Alibaba Cloud, focuses on public cloud infrastructure and its Qwen LLMs for enterprise and consumer applications. Baidu plans to roll out next-generation Kunlun chips after 2026, alongside its Tianchi AI server cluster platform, which can link hundreds of AI chips to enhance system-level computing power.

