On December 30, 2025, China’s Ministry of Industry and Information Technology (MIIT) released guidance on accelerating the development of national “new‑type internet exchange centers.” The policy calls for broad interconnection of general data centers, intelligent computing centers and supercomputing centers, and specifically promotes deploying artificial intelligence, 400G/800G optical transport, privacy computing and quantum communication technologies in these hubs.
This article aggregates reporting from 5 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
MIIT’s guidance looks dry on the surface, but it’s essentially a blueprint for China’s next‑generation AI plumbing. By pushing general data centers, intelligent computing centers and supercomputers to plug into neutral internet exchange hubs, Beijing is trying to reduce latency and bandwidth bottlenecks between where data lives and where AI models run. The explicit call‑out of AI, 400G/800G optical transport, privacy computing and even quantum communication inside these exchanges shows how seriously China takes the infrastructure race underpinning advanced AI.
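To see why the optics matter, here is a rough back‑of‑envelope sketch in Python. The corpus size, link efficiency and link speeds are illustrative assumptions, not figures from the MIIT guidance; the point is simply how much faster bulk data movement between centers gets as exchange links move from 100G to 400G to 800G.

```python
# Back-of-envelope estimate: time to move a training corpus between
# data centers over links of different speeds. All figures below are
# illustrative assumptions, not values from the MIIT guidance.

def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move `dataset_tb` terabytes over a `link_gbps` link,
    assuming only `efficiency` of the raw line rate is usable."""
    bits = dataset_tb * 8e12               # TB -> bits (decimal terabytes)
    usable_bps = link_gbps * 1e9 * efficiency
    return bits / usable_bps / 3600

if __name__ == "__main__":
    corpus_tb = 2_000  # hypothetical 2 PB training corpus
    for gbps in (100, 400, 800):
        print(f"{gbps}G link: {transfer_hours(corpus_tb, gbps):,.1f} hours")
```

Under these assumptions, the same 2 PB corpus takes roughly 64 hours over 100G, 16 hours over 400G and 8 hours over 800G, which is the difference between a weekly batch job and something you can do overnight.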
In practice, this policy will make it easier to stand up large, cross‑regional AI workloads — from foundation‑model training to nationwide inference services — without every company having to negotiate bespoke peering and connectivity. It’s a way of socializing some of the hardest parts of AI scale‑up: fast, cheap, secure data movement between clouds, hyperscale compute clusters and emerging “intelligent computing centers.” That nudges China closer to the kind of dense, AI‑ready interconnect fabric the US is trying to build around its own cloud regions and national labs.
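The economics of not negotiating bespoke peering come down to simple combinatorics: a full mesh of bilateral links grows quadratically with the number of participants, while a neutral exchange needs only one port per participant. A minimal sketch, with hypothetical participant counts:

```python
# Why a neutral exchange reduces connectivity overhead: with bilateral
# peering every pair of participants needs its own link, while through
# an exchange each participant needs one connection to the shared fabric.
# Participant counts are hypothetical.

def bilateral_links(n: int) -> int:
    """Full-mesh bilateral peering: one link per pair of participants."""
    return n * (n - 1) // 2

def exchange_ports(n: int) -> int:
    """Exchange model: one connection per participant to the fabric."""
    return n

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"{n:>3} participants: {bilateral_links(n):>6} bilateral links "
              f"vs {exchange_ports(n)} exchange ports")
```

At 200 participants that is roughly 19,900 bilateral links versus 200 exchange ports, which is why exchange hubs scale where pairwise deals do not.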
For competitors, the message is clear: China isn’t just funding models and chips; it’s standardizing the network fabric that will let those resources be shared and orchestrated efficiently. That matters for AGI because the winners will be those who can marshal massive, distributed compute and data as if it were one coherent system, not just those with the biggest single data center.