GPU Export Controls
Severity: High | Status: Active
US export controls and prioritization policies restrict high-end AI GPU shipments. They affect NVIDIA, AMD, and downstream AI infrastructure globally. China restrictions and retaliatory import bans compound supply uncertainty.
Overview
The GPU export controls bottleneck stems from US government restrictions on exporting advanced graphics processing units (GPUs) optimized for AI workloads. Enacted in October 2022 and expanded in October 2023, these rules, administered by the Bureau of Industry and Security (BIS), target semiconductors with a total processing performance (TPP) above 4800, or above lower thresholds when combined with high performance density (TPP per unit die area), effectively encompassing NVIDIA's A100, H100, and H800 and AMD's MI300 series. TPP is calculated as peak throughput in TOPS (at the fastest supported precision, e.g. FP8 or FP16, counting two operations per multiply-accumulate) multiplied by the bit length of that operation, capturing aggregate compute capacity.
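The TPP arithmetic above can be sketched in a few lines (an illustrative sketch only: the 4800 threshold comes from this section, and the 312 dense-FP16-TFLOPS figure is NVIDIA's published A100 datasheet number, used here as an example):

```python
def tpp(peak_tops: float, bit_length: int) -> float:
    """Total Processing Performance: peak throughput in TOPS
    (counting two operations per multiply-accumulate, per the
    rule's convention) times the bit length of the operation."""
    return peak_tops * bit_length

TPP_THRESHOLD = 4800  # headline control threshold cited above

# Illustrative: a chip with 312 dense FP16 TFLOPS (the A100's
# datasheet figure) lands just over the threshold.
a100_tpp = tpp(312, 16)                    # 312 * 16 = 4992
print(a100_tpp, a100_tpp > TPP_THRESHOLD)  # 4992 True
```

This also shows why de-featured SKUs work: lowering peak TOPS (or restricting the fastest precisions) pulls the product back under the threshold without a new die.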
Constraints apply to direct exports, re-exports, and transfers within foreign facilities that use US-origin technology. China, Macau, and other Country Group D:1/D:5 destinations face a presumption of denial for licenses. NVIDIA's GPUs, fabricated by TSMC and packaged with its CoWoS technology, dominate roughly 80-90% of AI training clusters thanks to CUDA software ecosystem lock-in. The controls force de-featuring (e.g., the H20, with a 50-70% performance haircut) or redesigns, diverting an estimated $10-15B in annual high-end GPU shipments away from China. The result is parallel supply chains: unrestricted for allies, throttled for restricted entities, with compliance verified via end-user statements and audits.
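The licensing posture described above reduces to a simple screening rule. The sketch below is illustrative only, not legal guidance: the destination set, the presumption-of-denial posture, and the end-user verification step come from this section, while the function name and return strings are hypothetical.

```python
# Illustrative screening sketch; not legal guidance.
PRESUMED_DENIAL = {"China", "Macau"}  # plus Country Group D:1/D:5
TPP_THRESHOLD = 4800                  # headline threshold cited above

def license_posture(destination: str, chip_tpp: float,
                    in_d_group: bool = False) -> str:
    """Return the default licensing posture for a proposed export."""
    if chip_tpp < TPP_THRESHOLD:
        return "not controlled at this threshold"
    if destination in PRESUMED_DENIAL or in_d_group:
        return "license required: presumption of denial"
    return "exportable, subject to end-user statements and audits"

print(license_posture("China", 4992))
```

Note that a real determination also weighs performance density, the specific ECCN, and end-user screening; this sketch captures only the headline logic.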
Why It Matters
This bottleneck disrupts the semiconductor supply chain by bifurcating demand flows and amplifying lead-time volatility. Upstream, it reduces orders to foundries like TSMC and Samsung, which allocate ~20% of advanced-node capacity (5nm/4nm/3nm) to AI GPUs; before the controls, Chinese hyperscalers (e.g., ByteDance, Alibaba) consumed 15-20% of this output. Foundry utilization dips 5-10% in the short term, though this is offset by US/EU hyperscaler ramp-ups.
Midstream, outsourced assembly and test providers (OSATs, e.g., ASE for advanced packaging) face rebalancing, with CoWoS shortages extending lead times to 18+ months. Downstream, global AI infrastructure deployment slows: non-China markets absorb the excess (~500k H100-equivalents in 2024), but at 20-30% price premiums. Chinese AI firms, barred from buying at scale, accelerate domestic substitution (Huawei's Ascend 910B chips use SMIC's 7nm process, consuming ~10% more capacity there) but lag 12-18 months in performance.
Broader impacts include cost inflation (GPU prices up 50% year over year) and innovation friction: restricted access hampers global AI R&D parity, while retaliation (China's rare earth export curbs) risks 10-20% wafer yield hits. Supply chain resilience erodes, with GPU inventory days rising by 30-50, per Gartner data.
Key Players
Sources/Designers: NVIDIA (~90% AI GPU market share; H100/B200 on TSMC 4NP), AMD (MI300X on TSMC 5nm/6nm).
Foundry: TSMC (~80% of advanced AI silicon).
Affected: Chinese hyperscalers (Tencent, Baidu, shifting to Huawei Ascend); global OEMs (Dell, HPE, with delayed rack shipments).
Beneficiaries: US hyperscalers (Microsoft Azure, absorbing ~40% of H100s); domestic Chinese designers (Biren BR100, Moore Threads, gaining 10-15% share).
Regulators: US BIS (enforcer), China's MOFCOM (retaliator).
Enablers: ASML (EUV tools indirectly limited via allied controls), Samsung (alternative foundry).
Relationships: the NVIDIA-TSMC dyad controls ~70% of the value chain; controls force 'China-lite' SKUs, benefiting TSMC's Arizona fab ramp.
Current Status
The bottleneck persists and may intensify. As of Q2 2024, BIS has added 140+ Chinese entities to the Entity List, denying H100/H20 licenses; NVIDIA's China data center revenue fell to under 5% of the total, from 25%. HBM3 supply (SK Hynix, Micron) remains tight, with AI demand outstripping supply by roughly 2x. Easing factors: NVIDIA's B20/B200 Blackwell series (TSMC 4NP, H2 2024) skirts some TPP thresholds via architecture tweaks; validated end-users like Tencent receive limited H20 quotas (~10k units).
Worsening trends: the Biden administration's May 2024 rules cap total chip clusters at 100k GPUs per entity in China, and allies (the Netherlands, Japan) are aligning on wafer fab equipment controls. China's countermeasures include expanded critical mineral export bans and ~$2.5B in domestic GPU subsidies. No full easing is in sight; lead times hold at 9-12 months. TSMC reports 20% of capacity booked for AI through 2025.
Last verified: 4/4/2026
Severity Assessment
This constraint is significantly impacting supply and requires attention.