type: framework-update
tags: [asic, gpu, ai-inference, training, semiconductor, competitive-moat, jevons-paradox]
confidence: medium
created: 2026-04-01
source: NVDA stock-analysis 2026-04
persona: bert
provenance: legacy
source_analysis_path: null
source_paragraph_quote: null
source_transcript_span: null
source_loss_log_path: null

ASIC Displacement Risk Must Be Segmented: Training vs. Inference vs. Frontier

Applying a single ASIC displacement discount to blended AI compute revenue overstates competitive risk. The training market (frontier model development) is a GPU fortress with ~90% NVIDIA share — ASICs cannot match the architectural flexibility required for iterative training at scale. Inference is genuinely bifurcated: commodity inference (well-defined, stable architectures, high volume) is ASIC-favorable on cost-per-token; frontier inference (agentic, multi-modal, rapidly evolving architectures) still requires GPU flexibility. Crucially, the Jevons dynamic applies: cheaper inference from ASICs expands total inference demand faster than share is lost, so a GPU supplier may grow inference revenue in absolute terms even as its share percentage declines.
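The Jevons dynamic above reduces to a simple inequality: absolute revenue grows whenever the market's expansion multiple exceeds the inverse of the share retained. A minimal sketch, with purely illustrative figures (the market sizes, shares, and horizon are assumptions, not estimates from this note):

```python
# Jevons check: a supplier's inference revenue can grow in absolute
# terms even as its share declines, provided total demand expands
# faster than share is lost. All dollar figures are illustrative.

def revenue(total_market_bn: float, share: float) -> float:
    """Supplier revenue as its share of the total market, in $B."""
    return total_market_bn * share

# Year 0: hypothetical $50B inference market, 90% GPU share.
r0 = revenue(50.0, 0.90)   # 45.0

# Year 3: cheaper ASIC inference expands the market 5x to $250B,
# while GPU share falls to 65%.
r1 = revenue(250.0, 0.65)  # 162.5

# Share fell 25 points, yet absolute revenue more than tripled.
assert r1 > r0
```

The crossover condition is just `growth_multiple * share_later > share_now`; here 5.0 x 0.65 = 3.25 >> 0.90, so even a severe share loss is swamped by demand expansion.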

Evidence

Implication

When assessing AI compute companies (NVIDIA, and custom-ASIC plays such as Marvell and Broadcom), segment the TAM three ways: (1) training — model NVIDIA's share as durable near-term, (2) commodity inference — apply an ASIC displacement discount (20-30% share loss over 3 years), (3) frontier inference — model as GPU-sticky. Then apply Jevons demand expansion to the total inference market before netting out the share loss. A company losing share in a 5x-growing market may still grow absolute revenue faster than a company holding share in a flat market. Do not use a single blended ASIC threat factor.
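The three-segment procedure can be sketched as a small model. All TAM sizes, growth multiples, and share assumptions below are hypothetical placeholders chosen only to illustrate the mechanics (expand each segment's market first, then net the share change), not estimates from this note:

```python
# Sketch of the three-segment TAM model: training / commodity
# inference / frontier inference, with Jevons demand expansion
# applied before netting share loss. Figures are illustrative.

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    tam_now_bn: float       # current segment TAM, $B (assumed)
    growth_multiple: float  # total-market expansion over horizon (Jevons lives here)
    share_now: float
    share_later: float      # share after any ASIC displacement discount

    def revenue_now(self) -> float:
        return self.tam_now_bn * self.share_now

    def revenue_later(self) -> float:
        # Expand the market first, then apply the post-discount share.
        return self.tam_now_bn * self.growth_multiple * self.share_later

segments = [
    # Training: durable ~90% share, moderate growth.
    Segment("training", 60.0, 2.0, 0.90, 0.88),
    # Commodity inference: 20-30% share loss (25% used), strong Jevons expansion.
    Segment("commodity-inference", 30.0, 5.0, 0.80, 0.55),
    # Frontier inference: GPU-sticky, high growth.
    Segment("frontier-inference", 20.0, 4.0, 0.90, 0.85),
]

now = sum(s.revenue_now() for s in segments)      # 96.0
later = sum(s.revenue_later() for s in segments)  # 256.1

# Blended share fell, yet absolute revenue expanded materially.
assert later > now
```

Note the contrast with a single blended discount: applying one flat share-loss factor to `now` would miss that the fastest-eroding segment (commodity inference) is also the fastest-expanding one.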