Applying a single ASIC displacement discount to blended AI compute revenue overstates competitive risk. The training market (frontier model development) is a GPU fortress with ~90% NVIDIA share — ASICs cannot match the architectural flexibility required for iterative training at scale. Inference is genuinely bifurcated: commodity inference (well-defined, stable architectures, high volume) is ASIC-favorable on cost-per-token; frontier inference (agentic, multi-modal, rapidly evolving architectures) still requires GPU flexibility. Crucially, the Jevons dynamic applies: cheaper inference from ASICs expands total inference demand faster than share is lost, so a GPU supplier may grow inference revenue in absolute terms even as its share percentage declines.
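The Jevons claim above is an arithmetic one, so a minimal sketch makes it concrete. All figures here are hypothetical round numbers chosen for illustration, not estimates of any actual market:

```python
# Hypothetical illustration of the Jevons dynamic: a GPU supplier losing
# inference share while the total inference market expands.
base_market = 50.0   # $B, total inference TAM today (hypothetical)
base_share = 0.80    # supplier's current inference share (hypothetical)
expansion = 3.0      # Jevons effect: cheaper tokens triple total demand (assumed)
new_share = 0.60     # share after ASIC displacement (assumed)

rev_before = base_market * base_share            # 80% of $50B = $40B
rev_after = base_market * expansion * new_share  # 60% of $150B = $90B

print(rev_before, rev_after)  # 40.0 90.0
```

Share falls 20 points, yet absolute inference revenue more than doubles: the expansion term dominates the displacement term whenever the market multiple exceeds the ratio of old share to new share.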
When assessing AI compute companies (NVIDIA on the GPU side, or custom-ASIC plays such as Marvell and Broadcom), segment the TAM three ways: (1) training — model NVIDIA's share as durable near-term; (2) commodity inference — apply an ASIC displacement discount (20-30% share loss over 3 years); (3) frontier inference — model as GPU-sticky. Then apply Jevons demand expansion to the total inference market before netting out the share loss. A company losing share in a market growing 5x may still grow absolute revenue faster than a company holding share in a flat market. Do not use a single blended ASIC threat factor.
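The segmented procedure can be sketched directly. The function name and every input below are hypothetical placeholders (illustrative $B figures and assumed shares, displacement, and Jevons multiplier), chosen only to show why a single blended discount understates projected revenue relative to the three-way split:

```python
def project_gpu_revenue(
    training_tam, training_share,                        # (1) training: share durable
    commodity_tam, commodity_share, asic_displacement,   # (2) commodity inference
    frontier_tam, frontier_share,                        # (3) frontier inference: GPU-sticky
    jevons_multiplier,                                   # demand expansion on inference only
):
    """Segment revenue three ways; apply Jevons expansion to inference
    before netting the ASIC share loss. All inputs hypothetical."""
    training = training_tam * training_share
    commodity = (commodity_tam * jevons_multiplier
                 * commodity_share * (1 - asic_displacement))
    frontier = frontier_tam * jevons_multiplier * frontier_share
    return training + commodity + frontier

# Segmented view (hypothetical $B TAMs: training 60, commodity inf 30, frontier inf 20;
# assumed shares 90%/80%/85%, 25% ASIC displacement, 2x Jevons expansion):
segmented = project_gpu_revenue(60, 0.90, 30, 0.80, 0.25, 20, 0.85, 2.0)

# Blended view: the same 25% ASIC discount applied to total revenue,
# with no segmentation and no Jevons expansion.
blended = (60 * 0.90 + 30 * 0.80 + 20 * 0.85) * (1 - 0.25)

print(segmented, blended)  # 124.0 71.25
```

Under these assumptions the blended single-factor approach projects $71B against $124B for the segmented model — the displacement discount leaks onto training and frontier inference where it does not apply, and the Jevons offset is dropped entirely, which is exactly the overstatement of competitive risk the note describes.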