type: insight
tags: [ai-infrastructure, inference, revenue-per-mw, managed-services, bare-metal, unit-economics, neocloud]
confidence: medium
created: 2026-03-25
source: DOCN stock-analysis 2026-03
persona: bert
provenance: legacy
source_analysis_path: null
source_paragraph_quote: null
source_transcript_span: null
source_loss_log_path: null

Revenue per Megawatt as the Unit-Economic Tier Differentiator in AI Inference Infrastructure

Full-stack AI inference clouds that layer managed services (inference APIs, databases, storage, networking) on top of bare metal generate roughly 2× the revenue per megawatt of physical capacity versus pure bare-metal GPU rental operations. This gap flows directly through to EBITDA margins: full-stack clouds achieve 40%+ EBITDA margins vs. 20-25% for training-focused bare-metal neoclouds. The leading indicator is managed services as a percentage of AI revenue — when this exceeds ~60-70%, it signals the platform has achieved genuine attach beyond commodity GPU access.
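The 2× gap above is just a ratio of annual revenue to contracted power capacity. A minimal sketch of the arithmetic, using purely hypothetical figures (no actual company data) to show how the full-stack vs. bare-metal comparison works:

```python
# Revenue per MW of power capacity: the unit-economic comparator from the note.
# All figures below are hypothetical, chosen only to illustrate the ~2x gap.

def revenue_per_mw(annual_revenue_usd: float, capacity_mw: float) -> float:
    """Annual revenue divided by contracted power capacity in megawatts."""
    return annual_revenue_usd / capacity_mw

# Hypothetical full-stack inference cloud: $600M revenue on 30 MW.
full_stack = revenue_per_mw(600e6, 30)   # -> $20M per MW
# Hypothetical bare-metal GPU renter: $300M revenue on the same 30 MW.
bare_metal = revenue_per_mw(300e6, 30)   # -> $10M per MW

print(full_stack / bare_metal)  # ratio of the two, ~2x in this sketch
```

Holding capacity fixed isolates the software-attach effect: the same megawatts monetized twice as hard, which is what lets the full-stack operator carry structurally higher EBITDA margins.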

Evidence

Implication

When comparing AI infrastructure companies, revenue per MW (or revenue per GW of power capacity) is a more diagnostic metric than absolute revenue growth. A company delivering $20M/MW operates a fundamentally different business — with structurally higher margins and greater switching costs — than one delivering $10M/MW. Track: (1) revenue/MW as the primary unit-economic comparator; (2) managed-services % of AI revenue as the leading indicator of full-stack attach; (3) whether the mix is trending toward or away from bare metal over time. A company moving from 40% managed to 70% managed is re-rating territory; one moving the other direction is commoditizing.
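The three tracking signals above can be sketched as a simple screen. The function names, the 65% threshold (a midpoint of the note's ~60-70% band), and the classification labels are all illustrative assumptions, not an established methodology:

```python
# Hypothetical screen for the three signals in the note:
# revenue/MW level, managed-services mix, and the direction of the mix trend.

def managed_mix(managed_rev: float, total_ai_rev: float) -> float:
    """Managed-services share of AI revenue (the leading attach indicator)."""
    return managed_rev / total_ai_rev

def classify_attach(mix_then: float, mix_now: float,
                    threshold: float = 0.65) -> str:
    """Classify the mix trend; 0.65 is an assumed midpoint of the ~60-70% band."""
    if mix_now >= threshold and mix_now > mix_then:
        return "re-rating: full-stack attach"
    if mix_now < mix_then:
        return "commoditizing: drifting toward bare metal"
    return "watch: no clear attach yet"

# A company moving from 40% managed to 70% managed:
print(classify_attach(0.40, 0.70))
# One moving the other direction:
print(classify_attach(0.70, 0.45))
```

Pairing the mix trend with the revenue/MW level guards against false positives: a rising managed percentage on a shrinking revenue base is not the same signal as genuine attach.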