Full-stack AI inference clouds that layer managed services (inference APIs, databases, storage, networking) on bare metal generate roughly 2× the revenue per megawatt of physical capacity of pure bare-metal GPU rental operations. The gap compounds at the profit line: full-stack clouds achieve 40%+ EBITDA margins vs. 20-25% for training-focused bare-metal Neo Clouds. The leading indicator is managed services as a percentage of AI revenue; when it exceeds roughly 60-70%, it signals the platform has achieved genuine attach beyond commodity GPU access.
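The compounding of the revenue gap and the margin gap can be made concrete with a quick back-of-the-envelope calculation. The figures below are illustrative assumptions consistent with the ranges above ($10M/MW for bare metal at a 22.5% margin, the midpoint of 20-25%; 2× that revenue at a 40% margin for full stack), not reported company data:

```python
# Back-of-the-envelope: EBITDA generated per MW of power capacity.
# All inputs are illustrative assumptions, not reported figures.

def ebitda_per_mw(revenue_per_mw_musd: float, ebitda_margin: float) -> float:
    """EBITDA per MW in $M, given revenue/MW in $M and a margin fraction."""
    return revenue_per_mw_musd * ebitda_margin

bare_metal = ebitda_per_mw(10.0, 0.225)  # ~$2.25M EBITDA per MW
full_stack = ebitda_per_mw(20.0, 0.40)   # ~$8.0M EBITDA per MW

print(f"bare metal: ${bare_metal:.2f}M/MW")
print(f"full stack: ${full_stack:.2f}M/MW")
print(f"EBITDA gap: {full_stack / bare_metal:.1f}x")  # ~3.6x
```

Under these assumptions a 2× revenue gap becomes a roughly 3.6× EBITDA gap per MW, which is why the mix shift matters more than headline revenue.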
When comparing AI infrastructure companies, revenue per MW (or per GW) of power capacity is a more diagnostic metric than absolute revenue growth. A company delivering $20M/MW operates a fundamentally different business, with structurally higher margins and greater switching costs, than one delivering $10M/MW. Track three things: (1) revenue/MW as the primary unit-economic comparator; (2) managed-services % of AI revenue as the leading indicator of full-stack attach; (3) whether the mix is trending toward or away from bare metal over time. A company moving from 40% managed to 70% managed is in re-rating territory; one moving the other direction is commoditizing.