Before an AI workload touches production, someone on your team has to answer three questions: Where does our data reside? What regulations govern it? Can we run production AI there?
Most organizations can answer the first two. The third is where architectures quietly fail. The infrastructure most enterprises built to satisfy compliance requirements predates the compute demands of production AI. That gap is starting to show.
The first two questions have real infrastructure behind them
Data residency is documented. Regulatory obligations across GDPR, DORA, India's DPDP Rules, and Saudi Arabia's PDPL are tracked, if not always fully resolved. Legal and compliance teams have spent years building frameworks around these requirements, and most enterprises of any scale have invested meaningfully in meeting them. The audit trail exists, and the certification stack is in place.
This is not a trivial achievement. The problem is that these regulations, including AI-specific ones such as the EU AI Act, only govern where data must reside and how AI systems must behave. They say nothing about whether sufficient compute infrastructure exists in that jurisdiction to train or run inference on that data.
The third question exposes a different problem
Running production AI where regulated data resides requires:
- GPU availability at jurisdictional scale
- Compute physically adjacent to data, so inference doesn't cross borders
- Metro-level proximity between storage and workloads, not just a regional presence checkbox
- Private connectivity that keeps training traffic off public networks entirely
None of these requirements is something a regulator can provision. They follow from what AI workloads actually need in order to perform. Training a fraud detection model on billions of real-time transactions, running diagnostic AI on patient records, fine-tuning supply chain algorithms on operational data: workloads like these degrade badly when data sovereignty and compute capacity pull in opposite directions. The regulatory framework can be perfectly well-designed, and the AI workload still fails.
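To make the third question concrete, here is a minimal sketch, in Python, of what a pre-deployment gate on those four requirements could look like. Every name and threshold is illustrative (the region label, the 2 ms metro-latency budget, the GPU counts are assumptions, not drawn from any particular provider); the point is that this check belongs in the architecture review, not in a post-incident retro.

```python
from dataclasses import dataclass

@dataclass
class WorkloadPlan:
    data_jurisdiction: str        # where the regulated data must reside, e.g. "DE"
    compute_jurisdiction: str     # jurisdiction the planned compute region sits in
    storage_to_compute_ms: float  # measured round-trip latency, storage to GPU cluster
    gpus_required: int            # accelerators the workload needs
    gpus_available: int           # confirmed in-country capacity
    private_interconnect: bool    # dedicated link, no public-internet transit

MAX_LATENCY_MS = 2.0  # illustrative metro-proximity budget; tune per workload

def sovereignty_gaps(plan: WorkloadPlan) -> list[str]:
    """Return the reasons this plan fails the third question, if any."""
    gaps = []
    if plan.compute_jurisdiction != plan.data_jurisdiction:
        gaps.append("training/inference traffic would cross a jurisdictional border")
    if plan.gpus_available < plan.gpus_required:
        gaps.append("insufficient in-country GPU capacity for the workload")
    if plan.storage_to_compute_ms > MAX_LATENCY_MS:
        gaps.append("storage and compute are not metro-adjacent; latency budget exceeded")
    if not plan.private_interconnect:
        gaps.append("training traffic would transit public networks")
    return gaps

if __name__ == "__main__":
    # Hypothetical plan: data and compute in the same jurisdiction, but not enough GPUs.
    plan = WorkloadPlan(
        data_jurisdiction="DE",
        compute_jurisdiction="DE",
        storage_to_compute_ms=1.4,
        gpus_required=256,
        gpus_available=128,
        private_interconnect=True,
    )
    for gap in sovereignty_gaps(plan) or ["no gaps found for this plan"]:
        print(gap)
```

Run against a real deployment plan, a gate like this surfaces the capacity and adjacency failures before a latency threshold or a regulator does.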
How "good enough" became a liability
The sovereign cloud decisions most enterprises made two or three years ago were reasonable for the workloads they were planning at the time. The evaluation criteria reflected that reality: regional certifications, data-residency SLAs, and general network performance guarantees. Check those boxes, satisfy the auditors, move on.
The problem is that those criteria were built around storage, VMs, and application tiers, not GPU-intensive training jobs and low-latency inference pipelines. Infrastructure that passed every evaluation in 2022 may be genuinely unable to support the AI workloads an organization is trying to run in 2026.
A McKinsey survey of enterprises, providers, governments, and investors published in December 2025 found that only around 30 countries currently host in-country compute infrastructure capable of supporting advanced AI workloads, putting a hard number on what "sovereign cloud" has typically left unaddressed. And because the evaluation framework hasn't changed, the gap often remains invisible until something breaks: a latency threshold is exceeded, a regulator raises questions about cross-border inference, or a model training job simply won't perform at the scale required.
Re-architecting after the fact is expensive. Doing it under regulatory pressure is worse. The organizations that will avoid that position are the ones treating the third question as a first-order infrastructure requirement now, not a problem to revisit when the compliance team flags it.
The supply side has changed, but the evaluation criteria haven't caught up
There is a practical upside here. In-country deployment options exist today that weren't available two or three years ago. Governments across the EU, Southeast Asia, and the Gulf states have made sovereign AI infrastructure a strategic priority, funding compute capacity and, in some cases, mandating in-country AI operations.
The European Commission's AI Continent Action Plan frames large-scale compute infrastructure as a prerequisite for European competitiveness. Vietnam's Law on Artificial Intelligence, enacted in December 2025, explicitly enshrines national AI sovereignty as a legislative priority. In the Gulf, as the Middle East Institute's "From Crude to Compute" report details, sovereign wealth funds are converting energy capital into compute infrastructure positioned to serve Africa, Asia, and Europe.
The infrastructure is increasingly there. What hasn't kept pace is how enterprise buyers evaluate it. Most sovereign cloud evaluations stop at two requirements: where data lives, and what governs it. The organizations adding a third requirement, whether production AI can actually run there, are the ones that won't be re-architecting under pressure later.
Download our full analysis, "Sovereign Cloud Is Now an AI Architecture Decision," which maps the old evaluation criteria against the new ones and examines how the architecture works in practice, including how compute deploys adjacent to data with private connectivity as the performance bridge.

