By Chris Sharp, CTO, Digital Realty and Kevin Cochrane, CMO, Vultr
For the past five years, sovereign cloud has been a compliance conversation. Enterprises asked, "Is it sovereign?" and vendors responded with regional deployments, certifications, and data residency guarantees. Check the box and move on.
That era is ending.
In 2026, enterprises are discovering that "sovereign cloud" as a product category doesn't answer the question they're actually asking: How do I run AI on data that can't move? The conversation is shifting from "Is it sovereign?" to "What can I actually do with sovereign infrastructure?"
The shift isn't semantic. Sovereignty used to mean where you store data at rest. Now it means whether you can run compute-intensive AI workloads where data physically lives, at the performance levels AI demands: training a fraud detection model on financial transactions; running diagnostic AI on patient records; fine-tuning supply chain algorithms on operational data. These workloads require physical infrastructure: proximity between data and compute, private connectivity that eliminates cross-border compliance risk, and architecture that supports both sovereignty and scale.
The question isn't if you need sovereignty. It's what you can build when sovereignty and performance coexist.
The catalyst: Data location becomes non-negotiable
AI training and inference require petabytes of data that enterprises can't practically move. The economics create immediate friction.
Moving petabyte-scale datasets costs tens of thousands of dollars in cloud egress fees alone – and that's for a single transfer. Add months-long migration timelines and cross-border compliance violations, and the friction becomes prohibitive. For enterprises running continuous training cycles or fine-tuning models on fresh data, these costs compound with every iteration.
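A back-of-envelope calculation makes the friction concrete. The figures below are illustrative assumptions rather than quoted vendor prices: an egress rate of $0.05 per GB and a sustained 10 Gbps link.

```python
# Illustrative economics of moving a 1 PB training dataset.
# Assumed figures (not quoted vendor prices): $0.05/GB egress,
# a sustained 10 Gbps link, and monthly retraining refreshes.

DATASET_GB = 1_000_000       # 1 PB expressed in GB
EGRESS_USD_PER_GB = 0.05     # assumed egress rate
LINK_GBPS = 10               # assumed sustained throughput

egress_cost = DATASET_GB * EGRESS_USD_PER_GB
transfer_days = DATASET_GB * 8 / LINK_GBPS / 86_400  # GB -> Gb -> s -> days

print(f"Egress cost per transfer: ${egress_cost:,.0f}")       # ~$50,000
print(f"Transfer time, best case: {transfer_days:.1f} days")  # ~9.3 days

# Continuous training compounds the bill: a monthly refresh of the
# same dataset approaches $600,000 per year in egress fees alone.
print(f"Annual egress, monthly refresh: ${egress_cost * 12:,.0f}")
```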
Cost matters. Performance matters more.
Real-time fraud detection models need sub-millisecond access to transaction data. Diagnostic AI requires constant availability of patient records. Supply chain optimization depends on operational datasets refreshed in real time. Data Gravity – where data attracts applications and services – becomes absolute when workloads demand petabyte-scale access at training speeds.
The market validates this constraint. Gartner projects the sovereign cloud IaaS market will grow at a 36% compound annual growth rate to $169 billion by 2028. Regulatory mandates don't explain that trajectory. Enterprises are discovering they need compute where their data lives, and sovereignty provides the architecture that makes it possible.
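For a sense of scale, the projection's arithmetic is worth unpacking. The sketch below backs out the market size that figure implies; the base year is our assumption, since the projection's starting point isn't stated here.

```python
# What a 36% CAGR to $169B in 2028 implies about the market today.
# The projection's base year isn't stated above, so we show a few
# candidate base years rather than asserting one.
TARGET_B, CAGR, TARGET_YEAR = 169.0, 0.36, 2028

for base_year in (2023, 2024, 2025):
    implied = TARGET_B / (1 + CAGR) ** (TARGET_YEAR - base_year)
    print(f"Implied {base_year} market size: ${implied:.0f}B")

# At 36% a year, the market more than doubles roughly every 2.3 years;
# compliance box-checking alone rarely sustains growth like that.
```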
Where sovereignty becomes operational: Two industries
Financial services and life sciences illustrate this evolution. Both face strict regulatory frameworks, but their sovereign cloud investments are driven by operational realities: AI workloads that require real-time access to data that cannot legally be moved.
Financial services
The Digital Operational Resilience Act (DORA) took effect across the EU in January 2025, mandating operational resilience and third-party ICT oversight for financial institutions. The regulatory pressure is real: 75% of global CROs now cite cybersecurity as their top concern, according to the EY/IIF Global Bank Risk Management Survey. Cumulative GDPR fines across Europe have exceeded €5.6 billion since enforcement began, with regulators increasingly targeting financial services alongside technology and telecommunications sectors.
Meeting these compliance frameworks requires infrastructure investment. AWS committed €7.8 billion to a European Sovereign Cloud launching in Germany by late 2025. Microsoft and Google have made similar sovereign cloud commitments across Europe. However, compliance alone doesn't justify billions in dedicated infrastructure.
The operational driver is AI workloads that can't tolerate the latency, cost, or compliance risk of moving data. In practice, this shows up across several financial use cases:
Banks
- Train fraud detection models on billions of real-time transactions.
- Continuously retrain as fraud patterns evolve.
Risk management systems
- Analyze customer portfolios across multiple jurisdictions.
- Require sub-second query times against datasets that legally can't cross borders.
Algorithmic trading
- Operates at microsecond latency.
- Cannot tolerate the delays of a transatlantic round trip.
These aren't batch-processing jobs that can wait for data migration. They're continuous training pipelines, real-time inference systems, and model fine-tuning operations that demand compute physically adjacent to petabyte-scale datasets.
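The physics behind the transatlantic constraint is easy to verify. A minimal sketch, assuming light propagates through fiber at roughly 200,000 km/s over a ~5,500 km great-circle path (real fiber routes run longer):

```python
# Minimum transatlantic delay from signal propagation alone,
# before any routing, queuing, or processing overhead is added.
# Assumptions: light in fiber ~200,000 km/s; ~5,500 km great-circle
# path (roughly Frankfurt to New York); real routes are longer.

FIBER_KM_PER_S = 200_000
DISTANCE_KM = 5_500

one_way_ms = DISTANCE_KM / FIBER_KM_PER_S * 1_000
round_trip_ms = 2 * one_way_ms

print(f"One-way propagation: {one_way_ms:.1f} ms")    # ~27.5 ms
print(f"Round trip, minimum: {round_trip_ms:.1f} ms") # ~55 ms

# A system budgeted in microseconds overruns its entire budget by a
# factor of tens of thousands on propagation delay alone.
```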
The question shifts from "Where do we store data to meet DORA requirements?" to "Can we run production AI workloads where our data legally must reside?"
Life sciences
Large healthcare and life sciences enterprises cite data privacy and sovereignty as their primary barrier to AI adoption, according to NVIDIA's 2025 industry survey. HIPAA compliance sets the baseline, but the operational constraint is training diagnostic AI on patient data that can't leave hospital infrastructure. Clinical trial datasets, electronic health records, and genomic data accumulate into petabytes that are both legally restricted and practically immovable.
Drug discovery algorithms, diagnostic imaging models, and personalized medicine platforms all require sovereignty and performance simultaneously. Training a radiology AI on years of patient scans means running compute where those images physically exist and ensuring they never traverse uncontrolled networks.
These aren't edge cases. Financial services and life sciences face what every regulated industry with AI ambitions will confront – data that can't move, workloads that can't wait, and compliance frameworks that don't accommodate either.
The infrastructure answer: Physical proximity and private connectivity
Traditional cloud architectures force a choice: sacrifice performance for sovereignty or accept vendor lock-in for scale. That tradeoff assumes data and compute must remain separated.
Physical infrastructure architecture means you don’t have to choose. Colocation adjacent to cloud regions puts AI compute where enterprise data already resides, no petabyte migrations required. Private, deterministic connectivity ensures data never traverses public networks while maintaining the sub-millisecond latency AI workloads demand. Training pipelines and inference systems operate as if data and compute share the same facility, because functionally, they do.
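The same propagation math as the trading example shows why adjacency closes the gap. A sketch with illustrative distances (the ~50 km metro-area figure for a colocation facility adjacent to a cloud region is an assumption):

```python
# Fiber round-trip time at different scales: why compute deployed
# adjacent to data achieves sub-millisecond latency that distant
# regions cannot. Distances are illustrative assumptions.

FIBER_KM_PER_S = 200_000  # approximate speed of light in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time from propagation delay alone."""
    return 2 * distance_km / FIBER_KM_PER_S * 1_000

for label, km in [("same facility", 1),
                  ("metro-area colocation", 50),
                  ("cross-country region", 4_000),
                  ("transatlantic region", 5_500)]:
    print(f"{label:>22}: {min_round_trip_ms(km):7.3f} ms RTT")

# Output (approximate):
#          same facility:   0.010 ms RTT
#  metro-area colocation:   0.500 ms RTT  (sub-millisecond, as claimed)
#   cross-country region:  40.000 ms RTT
#   transatlantic region:  55.000 ms RTT
```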
This matters because enterprise data subject to sovereignty requirements – such as financial records, patient data, and operational systems – typically resides in on-premises or colocation facilities, not in centralized public clouds. Sovereignty starts where that data lives, not where cloud providers want to centralize it. The architecture inverts the traditional model: Instead of moving data to compute, compute deploys adjacent to data with private interconnection providing the performance bridge.
The Vultr and Digital Realty partnership applies this model across a global colocation footprint spanning more than 300 data centers on six continents, representing the world's largest network of interconnected data centers purpose-built for AI infrastructure.
ServiceFabric® provides the software-defined interconnection layer, enabling enterprises to orchestrate private connectivity between their data environments and on-demand cloud GPU infrastructure with deterministic performance. Digital Realty’s Private AI Exchange (AIPx) complements the architecture by creating dedicated, high-bandwidth pathways specifically optimized for the unique traffic patterns of AI training and inference workloads. Critically, this environment delivers both sovereignty and AI performance without hyperscaler dependency.
2026 and beyond
Sovereign cloud ceases to be a product category in 2026. Though not yet a regulatory requirement, it becomes an architectural mandate: the infrastructure layer that lets enterprises run AI where their data already lives.
CIOs face three questions:
- Where does our data reside?
- What regulations govern it?
- Can we run production AI workloads there?
The third question only has a workable answer when the infrastructure supports it. Colocation adjacent to data, private connectivity between storage and compute, on-demand GPU capacity – these aren't nice-to-haves. They're the foundation for AI workloads that can't compromise on sovereignty or performance.
The question shifts from whether you need sovereignty to what you can build with it.
Learn more about how enterprises are rethinking sovereign infrastructure for AI workloads.

