Platform engineering has become the new center of gravity for enterprise AI. According to new research from Platform Engineering and Vultr, nearly nine out of ten platform engineers (89%) now use AI tools daily, and three-quarters (75%) are hosting or preparing to host AI workloads. As adoption spreads, the nature of the job is changing. Platform teams are moving beyond supporting software delivery to architecting the infrastructure that makes AI development and deployment possible.
The report describes this evolution as a “dual mandate”: using AI to improve platform operations and building the AI-native environments that allow data scientists and machine-learning engineers to develop, train, and deploy models securely at scale.
From cloud-native to AI-native
Cloud-native architectures gave organizations a way to standardize and accelerate software delivery. But AI introduces new demands. Training, fine-tuning, and serving models depend on GPU-accelerated compute, high-throughput data access, and real-time orchestration across multiple environments.
The survey data highlights a maturity gap: While 40% of platform teams have extended Kubernetes to support GPU and AI workloads, over a third (35%) still don’t orchestrate these workloads. Infrastructure designed for predictable, CPU-based applications often struggles with modern AI's distributed, data-intensive patterns.
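To make that orchestration gap concrete, here is a minimal sketch of GPU-aware scheduling on Kubernetes using the official Python client. It assumes a cluster with the NVIDIA device plugin installed; the node label, namespace, and pod names are illustrative, not prescribed by the report.

```python
from kubernetes import client, config

def submit_training_pod(image: str, gpus: int = 1) -> None:
    """Schedule a single training pod onto GPU capacity."""
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-job", labels={"team": "ml"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            node_selector={"pool": "gpu-pool"},  # assumed label on the GPU node pool
            containers=[
                client.V1Container(
                    name="trainer",
                    image=image,
                    # nvidia.com/gpu is a resource exposed by the NVIDIA device
                    # plugin; the scheduler only places this pod on nodes that
                    # can satisfy the limit.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": str(gpus)}
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="ml-workloads", body=pod)
```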
Becoming AI-native means extending the principles that made cloud-native successful (automation, standardization, and product thinking) to a new class of systems. AI-native platforms need to automatically scale the right mix of compute, data, and policy controls, bringing governance into every stage of development and deployment.
As discussed in Vultr CMO Kevin Cochrane’s article, “Building AI-Native Infrastructure with Platform Engineering,” this evolution builds on existing best practices rather than replacing them. The same disciplines that shaped modern platform engineering (composability, golden paths, and automation) now form the foundation for GPU scheduling, model orchestration, and policy enforcement in AI-driven environments.
Clarifying ownership and accountability
Even as AI becomes embedded in daily workflows, many organizations still lack clarity on who’s responsible for scaling it. According to the survey, 39% of respondents say platform engineering owns AI within their organization, 25% report shared ownership, and 13% have no clear owner. That fragmentation helps explain why so many AI initiatives stall after early success. Experimentation thrives, but long-term accountability lags.
Platform teams are well-positioned to close that gap. They already manage the automation, security, and compliance frameworks on which every AI workload depends. Extending those responsibilities to include AI orchestration and governance allows them to turn scattered experiments into integrated capabilities. It also helps align executive expectations for ROI with the operational realities of deploying models safely.
Treating AI as part of the platform product, complete with defined users, service levels, and feedback loops, creates the conditions for measurable progress. Once ownership is clear, teams can focus on consistency: standardizing infrastructure blueprints, automating review processes, and embedding metrics that track AI’s impact across the software lifecycle.
Designing the AI-native foundation
The next challenge is building the technical and operational foundation that brings AI into everyday platform workflows. Beyond GPU access, AI-native platforms require environments where data pipelines, model lifecycles, and governance all operate as part of a single system.
Many organizations are still piecing this together. While 40% of teams have extended Kubernetes for GPU orchestration, fewer have standardized the surrounding workflows for training, evaluation, and deployment. As a result, AI often runs parallel to DevSecOps rather than within it.
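One way to pull model delivery inside DevSecOps rather than beside it is to gate rollouts on the same pipeline checks as any other service. The sketch below assumes a hypothetical internal model registry that serves stored evaluation metrics as JSON; the URL, metric name, and threshold are illustrative assumptions, not a specific product’s API.

```python
import json
import sys
import urllib.request

REGISTRY = "https://models.internal.example/api"  # hypothetical registry endpoint

def evaluation_passes(model: str, version: str, min_accuracy: float = 0.92) -> bool:
    """Fetch stored evaluation metrics for a model version and check the gate."""
    url = f"{REGISTRY}/{model}/{version}/metrics"
    with urllib.request.urlopen(url) as resp:
        metrics = json.load(resp)
    return metrics.get("eval_accuracy", 0.0) >= min_accuracy

if __name__ == "__main__":
    model, version = sys.argv[1], sys.argv[2]
    if not evaluation_passes(model, version):
        # A failing gate stops the pipeline the same way a failed test would.
        sys.exit(f"{model}:{version} failed its evaluation gate; blocking deploy")
    print(f"{model}:{version} passed; handing off to the standard rollout job")
```

Run as a pipeline step (for example, `python gate.py fraud-scorer v12`), the check makes model promotion subject to the same pass/fail discipline as any other deploy.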
Vultr’s whitepaper, “A Practical Guide to Platform Engineering for Generative AI,” outlines what mature integration looks like. Successful teams focus on a few consistent patterns:
- Infrastructure optimization: Tightly integrated CPU/GPU resources in regions close to users and data to support training and low-latency inference
- Model management: Centralized development environments and private registries so versions remain accessible and auditable enterprise-wide
- Data governance: Regional storage and controls that allow safe training on proprietary data and localized fine-tuning
- Observability: Monitoring, drift detection, and audit trails embedded across LLMOps/MLOps so model behavior is visible and accountable (a minimal drift-detection sketch follows this list)
- Automation & self-service: Infrastructure- and policy-as-code, CI/CD hooks, and golden paths that make provisioning and rollouts repeatable and governed by default
- Composability: Modular stacks that let platform teams swap components as needs and technologies evolve, without breaking developer workflows
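As an example of the observability pattern above, here is a minimal drift-detection sketch: a two-sample Kolmogorov-Smirnov test comparing a recent window of a production feature against its training-time baseline. The distributions, window sizes, and significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Two-sample KS test: True if live traffic likely differs from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha  # a low p-value suggests the distributions differ

# Illustrative usage: a feature whose production distribution has shifted.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=5_000)  # snapshot captured at training time
live = rng.normal(0.4, 1.0, size=5_000)      # recent production window, shifted
if has_drifted(baseline, live):
    print("Drift detected: flag the model for review and record it in the audit trail")
```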
These patterns move platform engineering beyond infrastructure delivery into operational design. The result is familiar to any platform team: consistent templates, predictable pathways, and built-in guardrails, now applied to AI workloads so they meet production standards for performance, security, and compliance.
Looking ahead: Platform engineers as AI strategists
The evolution toward AI-native platforms marks a turning point for platform engineering itself. What began as a discipline focused on developer experience is now the foundation for intelligent systems that learn, adapt, and scale across the enterprise.
According to the research, 86% of respondents believe platform engineering is essential to realizing the full business value of AI. This reflects how closely the two domains are now intertwined. As AI matures, the role of platform engineer will continue to expand. Teams will shift their attention from managing infrastructure to setting policies, measuring outcomes, and shaping how AI behaves in production. Success will depend on balancing flexibility with control and allowing developers and data scientists to experiment within guardrails that ensure reliability, security, and compliance.
AI-native infrastructure is where that balance takes shape. It transforms experimentation into sustainable practice and positions platform engineers as true AI strategists: the people building the systems that make enterprise AI work.
Read the full report: The State of AI in Platform Engineering