10 October, 2025

How LiquidMetal AI Uses Vultr to Support Efficient Inference

By providing cloud-native infrastructure built for easy deployment of AI applications, LiquidMetal AI is on a mission to remove the DevOps burden from AI deployment. To do so, it needs a partner that can provide reliable inference throughput at transparent pricing and with global reach.

That’s why it turned to Vultr. Because of LiquidMetal AI’s user base — its Raindrop platform is especially suited to developers and startups, who often operate with lean engineering headcounts — the company needs scalable infrastructure with the efficiency to match. LiquidMetal AI found that in Vultr, with inference throughput above 150 RPS per customer and low latency worldwide.

Read more about how Vultr delivers for LiquidMetal AI in its customer case study.

“Other providers promised high-performance AI inference, but consistently fell short,” said Geno Valente, head of Go-to-Market and Engineering at LiquidMetal AI. “Vultr not only met our 150+ RPS target per customer, they did it with transparency and reliability.”

LiquidMetal AI uses AMD Instinct™ MI325X GPUs on Vultr as well as edge-integrated architecture for real-time AI workloads. With up to 8 GPUs per host for scaling inference workloads, stronger throughput for mid- to large-scale models, and reduced batch inference cost, LiquidMetal AI has realized:

  • 20-30% lower cost-per-token on common inference workloads
  • Faster time-to-market with immediate GPU availability
  • Multi-cloud flexibility and reliable disaster recovery with Vultr’s global footprint

When other cloud providers lacked GPU availability and couldn’t support inference above 50 RPS per model, Vultr came through for LiquidMetal AI, enabling the company to launch Raindrop on schedule and with the consistent performance to support its users.

“Vultr gives us exactly that reliability and performance at a scale we can trust,” Valente said. “When you’re supporting everything from chatbots to physical AI systems, that kind of consistency isn’t optional; it’s foundational.”

Dive deeper into the case study to learn how LiquidMetal AI trusts Vultr to meet its edge inference demands and support both its own users and the customers they serve.
