
Kubernetes vs. Virtual Machines on OpenStack (Where Each One Wins)

Ruchi Roy

Not every workload belongs on Kubernetes. See where VMs still shine, how OpenStack supports both, and why Atmosphere simplifies multi-modal ops.

The choice between Kubernetes and virtual machines is often oversimplified. Teams aren’t really debating whether to go “cloud-native” or “legacy.”

In production environments, it’s never a binary choice between Kubernetes and VMs; it’s both.

Monolithic apps that need guaranteed CPU pinning run alongside containerized microservices that need to scale on demand.

So, the real question isn’t “which is better?” but “which solves the specific problem in front of you, today, at scale, under load, with real users?”

We’ll get into the tradeoffs between running containers on Kubernetes and workloads on Nova-based VMs, the types of infrastructure challenges each solves, where they fall short, and how OpenStack (especially when deployed with Atmosphere) gives teams the flexibility to run both cleanly.

The underlying difference 

At a systems level, Kubernetes abstracts at the application layer; Nova and virtual machines abstract the infrastructure or hardware layer. And that distinction shapes everything from deployment cadence to performance tuning. 

When teams use Kubernetes, they want portability, fast deploy cycles, and horizontal scale, usually with stateless services. These workloads are typically built around cloud-native assumptions: ephemeral instances, declarative configuration, and recovery by rescheduling rather than by repairing a specific machine.

But the moment workloads become stateful, hardware-aware, or vertically scaled, that clean abstraction starts to bend. In scenarios where startup time isn’t the bottleneck, but sustained performance is critical, such as high-performance databases, machine learning training jobs with large GPU demands, or specialized workloads, Kubernetes’ abstraction can become a limitation. 

This is where virtual machines offer something Kubernetes doesn’t: consistent, predictable execution environments where performance tuning and deep isolation matter more than orchestration.

Containers don’t solve every problem (and that’s fine!) 

Most infrastructure is inherited and much of it doesn’t cleanly map to Kubernetes-native models. We now have a mix of legacy systems and modern applications, all expected to coexist. Not everything benefits from containerization. Some things break. 

VMs offer a simpler path when orchestration overhead adds more fragility than flexibility. They allow for strong isolation, more predictable IOPS, and better support for kernel tuning without complex container workarounds. 

So, for a huge category of workloads, VMs remain the most operationally efficient and reliable choice.  

The role of OpenStack in unifying the stack

OpenStack has a reputation for being a VM-first platform, but that view ignores its broader capabilities in orchestrating storage, networking, and even Kubernetes itself.

  • Nova gives direct control over compute resources, including bare metal, CPU pinning, NUMA-aware scheduling, and more, all critical for workloads that need low latency or predictable performance. 
  • Cinder + Ceph provide durable block storage for both VMs and Kubernetes PVs. 
  • Magnum provisions CNCF-conformant Kubernetes clusters with integration into Keystone for multi-tenancy, Cinder for volumes, and Octavia for load balancers. 
  • Neutron handles complex virtual networking, making it easier to support overlapping IP spaces, floating IPs, and hybrid network topologies. 
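To make the Nova side of this concrete, performance guarantees like CPU pinning and NUMA placement are requested through flavor extra specs. The sketch below, as a Python dict, uses the standard Nova extra spec keys; the flavor name and sizes are hypothetical:

```python
# Sketch: Nova expresses performance guarantees as flavor "extra specs".
# The keys below are standard Nova extra specs; the flavor name and
# sizing values are invented for illustration.

pinned_flavor = {
    "name": "perf.pinned.8c32g",  # hypothetical flavor name
    "vcpus": 8,
    "ram_mb": 32768,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",  # pin each guest vCPU to a host pCPU
        "hw:numa_nodes": "1",          # keep the guest on a single NUMA node
        "hw:mem_page_size": "large",   # back guest memory with huge pages
    },
}

def is_pinned(flavor: dict) -> bool:
    """True if the flavor requests dedicated (pinned) CPUs."""
    return flavor.get("extra_specs", {}).get("hw:cpu_policy") == "dedicated"
```

A scheduler-aware operator would attach specs like these to a dedicated flavor so latency-sensitive workloads never share cores with noisy neighbors.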

This means operators get a unified control plane with a consistent set of authentication policies, one quota system, and one networking fabric, all without forcing them into a single abstraction. 

A quick word on KubeVirt 

KubeVirt tries to solve the VM vs. container split by running VMs as pods inside Kubernetes. It’s appealing in theory: one scheduler, one API, one management surface. But in practice, it’s an abstraction on top of an abstraction, and that brings limits.

KubeVirt can’t match the full capabilities of Nova in bare-metal or NUMA-aware scheduling. You lose deep performance tuning, mature driver support, and features like live migration or I/O bandwidth guarantees. It’s workable for some transient VM workloads, but it’s not a great fit for tightly coupled or performance-sensitive deployments. 

Atmosphere takes a cleaner approach: run each workload on the right abstraction, then stitch them together with shared identity, storage, and networking. 

Where things usually break (and what to look for) 

The biggest failure mode isn’t "choosing the wrong thing." It’s choosing one model and forcing every workload to conform to it. 

Teams running legacy software often get pushed into Kubernetes prematurely, then spend months containerizing apps that gain nothing from it. Meanwhile, teams that are fully cloud-native sometimes get stuck with VM-based workflows because it’s "how the infra team does it." 

This misalignment between workloads and infrastructure models results in inefficiencies, frustration, and slower delivery cycles. 

What solves this? 

Infrastructure that’s designed to support both models (VMs and Kubernetes), paired with platform teams that are empowered to make workload-specific decisions pragmatically, based on technical needs rather than rigid tooling preferences. For example: 

  • "This app needs node affinity and GPU passthrough."

Spin it up on Nova.

  • "This API is stateless and built for horizontal scale."

Put it on Kubernetes.

  • "This batch job doesn’t need a long-lived machine."

Use K8s Jobs on a preemptible node pool.
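The rules of thumb above can be sketched as a tiny placement helper. This is a minimal illustration, not a real scheduler; the trait names and return labels are invented:

```python
# Sketch: workload traits mapped to an abstraction, mirroring the
# rules of thumb above. Trait names and labels are hypothetical.

def place(workload: dict) -> str:
    """Pick an abstraction for a workload based on its traits."""
    if workload.get("gpu_passthrough") or workload.get("node_affinity"):
        return "nova-vm"                # hardware-aware: give it a VM
    if workload.get("stateless") and workload.get("horizontal_scale"):
        return "kubernetes-deployment"  # cloud-native: orchestrate it
    if workload.get("batch") and not workload.get("long_lived"):
        return "kubernetes-job"         # short-lived: run it as a K8s Job
    return "review"                     # no clear fit: decide case by case

print(place({"gpu_passthrough": True}))                      # nova-vm
print(place({"stateless": True, "horizontal_scale": True}))  # kubernetes-deployment
print(place({"batch": True}))                                # kubernetes-job
```

The point isn’t the code; it’s that the decision logic is explicit and lives with the platform team, not hard-wired into one orchestration model.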

Successful multi-model infrastructure depends less on tooling alone and more on enforcing architecture boundaries that reflect how teams actually work. 

What Atmosphere does differently 

Atmosphere assumes operators aren’t choosing between VMs and Kubernetes; they’re running both, at scale, and need guardrails to manage the complexity. Here’s how Atmosphere addresses the challenges of multi-modal infrastructure: 

  • GPU workloads

GPU-intensive workloads can be scheduled on either VMs or Kubernetes, leveraging features like GPU passthrough, virtual GPUs (vGPUs), or Kubernetes node labeling for workload affinity. 
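On the Kubernetes side, node labeling plus the GPU extended resource is how a workload lands on GPU hardware. A minimal sketch of a pod spec as a Python dict; the pod name, node label, and image are hypothetical, while `nvidia.com/gpu` is the standard extended resource exposed by the NVIDIA device plugin:

```python
# Sketch: a Kubernetes pod spec (as a Python dict) steered onto a GPU
# node via a nodeSelector and the GPU extended resource. The name,
# label, and image are invented for illustration.

gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},           # hypothetical name
    "spec": {
        "nodeSelector": {"gpu": "a100"},         # hypothetical node label
        "containers": [{
            "name": "trainer",
            "image": "example/trainer:latest",   # hypothetical image
            "resources": {"limits": {"nvidia.com/gpu": 1}},  # request one GPU
        }],
    },
}
```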

  • Unified storage

Block storage is shared across platforms, eliminating the need for separate provisioning workflows for VMs and Kubernetes persistent volumes. This ensures seamless compatibility across workloads. 
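One way this plays out in practice: VMs attach Cinder volumes directly, while Kubernetes reaches the same Ceph-backed pool through a StorageClass pointing at the Cinder CSI driver. A sketch as a Python dict; the class name and volume type are hypothetical, while `cinder.csi.openstack.org` is the provisioner of the upstream cinder-csi-plugin:

```python
# Sketch: a StorageClass (as a Python dict) that lets Kubernetes PVs
# draw from the same Cinder/Ceph pool that backs VM volumes. The
# metadata name and Cinder volume type are invented for illustration.

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "ceph-general"},  # hypothetical class name
    "provisioner": "cinder.csi.openstack.org",
    "parameters": {"type": "ceph-ssd"},    # hypothetical Cinder volume type
    "allowVolumeExpansion": True,
}
```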

  • Identity and access management

Authentication, authorization, and access control are unified across the control plane. Whether using Nova for VMs or Kubernetes for containers, quota enforcement, isolation, and billing policies remain consistent. 

  • Day-2 operations

Atmosphere simplifies ongoing operations, including upgrades, scaling, monitoring, and troubleshooting, to reduce operational overhead across both platforms. 

Magnum and Cluster API 

Teams that want declarative provisioning of Kubernetes clusters, whether through GitOps, Terraform, or a control loop, need more than a CLI. That’s why VEXXHOST developed a custom Cluster API (CAPI) provider for OpenStack. 

This lets Atmosphere users provision, upgrade, and scale Kubernetes clusters through Magnum using the CAPI standard. Now operators can build true self-service flows for Kubernetes without giving up control, quotas, or policy enforcement. It also makes Kubernetes cluster lifecycle management consistent with how most teams manage applications and infrastructure today. 
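For a sense of what “the CAPI standard” means here, Cluster API reconciles declarative objects like the one below (shown as a Python dict). The names and namespace are hypothetical; the `apiVersion` values and the `OpenStackCluster` kind come from upstream Cluster API and its OpenStack infrastructure provider:

```python
# Sketch: the declarative Cluster object Cluster API works with.
# Metadata names and the namespace are invented for illustration.

capi_cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",
    "kind": "Cluster",
    "metadata": {"name": "team-a-prod", "namespace": "magnum-system"},  # hypothetical
    "spec": {
        "clusterNetwork": {"pods": {"cidrBlocks": ["10.244.0.0/16"]}},
        "infrastructureRef": {  # provider-specific backing object
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
            "kind": "OpenStackCluster",  # from cluster-api-provider-openstack
            "name": "team-a-prod",
        },
    },
}
```

Because the desired state is a plain object, it slots naturally into GitOps and Terraform workflows while Magnum enforces quotas and policy underneath.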

LOKI and the future of open infrastructure 

The OpenInfra community has been vocal about a Linux OpenStack Kubernetes Infrastructure (LOKI) future, where Linux handles the host, OpenStack manages hardware, and Kubernetes manages containers. 

Atmosphere aligns with this direction by maintaining clean boundaries between layers. It lets platform teams run both Nova and Kubernetes side-by-side without duplication, without overlaying orchestration logic that fights with itself. 

Instead of blurring the lines, it makes them interoperable. That’s what makes the LOKI model sustainable. And that’s what keeps teams sane when scaling multi-modal infrastructure. 

Multi-modal infrastructure 

The future of infrastructure is context-first. Some workloads need rapid scaling and orchestration. Others need tuned, isolated environments with tight control over memory, I/O, and device access. Businesses don’t have to pick one model over the other.   

Instead, they need infrastructure that supports both, with unified policy enforcement and day-2 ops that won’t fall apart when something breaks. Atmosphere makes that duality manageable. Curious how we can support you? Schedule a free consultation with a VEXXHOST expert.
