
Inside the Kubernetes Sandwich: How Atmosphere Orchestrates Kubernetes and OpenStack for Scalable Infrastructure

Ruchi Roy

Learn how Atmosphere flips the traditional cloud stack by running OpenStack inside Kubernetes and provisioning Kubernetes clusters via OpenStack.

Kubernetes is now the standard for managing containerized workloads. OpenStack remains a go-to for virtual machines, networking, and storage. Atmosphere brings these worlds together, but not in the traditional sense.

Instead of layering Kubernetes on top of OpenStack, it runs OpenStack within Kubernetes and enables OpenStack to provision Kubernetes clusters. This architecture gives infrastructure teams better control, visibility, and scalability without the usual overhead. 

And that’s what Atmosphere is: a production-grade OpenStack distribution that runs within Kubernetes and provisions Kubernetes clusters using OpenStack. It solves practical challenges around provisioning, scale, operational sprawl, and lifecycle management.

Why OpenStack Inside Kubernetes? 

Traditionally, OpenStack provisions virtual machines, and Kubernetes is manually installed on top. It works, but it’s messy: tools don’t match across environments, upgrades can be painful, and troubleshooting is a slog.

Atmosphere flips that setup. It runs containerized OpenStack services inside Kubernetes, gaining modularity and manageability. Kubernetes handles scheduling, health checks, and rolling upgrades. OpenStack-Helm simplifies deployments by enabling declarative, repeatable provisioning of OpenStack components. 

This lets infrastructure management follow cloud-native patterns rather than relying on brittle scripts. 
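
To make this concrete, here is a minimal sketch (not an official Atmosphere tool) that uses the Kubernetes Python client to list the containerized OpenStack services running as pods. The `openstack` namespace name is an assumption for illustration; deployments may organize namespaces differently.

```python
# Minimal sketch: list containerized OpenStack services running as pods.
# Assumes a kubeconfig for the base cluster and that the OpenStack control
# plane lives in an "openstack" namespace (an assumption for illustration).
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside the cluster
core = client.CoreV1Api()

for pod in core.list_namespaced_pod(namespace="openstack").items:
    # Services such as nova-api, neutron-server, and keystone-api appear as
    # pods that Kubernetes schedules, health-checks, and restarts on failure.
    print(f"{pod.metadata.name:50s} {pod.status.phase}")
```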

Layer by Layer 

Atmosphere uses a three-tiered architecture: 

  • Base layer: A Kubernetes cluster acts as the foundation, running containerized OpenStack services. It provides scheduling, self-healing, and scalable orchestration. 
  • Middle layer: OpenStack, deployed via OpenStack-Helm, delivers core services like compute (Nova), networking (Neutron), identity (Keystone), and storage (Cinder). 
  • Top layer: Kubernetes clusters are provisioned via OpenStack Magnum, enhanced with a Cluster API (CAPI) driver to manage lifecycle operations natively. 

Each layer is modular but tightly integrated. The base layer provides resilience, the middle manages resources, and the top connects users to the orchestration they need. 

What This Unlocks 

Cluster Management at Speed 

Atmosphere supports declarative cluster provisioning using Magnum and Cluster API. Teams can spin up Kubernetes clusters, scale them, and roll out upgrades without writing custom automation. 
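
As a rough sketch of what that looks like from the API side, the snippet below requests a cluster from an existing Magnum cluster template using openstacksdk. The cloud name, template name, keypair, and sizes are placeholders, not Atmosphere defaults.

```python
# Minimal sketch: request a Kubernetes cluster through Magnum via openstacksdk.
# The cloud entry, template, keypair, and counts below are illustrative.
import openstack

conn = openstack.connect(cloud="atmosphere")  # entry in clouds.yaml (assumed name)

# A cluster template (image, flavors, network, CAPI driver) created beforehand.
template = conn.container_infrastructure_management.find_cluster_template(
    "k8s-capi-template"
)

cluster = conn.container_infrastructure_management.create_cluster(
    name="team-a-dev",
    cluster_template_id=template.id,
    keypair="team-a-key",
    master_count=1,
    node_count=3,
)
print("Cluster requested:", cluster.id)
```

Scaling or upgrading the cluster then becomes another API call or declarative change rather than hand-rolled automation.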

Internal endpoints reduce reliance on ingress controllers, and isolated control planes with secured communication paths improve security. Auto-scaling ensures infrastructure responds to changing workload needs without over-provisioning. 

Unified Storage for Any Workload 

Ceph-backed storage supports both VMs and Kubernetes workloads. Kubernetes uses CSI drivers for persistent volumes, while VMs get scalable block storage via Cinder. 

This setup supports hybrid use cases. For example, machine learning workflows might preprocess data on VMs and run training jobs in pods, both using the same Ceph storage. Teams avoid duplicated storage and simplify access control. 
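
Here is a hedged sketch of that dual path: a Cinder volume is created for the VM side with openstacksdk, and a Ceph-backed PersistentVolumeClaim is created for the pod side with the Kubernetes client. The cloud name, storage class, and namespace are placeholders that depend on how the CSI driver is configured.

```python
# Minimal sketch: one Ceph backend serving VMs (via Cinder) and pods (via CSI).
# Cloud name, storage class, namespace, and sizes are illustrative placeholders.
import openstack
from kubernetes import client, config

# 1) Block storage for the VM-based preprocessing step, provisioned via Cinder.
conn = openstack.connect(cloud="atmosphere")
conn.block_storage.create_volume(name="ml-preprocess-data", size=100)  # size in GB

# 2) Persistent volume for the training pods, provisioned via the CSI driver.
config.load_kube_config()
core = client.CoreV1Api()
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="ml-training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="general",  # assumed name of a Ceph RBD-backed class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="ml-team", body=pvc)
```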

Ceph must be properly configured to achieve consistent performance. That means tuning replication policies, defining storage pools, and ensuring network bandwidth is sufficient for high I/O loads. 

Multi-Tenant Identity and Governance 

Keystone handles multi-tenancy, with project-based isolation, RBAC enforcement, and quota management. Atmosphere extends this to Kubernetes clusters provisioned via Magnum using identity federation through Keycloak (LDAP, SAML, OpenID Connect). 

This gives admins control across all environments. Teams can segment resources and enforce quotas to ensure fair usage and to maintain compliance. 
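
A minimal sketch of that segmentation through openstacksdk, assuming admin credentials; the project name, domain, and quota values are placeholders.

```python
# Minimal sketch: per-team project isolation plus compute quotas.
# Project name, domain, and limits are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="atmosphere")  # admin credentials assumed

project = conn.identity.create_project(
    name="team-a",
    description="Team A workloads",
    domain_id="default",
)

# Cap what the project can consume so no single team starves the others.
conn.set_compute_quotas(project.id, cores=64, ram=262144, instances=32)  # RAM in MB
```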

Day-2 Operations That Don’t Get in the Way 

Atmosphere includes Prometheus, Grafana, and centralized logging for full-stack visibility. These tools monitor infrastructure health, track performance metrics, and help detect issues early. Over 300 pre-set alerts are available, and teams can customize them to fit specific environments. 
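
Because Prometheus exposes a standard HTTP API, those alerts and metrics are also scriptable. Below is a minimal sketch that pulls the currently firing alerts; the endpoint URL is a placeholder for wherever the monitoring stack is exposed in a given deployment.

```python
# Minimal sketch: list currently firing alerts via the Prometheus HTTP API.
# The endpoint URL is a placeholder; adjust it to your deployment.
import requests

PROMETHEUS = "http://prometheus.monitoring.svc:9090"  # assumed in-cluster endpoint

resp = requests.get(
    f"{PROMETHEUS}/api/v1/query",
    params={"query": 'ALERTS{alertstate="firing"}'},
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(labels.get("alertname"), labels.get("severity"), labels.get("namespace"))
```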

Rolling upgrades are available for both OpenStack and Kubernetes clusters. Auto-healing ensures node failures are automatically handled. These features let ops teams spend less time fixing breakage and more time improving workflows. 

Where It’s Being Used 

AI/ML Training 

  • GPU-backed bare-metal nodes via Nova and Ironic 
  • Shared Ceph-backed storage for training datasets 
  • Kubernetes clusters optimized for model training, provisioned via Magnum 

Teams streamline pipelines, maintain dataset integrity, and scale compute based on demand. 

CI/CD Workflows 

  • Ephemeral Kubernetes clusters spun up for testing or simulations 
  • TTL (Time-to-Live) policies applied via Kubernetes-native tools 
  • Atmosphere Usage Service tracks resource use by team or pipeline 

Short-lived clusters ensure environments clean themselves up. Developers move faster. Ops teams retain control and visibility. 
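
TTL enforcement is typically handled by Kubernetes-native tooling as noted above, but the same cleanup can be scripted against the OpenStack API. Here is a hedged sketch that removes Magnum-provisioned clusters older than a given age; the `ci-` naming convention and the four-hour TTL are assumptions for illustration.

```python
# Minimal sketch: delete ephemeral CI clusters that have outlived their TTL.
# The "ci-" naming convention and the 4-hour TTL are illustrative assumptions.
from datetime import datetime, timedelta, timezone
import openstack

TTL = timedelta(hours=4)
conn = openstack.connect(cloud="atmosphere")
now = datetime.now(timezone.utc)

for cluster in conn.container_infrastructure_management.clusters():
    if not cluster.name.startswith("ci-"):
        continue
    created = datetime.fromisoformat(cluster.created_at.replace("Z", "+00:00"))
    if now - created > TTL:
        print("Deleting expired cluster:", cluster.name)
        conn.container_infrastructure_management.delete_cluster(cluster)
```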

Edge Deployments 

  • Kubernetes deployed at remote or on-prem locations 
  • OpenStack services run in containers inside the base Kubernetes layer 
  • Workloads pushed closer to data sources to reduce latency 

With containerized services and central orchestration, remote updates are easier and edge sites stay synchronized. 

Building Infrastructure That Adapts 

Atmosphere is designed for long-term flexibility: 

  • Containerized Architecture: All components are containerized for portability and ease of deployment. 
  • Unified Storage: Ceph provides consistent storage across VMs and pods. 
  • Governance at Scale: Keystone manages access and quotas across all environments. 
  • Observability Built-In: Full-stack metrics and logging give teams real-time insights. 
  • Open-Source Foundation: Atmosphere evolves with the OpenInfra community, not behind vendor walls. 

Whether you’re managing a hybrid cloud, supporting edge locations, or simplifying internal development environments, Atmosphere provides the foundation to do it reliably. 

See how this architecture can fit your infrastructure roadmap. Book a walkthrough to explore what’s possible. 


