When public-cloud limits throttle growth, DevOps teams turn to OpenStack for cost-controlled scale, deeper observability, and GPU-ready performance. Here’s why.
DevOps teams move fast. But at a certain scale, public cloud stops keeping up.
Of course, the issues don't hit all at once. Initially, it's only a few delayed provisioning requests for GPU-backed nodes. Then it’s a patchwork of cost reports that don’t map to internal projects. Eventually, the team needs infrastructure that responds to real workload demands, not just what a service catalog allows.
This is the point where many teams start exploring OpenStack. Not to replace public cloud altogether, but to build a parallel environment they can tune more precisely. With OpenStack, and tools like Atmosphere, DevOps teams gain access to programmable infrastructure that mirrors their operational priorities.
Let’s walk through the specific pressure points that lead teams here.
Public cloud handles most workloads well, but high-demand resources like GPUs and memory-heavy nodes aren’t always available on demand, especially in certain regions or during peak hours. Some teams end up overprovisioning just to guarantee availability, which gets expensive fast.
Standard observability tools like CloudWatch or Stackdriver work for most metrics. But GPU memory usage, MIG partitioning stats, and kernel-level diagnostics often need extra tooling like NVIDIA DCGM, Prometheus exporters, and custom dashboards. You can wire these in, but permissions and integrations can get complicated quickly.
Cloud cost tools exist, but mapping actual resource usage, like GPU-hour spend per project, often requires tagging, custom metrics, and manual reconciliation. Finance teams operate on lagging data, and DevOps teams have little visibility into resource efficiency.
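To make that concrete, here is a minimal sketch of the kind of reconciliation script teams end up writing by hand: it aggregates GPU-hours and chargeback cost per project from usage records. The record fields and the rate are assumptions for illustration, not any particular cloud’s billing export format.

```python
from collections import defaultdict

# Hypothetical usage records, e.g. exported from a metering service or a
# Prometheus query; the field names here are illustrative.
records = [
    {"project": "ml-training", "gpus": 4, "hours": 12.0},
    {"project": "ml-training", "gpus": 2, "hours": 6.5},
    {"project": "analytics", "gpus": 1, "hours": 40.0},
]

RATE_PER_GPU_HOUR = 2.10  # assumed internal chargeback rate, USD


def gpu_hour_spend(records):
    """Return {project: (gpu_hours, cost)} aggregated from usage records."""
    totals = defaultdict(float)
    for r in records:
        totals[r["project"]] += r["gpus"] * r["hours"]
    return {p: (h, round(h * RATE_PER_GPU_HOUR, 2)) for p, h in totals.items()}


print(gpu_hour_spend(records))
```

The point is less the arithmetic than the fact that someone has to own this glue code; on public cloud it usually lives outside the platform, stitched together from tags and exports.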
Atmosphere gives DevOps teams building blocks - compute, storage, networking, and identity - that can be configured to match real workloads. Kubernetes clusters, CI/CD pipelines, AI training jobs, and persistent applications all benefit from infrastructure that can be shaped at the platform level.
Here’s how it works in practice.
In Hosted and On-Premise editions, teams can define custom flavors to match workload requirements, including CPU-to-memory ratios, local NVMe storage, or isolated GPU hosts. GPU passthrough, SR-IOV, and OVN overlays are supported on compatible hardware.
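As a sketch, defining a GPU-passthrough flavor with the standard OpenStack CLI might look like the following. The flavor name, sizes, and the PCI alias are illustrative; the alias must match what is configured for Nova on the compute hosts.

```shell
# 16 vCPUs, 128 GB RAM, 100 GB local disk, one passthrough GPU.
# "a100" is a hypothetical PCI alias defined in the Nova configuration.
openstack flavor create gpu.a100.1 \
  --vcpus 16 --ram 131072 --disk 100 \
  --property "pci_passthrough:alias"="a100:1"
```

Because flavors are just API objects, definitions like this can live in version control and be applied per environment.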
When virtualized overhead is too high, teams can use OpenStack Ironic to provision bare-metal instances in Hosted and On-Premise editions. This gives full hardware access for low-latency, high-throughput environments like AI/ML pipelines or real-time analytics clusters.
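In practice, provisioning bare metal looks much like provisioning a VM, which is part of the appeal; a hedged sketch, with all names illustrative (listing nodes requires the Ironic plugin for the OpenStack CLI):

```shell
# Inspect the bare-metal nodes registered with Ironic.
openstack baremetal node list

# Boot a server on bare metal via a flavor that schedules against a
# bare-metal resource class; flavor, image, and network are placeholders.
openstack server create ai-train-01 \
  --flavor bm.gpu-node \
  --image ubuntu-22.04 \
  --network provisioning-net
```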
Atmosphere supports Kubernetes-native features like taints, tolerations, and node selectors. You can implement MIG slicing, bin-packing, or affinity rules using NVIDIA DCGM and scheduler extensions. These don’t come prepackaged with Atmosphere, but we offer a consultative approach to building your architecture and can help you tailor things to your needs.
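For example, a pod that tolerates a dedicated-GPU taint and requests a single MIG slice might look like this. The node label, taint key, container image, and MIG profile are all illustrative and depend on how the NVIDIA device plugin is configured in your cluster:

```yaml
# Assumes GPU nodes were tainted beforehand, e.g.:
#   kubectl taint nodes gpu-01 dedicated=gpu:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: mig-inference
spec:
  nodeSelector:
    gpu-tier: mig            # hypothetical node label
  tolerations:
    - key: dedicated
      operator: Equal
      value: gpu
      effect: NoSchedule
  containers:
    - name: inference
      image: nvcr.io/nvidia/tritonserver:24.01-py3   # illustrative image
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1   # one MIG slice via the device plugin
```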
Ceph is Atmosphere’s default backend, providing resilient block, object, and file storage. Teams can configure NVMe-backed pools for IOPS-heavy workloads, or deploy erasure-coded tiers for cost-efficient archival. It’s flexible storage, not fixed SKU tiers.
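The two tiers mentioned above can be sketched with standard Ceph commands; pool and rule names are placeholders, and the pool-create forms assume a recent Ceph release with the PG autoscaler enabled:

```shell
# Erasure-coded pool for archival data (4 data + 2 coding chunks).
ceph osd erasure-code-profile set archive-4-2 k=4 m=2 crush-failure-domain=host
ceph osd pool create archive erasure archive-4-2

# Replicated pool pinned to NVMe devices via a CRUSH rule.
ceph osd crush rule create-replicated fast-nvme default host nvme
ceph osd pool create fast-block replicated fast-nvme
```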
Prometheus, Grafana, Loki, and AlertManager are fully supported for integration. Teams can set up a full observability stack tailored to their needs including GPU-specific telemetry using NVIDIA DCGM exporters. Atmosphere doesn’t limit your access to metrics or logs.
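Wiring in GPU telemetry can be as small as a scrape job; a sketch of a `prometheus.yml` fragment, assuming a dcgm-exporter running on its default port (9400) on each GPU node (hostnames are placeholders):

```yaml
scrape_configs:
  - job_name: "dcgm"
    static_configs:
      - targets: ["gpu-node-01:9400", "gpu-node-02:9400"]
```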
Atmosphere supports integrations with tools like Velero for Kubernetes backup and restore. These require configuration, but they allow teams to implement multi-region failover, incremental backups, and workload-specific DR plans without being locked into one model.
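A minimal sketch of what that looks like with the Velero CLI, assuming Velero is already installed and pointed at an object store (backup names and namespaces are placeholders):

```shell
# One-off backup of the namespaces that make up an application.
velero backup create apps-backup --include-namespaces app-prod,app-db

# Nightly schedule with a 30-day retention window.
velero schedule create apps-nightly \
  --schedule "0 2 * * *" \
  --include-namespaces app-prod,app-db \
  --ttl 720h

# Restore into another cluster configured against the same object store.
velero restore create --from-backup apps-backup
```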
Atmosphere isn’t just for greenfield. Many teams adopt it as a landing zone for complex migrations from VMware or hybrid public cloud setups. VEXXHOST provides tooling and services to make these transitions smoother.
MigrateKit, a CLI tool developed and used by VEXXHOST, supports staged migrations from VMware, including delta syncing and near-zero downtime cutovers. It’s a toolkit refined from real-world migration projects.
Atmosphere is built and maintained by the same team that contributes upstream to OpenStack core projects like Nova, Keystone, and Magnum. That means your deployment is aligned with the latest improvements, patches, and tested features without vendor lag or forked distributions.
And because VEXXHOST’s engineering team spans global time zones, you get 24/7 support when something breaks, not when someone’s shift starts.
Teams don’t need to choose between public cloud and OpenStack. OpenStack is now a practical, proven option for DevOps teams who want deeper visibility, lower-cost scale, and infrastructure they can tune to match how they actually work.
In fact, many of our customers run both, using Atmosphere to host performance-critical workloads, regulated data, or infrastructure they want full control over.
These hybrid setups take planning. OpenStack deployments can be challenging, especially when it comes to lifecycle operations, configuration consistency, and operational maturity. That’s why Atmosphere was built around Infrastructure-as-Code principles: it lets you declare your environment in code, repeat deployments without drift, and manage updates and scaling as part of your CI/CD workflows. With support from VEXXHOST for consultation, migration, and long-term operations, teams can integrate OpenStack into their architecture smoothly, without rebuilding from scratch or taking on the full operational burden.
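As a small illustration of that declarative approach, an Ansible task using the openstack.cloud collection can describe a server that repeated runs converge to. Credentials are assumed to come from a `clouds.yaml` entry, and all resource names here are placeholders:

```yaml
# Requires the openstack.cloud Ansible collection and cloud credentials.
- name: Ensure the CI runner instance exists
  openstack.cloud.server:
    cloud: atmosphere        # entry in clouds.yaml
    name: ci-runner-01
    image: ubuntu-22.04
    flavor: gp.4x8           # hypothetical general-purpose flavor
    network: internal-net
    key_name: deploy-key
    state: present
```

Running the same play twice changes nothing the second time, which is exactly the drift-free behavior the paragraph above describes.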
If your team is navigating cost unpredictability, limited telemetry, or rigid abstractions, let’s talk about what open infrastructure makes possible and how we can help you get there.