What if your Kubernetes clusters could scale all the way down to zero? We’ve rolled out a powerful update to the Magnum Cluster API driver that gives you more control, more efficiency, and a whole new level of flexibility.
We’re introducing a powerful new update to the Magnum Cluster API driver from VEXXHOST. With this release, you can deploy Kubernetes clusters with node_count set to zero, which means only the control plane is created by default. From there, our built-in autoscaler does the rest: when min_node_count is set to zero, your node groups can scale all the way down to zero, giving you more flexibility and even better cost efficiency in your OpenStack environments.
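In practice, creating a control-plane-only cluster could look something like this from the OpenStack CLI (the cluster and template names here are placeholders; the `--node-count` flag is part of the standard `openstack coe cluster create` command):

```shell
# Create a cluster with zero worker nodes: only the control plane
# is provisioned. "my-cluster" and "k8s-capi-template" are example names.
openstack coe cluster create my-cluster \
  --cluster-template k8s-capi-template \
  --node-count 0
```

Node groups can then be added later, shaped to the workloads you actually run.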
Until now, a default node group was always created automatically during cluster deployment. With this release, you simply create your cluster with node_count set to zero. Here’s what that unlocks:
Alongside zero-worker clusters, we’ve made our autoscaler even smarter. Now, when you set min_node_count to zero, your node groups can scale all the way down to zero and back up again as traffic or workload demands rise.
It uses the same autoscaling mechanism that’s already baked into the driver. There’s no extra setup required. Just set the min_node_count to zero and let the autoscaler handle the rest.
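As an illustration, a dedicated node group for bursty or batch work might be created with a zero minimum (names are placeholders; the flags are from the standard `openstack coe nodegroup create` command):

```shell
# A node group the autoscaler can shrink to zero when idle and grow
# up to 10 nodes under load. "my-cluster" and "batch-workers" are examples.
openstack coe nodegroup create my-cluster batch-workers \
  --node-count 0 \
  --min-nodes 0 \
  --max-nodes 10
```

With min-nodes at zero, the group costs nothing while idle and comes back automatically when pending pods need capacity.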
Whether you’re working with short-lived workloads, batch jobs, or just want to avoid burning resources during off-hours, scale-to-zero makes your Kubernetes clusters work smarter. You can start with just the control plane and build your infrastructure exactly as needed with custom node groups.
The system reacts to real-time demand: resources scale down when they should, scale up when they must, and keep your costs in check the entire time. You maintain full control over node provisioning without sacrificing performance or responsiveness.
It’s a cleaner, more cost-efficient approach to running Kubernetes on OpenStack. And it gives you a lot more breathing room when it comes to cluster design and scaling strategy.
As more organizations adopt Kubernetes for dynamic, event-driven workloads, the ability to start small and scale intelligently becomes critical. By enabling control plane-only clusters and true scale-to-zero autoscaling, Atmosphere is helping teams move toward more responsive, resource-efficient infrastructure without compromising on flexibility or performance.
At VEXXHOST, we’ve been building cloud management infrastructure for nearly two decades. This new feature is part of a larger effort to build tight integrations that simplify operations, reduce overhead, and give users more ways to shape infrastructure to their needs.
There’s more to come. But for now, if you're exploring how to fine-tune your Kubernetes clusters, or want a cloud that responds to your workloads instead of the other way around, we’re here to help. Reach out to our team or explore the documentation for a closer look at how it works.