What changed at VEXXHOST in 2025? Safer upgrades, dynamic credentials, OpenStack 2025.2 support, and real day-2 improvements for operators.
If you build or operate clouds, you probably track releases and new features. You also track something else, even if you do it quietly. You track how often upgrades cause surprises, how often credentials leak into places they should never live, and how much time you spend babysitting parts of the stack that should behave predictably.
That’s the bar we used in 2025.
We shipped a set of changes that aim at the day-2 and day-100 realities of OpenStack. You’ll see a theme across them.
We keep pushing toward a stack that behaves well in hybrid setups, stays current with upstream, and meets enterprise requirements without turning your platform team into a ticket machine. That matters even more when you’re building infrastructure that needs to be ready for AI workloads.
The bigger picture
If you don’t want to read everything below and need the shortest useful takeaway, here it is:
- You can reduce credential sprawl by issuing short-lived application credentials through Vault or OpenBao.
- You can plan upgrades with fewer data plane surprises by controlling when OVS rolls, and by tightening restart behaviour when it does.
- You can keep pace with upstream through Atmosphere v7.0.0 on OpenStack 2025.2 Flamingo, while still getting practical improvements in observability, networking, and security defaults.
- You can run hybrid Kubernetes environments with more control using control plane-only clusters and scale-to-zero worker pools.
Our goal is to ship and show up for operators. That’s why our 2025 looked this way.
We tightened access with dynamic OpenStack credentials
Teams still end up with static OpenStack credentials scattered across CI jobs, config files, and environment variables. When those credentials live forever, your exposure window lives forever too.
So, we released major updates to our open-source OpenStack Secrets Engine, with support for both HashiCorp Vault and OpenBao. The plugin generates short-lived OpenStack application credentials on demand, and it now supports multi-project workflows so you can scope rolesets per project.
Two details matter a lot when you try to use this in real systems:
- OpenStack-native configuration. We aligned configuration with OpenStack naming, so parameters like `user_domain_id` and `project_domain_name` match the tooling your team already uses.
- A modernized codebase. We rebuilt on Gophercloud v2 and Go 1.25, keeping the engine closer to upstream conventions.
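To make the consumer side concrete, here is a minimal sketch of requesting a credential from Python with the hvac client. The Vault address, the `openstack` mount path, and the `ci-deployer` role name are all assumptions that depend on how you enable and configure the engine; OpenBao exposes a compatible API, so the same pattern generally applies.

```python
import hvac

# The address and token are placeholders; use whatever auth method your pipeline has.
client = hvac.Client(url="https://vault.example.com:8200", token="s.example-token")

# Mount path ("openstack") and role name ("ci-deployer") are illustrative and
# depend on how the secrets engine was enabled and configured.
response = client.read("openstack/creds/ci-deployer")

# Dynamic secrets come back with a lease, so the application credential
# expires on its own instead of living forever in CI configuration.
print(response["lease_duration"])
print(response["data"])
```

Because rolesets are scoped per project, the credential a job receives is both short-lived and limited to the project it actually needs.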
This is also a clean fit for audits. Instead of defending rotation schedules, you can demonstrate that credentials physically cannot be long-lived, and that every request is authenticated, authorized, and logged.
Open source note: the plugin is Apache 2.0 licensed, and we built it because we needed it. We run it in production on VEXXHOST infrastructure.
We made upgrades calmer for the data plane
Upgrades often include a hidden tax. You change control plane services, and you still get a brief data plane interruption because Open vSwitch restarts. If you run performance-sensitive workloads, those seconds matter.
We addressed this in two layers.
Stop restarting OVS when nothing changed
In Atmosphere, Open vSwitch used to rebuild as part of the main release pipeline. That meant Kubernetes would see a new image digest and roll the OVS DaemonSet even when OVS itself stayed the same.
We decoupled OVS builds from Atmosphere releases so OVS only rebuilds and rolls when there is an actual OVS change. The result is simpler upgrade planning and fewer network blips during maintenance windows.
While we were there, we also improved the baseline performance profile for modern CPUs with x86_64-v2 optimizations, so operators get better efficiency out of the box.
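If you want to confirm that your compute nodes actually meet that baseline, the flags in /proc/cpuinfo are enough. The check below is a generic sketch, not something Atmosphere requires you to run, and it also reports AVX-512, which matters for the optimized builds described in the next section.

```python
# Generic check of /proc/cpuinfo on a Linux compute node; not part of Atmosphere,
# just a quick way to verify what the CPU supports before planning an upgrade.
X86_64_V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "pni", "ssse3", "sse4_1", "sse4_2"}

flags = set()
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print("x86_64-v2 baseline:", X86_64_V2_FLAGS <= flags)
print("AVX-512F available:", "avx512f" in flags)
```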
Reduce restart impact when you do need a rollout
We then shipped two additions that target the time you still spend on restarts.
- AVX-512 optimized OVS builds for compatible Intel CPUs, which can improve packet processing efficiency for both kernel and DPDK datapaths depending on workload and hardware.
- ovsinit, a small utility that changes how OVS daemons restart inside Kubernetes. It aims to keep the transition tight during rolling updates, so the “gap” is shorter and more predictable.
In testing across production-like environments, we saw downtime of roughly 1 second for kernel datapaths and roughly 3 seconds for DPDK datapaths (results will vary depending on startup conditions, image pulls, and node state).
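Those numbers are from our environments. If you want to sanity-check the gap in yours during a rolling update, a simple probe loop against a workload VM whose traffic crosses OVS is enough; the target address and port below are placeholders.

```python
import socket
import time

# Placeholder target: a TCP service on a workload VM whose traffic crosses OVS.
TARGET = ("203.0.113.10", 22)
INTERVAL = 0.1  # probe every 100 ms

outage_started = None
while True:
    try:
        with socket.create_connection(TARGET, timeout=0.5):
            if outage_started is not None:
                gap = time.monotonic() - outage_started
                print(f"connectivity restored after {gap:.1f}s")
                outage_started = None
    except OSError:
        if outage_started is None:
            outage_started = time.monotonic()
            print("connectivity lost")
    time.sleep(INTERVAL)
```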
If you run private AI infrastructure, this matters even more than it sounds. Training and inference clusters push east-west traffic hard. You want maintenance windows to feel boring, because your users will treat every blip as a platform problem.
We shipped a major platform release with OpenStack 2025.2 Flamingo support
Upstream still moves fast, and staying aligned is part of staying useful.
OpenStack 2025.2 Flamingo was released on October 1, 2025. We shipped Atmosphere v7.0.0 with full support for Flamingo soon after.
This release includes a wide set of improvements across the stack, but a few changes map directly to what platform teams ask for:
- Broader enterprise Linux support. We added Rocky Linux 9 and AlmaLinux 9 support across both Ceph and Kubernetes collections, giving you more flexibility on modern enterprise baselines.
- Smoother backup workflows. Percona backup jobs now automatically use a default backup image, reducing per-job configuration while keeping consistency across clusters.
- Networking, routing, and visibility improvements. The release includes support for frr-k8s deployments for BGP routing with OVN, and improved DPDK interface configuration that supports both interface names and PCI IDs.
- Better observability for storage and networking. We added new dashboards for Ceph monitoring, including NVMe-oF, SMB, multi-cluster, and application-level dashboards.
- Security hardening that shows up in operations. The release includes changes like non-privileged execution for Horizon, updated CORS and allowed hosts configuration, TLS 1.3 enforcement for libvirt’s remote API, and NGINX ingress security updates.
For developers and DevOps teams evaluating platforms, this is the kind of release that signals intent: we want you to be able to run a modern OpenStack and Kubernetes stack, keep it patched, and keep your operational workflow predictable.
We improved observability and reliability across the year
The mid-year Atmosphere releases covered a lot of what operators run into in day-to-day operations.
Atmosphere 2.3.0 improved Octavia monitoring and alerting, including better amphora visibility and operational signals.
Atmosphere 3.4.0 moved control plane resiliency forward by enabling Octavia Amphora V2 by default, backed by the Valkey service.
Atmosphere 3.4.1 (and again in the 4.5.1 cycle) reactivated the Keystone auth token cache after upstream Ceph fixes, improving authentication responsiveness in deployments with high token validation volume.
Atmosphere 4.6.0 added Neutron plugins for dynamic routing and networking-generic-switch.
Atmosphere 4.6.1 improved iSCSI LUN performance for Pure Storage devices via udev rules and upgraded Cert-Manager to address Cloudflare API compatibility issues for ACME DNS-01 challenges.
These changes add up, shrinking the amount of manual work it takes to keep a cloud healthy.
We gave Kubernetes teams more control over cluster shape and cost
In 2025, we also shipped a practical improvement to the Magnum Cluster API driver: the ability to create control plane-only Kubernetes clusters by setting `node_count` to zero, and to scale worker node groups down to zero by setting `min_node_count` to zero.
That gives you more flexibility for hybrid environments and event-driven workloads: start small, scale up when needed, and stop paying for idle workers when you don’t.
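Here is a rough sketch of what the control plane-only case looks like through the OpenStack SDK. The cloud name, cluster template, and keypair are hypothetical, and worker node groups with `min_node_count` set to zero would be added separately (not shown).

```python
import openstack

# The cloud name comes from clouds.yaml; "my-cloud" is a placeholder.
conn = openstack.connect(cloud="my-cloud")

# node_count=0 creates a control plane-only cluster with no default workers.
# Template name and keypair are hypothetical.
cluster = conn.container_infrastructure_management.create_cluster(
    name="cp-only",
    cluster_template_id="k8s-flamingo",
    keypair="operator-key",
    master_count=3,
    node_count=0,
)
print(cluster.id)
```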
Community and ecosystem moves to widen our reach
Partnering with Btech in Indonesia
In June, we announced a new partner in Indonesia, Btech. The goal is simple: help teams in the region deploy and run OpenStack with Atmosphere, with local support and a clearer path from plan to production.
For buyers evaluating open infrastructure, local partners change the adoption story. You get a clearer path for rollout, operations, and escalation, particularly if your environment spans regions or mixes on-prem with public cloud.
Investing in the plumbing that the OpenStack ecosystem relies on
OpenInfra’s Superuser team shared the story of our work with OpenDev. Since 2016, we’ve supported OpenDev by providing infrastructure that runs core services and CI workloads. That includes GPU-enabled instances and other specialised compute for CI, such as high-memory VMs and nested virtualisation.
Co-hosting OpenStack’s 15th birthday in Montreal with Red Hat and OVHcloud
We also showed up for the community at home.

In November, we co-hosted OpenStack’s 15th birthday in Montreal with Red Hat and OVHcloud. We also published our reflection on 15 years of OpenStack and why it keeps working as a foundation for private, hybrid, and edge clouds.
What we’re taking into 2026
We will keep doing three things.
- We will keep contributing upstream and supporting the community systems that everyone depends on.
- We will keep making upgrades and operations less risky, especially around networking and control plane behaviour.
- We will keep building for an AI-native reality, where you want the same workflows across Cloud, Hosted, and On-Premise.
If you want to compare notes on upgrades, security posture, or how you are planning for AI workloads on OpenInfra, come talk to us.