What changed at VEXXHOST in 2025? Safer upgrades, dynamic credentials, OpenStack 2025.2 support, and real day-2 improvements for operators.
If you build or operate clouds, you probably track releases and new features. You also track something else, even if you do it quietly. You track how often upgrades cause surprises, how often credentials leak into places they should never live, and how much time you spend babysitting parts of the stack that should behave predictably.
That’s the bar we used in 2025.
We shipped a set of changes that aim at the day-2 and day-100 realities of OpenStack. You’ll see a theme across them.
We keep pushing toward a stack that behaves well in hybrid setups, stays current with upstream, and fits enterprise requirements without turning your platform team into a ticket machine, especially when you’re building infrastructure that needs to be ready for AI workloads.
If you don’t want to read everything below and need the shortest useful takeaway, here it is:
Teams still end up with static OpenStack credentials scattered across CI jobs, config files, and environment variables. When those credentials live forever, your exposure window lives forever too.
So, we released major updates to our open-source OpenStack Secrets Engine, with support for both HashiCorp Vault and OpenBao. The plugin generates short-lived OpenStack application credentials on demand, and it now supports multi-project workflows so you can scope rolesets per project.
Two details matter a lot when you try to use this in real systems:
user_domain_id and project_domain_name match the tooling your team already uses. This is also a clean fit for audits. Instead of defending rotation schedules, you can demonstrate that credentials physically cannot be long-lived, and that every request is authenticated, authorized, and logged.
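As a rough sketch of what that workflow can look like in practice (the mount path, plugin name, and roleset name here are illustrative, not the plugin's actual defaults; check the plugin's README for the real paths and parameters):

```shell
# Enable the OpenStack secrets engine at an illustrative mount path.
# This assumes the plugin binary is already registered with your
# Vault or OpenBao server under the name shown below.
vault secrets enable -path=openstack vault-plugin-secrets-openstack

# Request a short-lived application credential for a hypothetical
# per-project roleset. The response includes the fields your existing
# tooling expects, such as user_domain_id and project_domain_name,
# alongside the generated secret, and the credential expires on its own.
vault read openstack/creds/ci-project-a
```

Because the credential is generated on demand and expires automatically, a leaked value from a CI log has a bounded lifetime instead of living forever in an environment variable.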
Open source note: the plugin is Apache 2.0 licensed, and we built it because we needed it. We run it in production on VEXXHOST infrastructure.
Upgrades often include a hidden tax. You change control plane services, and you still get a brief data plane interruption because Open vSwitch restarts. If you run performance-sensitive workloads, those seconds matter.
We addressed this in two layers.
In Atmosphere, Open vSwitch used to rebuild as part of the main release pipeline. That meant Kubernetes would see a new image digest and roll the OVS DaemonSet even when OVS itself stayed the same.
We decoupled OVS builds from Atmosphere releases so OVS only rebuilds and rolls when there is an actual OVS change. The result is simpler upgrade planning and fewer network blips during maintenance windows.
While we were there, we also improved the baseline performance profile for modern CPUs with x86_64-v2 optimizations, so operators get better efficiency out of the box.
We then shipped two additions that target the time you still spend on restarts.
In testing across production-like environments, we saw downtime around ~1 second for kernel datapaths and ~3 seconds for DPDK datapaths (results will vary depending on startup conditions, image pulls, and node state).
If you run private AI infrastructure, this matters even more than it sounds. Training and inference clusters push east-west traffic hard. You want maintenance windows to feel boring, because your users will treat every blip as a platform problem.
Upstream still moves fast, and staying aligned is part of staying useful.
OpenStack 2025.2 (Flamingo) was released on October 1, 2025. We shipped Atmosphere v7.0.0 with full support for Flamingo soon after.
This release includes a wide set of improvements across the stack, but a few changes map directly to what platform teams ask for:
For developers and DevOps teams evaluating platforms, this is the kind of release that signals intent: we want you to be able to run a modern OpenStack and Kubernetes stack, keep it patched, and keep your operational workflow predictable.
The mid-year Atmosphere releases covered a lot of what operators run into during regular life.
Atmosphere 2.3.0 improved Octavia monitoring and alerting, including better amphora visibility and operational signals.
Atmosphere 3.4.0 moved control plane resiliency forward with Octavia Amphora V2 enabled by default, supported by the Valkey service.
Atmosphere 3.4.1 (and again in the 4.5.1 cycle) reactivated the Keystone auth token cache after upstream Ceph fixes, improving authentication responsiveness in deployments with high token validation volume.
Atmosphere 4.6.0 added Neutron plugins for dynamic routing and networking-generic-switch.
Atmosphere 4.6.1 improved iSCSI LUN performance for Pure Storage devices via udev rules and upgraded Cert-Manager to address Cloudflare API compatibility issues for ACME DNS-01 challenges.
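For operators tuning token caching themselves, the cache that Atmosphere reactivated is controlled through Keystone's standard oslo.cache options. A generic sketch, not Atmosphere's exact rendered configuration (backend and server addresses depend on your deployment):

```ini
# keystone.conf — illustrative oslo.cache settings only; Atmosphere
# renders its own values, and your backend/servers will differ.
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = 127.0.0.1:11211
```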
These changes add up, shrinking the amount of manual work it takes to keep a cloud healthy.
In 2025, we also shipped a practical improvement to the Magnum Cluster API driver: the ability to create control plane-only Kubernetes clusters by setting node_count to zero, and to scale worker node groups down to zero by setting min_node_count to zero.
That gives you more flexibility for hybrid environments and event-driven workloads: start small, scale up when needed, and stop paying for idle workers when you don’t.
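A minimal sketch of what that looks like with the OpenStack CLI (the cluster and template names are made up, and exact flags depend on your Magnum client version):

```shell
# Create a control-plane-only cluster: three control plane nodes,
# zero workers at creation time.
openstack coe cluster create control-plane-only \
  --cluster-template k8s-capi-template \
  --master-count 3 \
  --node-count 0

# Later, scale the worker count of an existing cluster back down to
# zero (assumes its node group allows min_node_count of zero).
openstack coe cluster resize control-plane-only 0
```

This pattern suits event-driven workloads: keep the control plane warm, then attach workers only when jobs arrive.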
In June, we announced a new partner in Indonesia, Btech. The goal is simple. Help teams in the region deploy and run OpenStack with Atmosphere, with local support and a clearer path from plan to production.
For buyers evaluating open infrastructure, local partners change the adoption story. You get a clearer path for rollout, operations, and escalation, particularly if your environment spans regions or mixes on-prem with public cloud.
OpenInfra’s Superuser team shared the story of our work with OpenDev. Since 2016, we’ve supported OpenDev by providing infrastructure that runs core services and CI workloads, including GPU-enabled instances and other specialised compute for CI, such as high-memory VMs and nested virtualisation.
We also showed up for the community at home.

In November, we co-hosted OpenStack’s 15th birthday in Montreal with Red Hat and OVHcloud. We also published our reflection on 15 years of OpenStack and why it keeps working as a foundation for private, hybrid, and edge clouds.
We will keep doing three things.
If you want to compare notes on upgrades, security posture, or how you are planning for AI workloads on OpenInfra, come talk to us.