Bringing Browser-Based MFA SSO to the OpenStack CLI
Learn how a lightweight keystoneauth1 plugin brings your existing browser-based MFA and SSO to the OpenStack CLI, with no changes to any client tools.