Kubernetes can introduce hidden lock-in. Explore how upstream OpenStack and Kubernetes preserve portability, control, and sovereignty.
Cloud-native infrastructure was designed to improve scalability, accelerate innovation, and increase architectural flexibility.
Yet in many environments, those gains now coexist with growing reliance on proprietary services, complex pricing models, and infrastructure layers that organizations don’t fully control.
This isn’t a fringe concern. According to industry data, 89% of organizations now pursue multi-cloud architectures to avoid lock-in and retain flexibility, while 78% describe hybrid or multi-cloud as their default approach to reducing vendor dependence.
Infrastructure strategy today is increasingly about control. That means understanding dependencies, identifying where risk accumulates, and making architectural choices that preserve flexibility over time.
This article examines sovereignty considerations in modern Kubernetes environments, the structural drivers of vendor lock-in, and how our open-source OpenStack platform and Kubernetes services support long-term infrastructure independence.
§ 1 Kubernetes Isn’t Sovereign by Default
Kubernetes is the backbone of modern cloud-native infrastructure. Its portability is one of its strongest advantages: define your workloads once and run them across environments. In practice, that portability has limits.
While pods and deployments are portable, the underlying layers usually are not. The control plane, networking, storage, identity systems, load balancing, GPU scheduling, and observability often depend on the platform where Kubernetes runs.
In public cloud environments, those dependencies are tightly integrated. The control plane is managed by the provider. Storage classes map to proprietary services. Ingress connects to provider load balancers. Secrets may rely on external key management systems. Monitoring data flows through hosted observability stacks.
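To make this concrete, consider how a storage class binds persistent volumes to whatever sits underneath the cluster. The sketch below is illustrative only; the exact provisioner parameters depend on the CSI driver in use:

```yaml
# StorageClass bound to a hyperscaler's block storage service:
# claims created against it cannot follow the workload to another
# platform without being re-provisioned.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: ebs.csi.aws.com          # provider-specific CSI driver
parameters:
  type: gp3                           # provider-specific volume type
---
# The equivalent class on an OpenStack-backed cluster points at
# Cinder instead; the manifest shape is portable, but the
# provisioner and its parameters are not.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: cinder.csi.openstack.org # upstream Cinder CSI driver
parameters:
  type: ssd                           # illustrative Cinder volume type
```

The PersistentVolumeClaims your workloads declare look identical in both cases, but the volumes themselves, and the data on them, do not move with the manifest.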
Over time, these integrations introduce structural dependency.
As data sovereignty requirements increase and sensitive workloads move back on-premise, many enterprises are reassessing this model. Kubernetes sovereignty requires ownership of the full stack, from compute and networking to storage, identity, and the control plane.
If you want to learn more about how OpenStack and Kubernetes are a natural fit for sovereign cloud, we highly encourage you to read this blog post. Atmosphere integrates OpenStack and Kubernetes into a unified platform. Instead of simply layering Kubernetes on top of infrastructure, Atmosphere runs containerized OpenStack services inside Kubernetes, aligning infrastructure and cloud-native operations under a common control plane.
§ 2 The Hidden Risk in Your Kubernetes Stack: Open-Source Dependency Concentration
Open-source software removes licensing lock-in. You can inspect the code, contribute upstream, and modify it if needed. That provides structural flexibility. But open-source does not automatically eliminate supply chain concentration.
Container images still originate somewhere. They are built, signed, and distributed through specific registries and pipelines. When a critical CVE appears, your ability to respond depends on who controls that process. If a single upstream project or vendor manages the build system, signing keys, and distribution channel, the risk becomes centralized, even if the software is permissively licensed.
The same dynamic applies to Kubernetes distributions. If a distribution bundles proprietary operators, introduces custom resource definitions tied to a specific platform, or modifies core components with vendor-specific extensions, portability becomes constrained over time.
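For example, a workload that declares a vendor-shipped custom resource only works on clusters running that vendor's operator. The API group and fields below are hypothetical, purely to illustrate the coupling:

```yaml
# Hypothetical vendor-specific custom resource: this manifest is only
# valid on clusters where the vendor's CRD and operator are installed,
# so every workload that references it inherits that dependency.
apiVersion: platform.examplevendor.io/v1alpha1
kind: ManagedIngress
metadata:
  name: web-frontend
spec:
  service: web-frontend
  tls:
    issuer: examplevendor-internal-ca   # hypothetical vendor default
```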
Maintaining a consistent security posture, auditability, and supply chain guarantees across cloud and on-premise environments becomes more complex when multiple vendor distributions and customized images are involved.
Atmosphere is built directly on upstream OpenStack and upstream Kubernetes. It avoids proprietary forks and vendor-specific extensions that introduce structural dependency. By aligning with the upstream projects themselves, Atmosphere is designed to preserve portability, transparency, and long-term architectural flexibility.
At VEXXHOST, we do not simply deploy open-source technologies. We contribute to them.
We have been active OpenStack contributors since 2011 and are maintainers of OpenStack Magnum and the Magnum Cluster API project, with ongoing contributions to the Kubernetes ecosystem. This upstream involvement ensures direct alignment with core project roadmaps and deep expertise within the communities that define modern cloud infrastructure.
§ 3 What Does Vendor Lock-In Mean in 2026?
Lock-in is no longer just about licensing. It operates across multiple layers of the stack, often independently.
Compute Lock-In
Workloads become optimized for specific instance types, GPUs, or proprietary hardware. Migrating means re-benchmarking and re-qualifying performance across the stack.
API Lock-In
Applications integrate with proprietary managed services, custom operators, or platform-specific APIs that are difficult to replace without redesign.
Data Gravity
Large data sets accumulate within a provider’s storage systems. Egress costs, replication complexity, and operational risk make migration financially and technically challenging.
Skills Lock-In
Teams build deep familiarity with a provider’s tooling, IAM model, and operational patterns. Transitioning platforms requires retraining and operational adjustment.
Licensing Lock-In
Commercial models can change unexpectedly, altering cost structure and long-term commitments with limited flexibility.
Governance Lock-In
Jurisdictional exposure, data residency requirements, and foreign legal frameworks introduce compliance and sovereignty considerations beyond technology alone.
Atmosphere is built to address these dimensions structurally, using upstream OpenStack and Kubernetes to preserve portability, cost transparency, and jurisdictional control across the stack. You can read more about this topic in this blog post.
§ 4 Sovereign AI Infrastructure: Why GPU Clouds Change the Lock-In Equation
AI workloads intensify infrastructure dependency.
Training data, model weights, and inference logs are often among an organization’s most sensitive assets. Large training datasets, specialized GPUs, and high-bandwidth east-west networking increase reliance on the platform that hosts them.
In hyperscale environments, GPU availability, storage locality, networking performance, and pricing models are controlled by the provider. Capacity allocation and cost volatility can directly impact deployment timelines and financial planning.
AI also magnifies data gravity. Moving hundreds of terabytes of training data across regions or providers is operationally and financially complex. Jurisdictional exposure becomes a material governance concern when proprietary models and sensitive datasets are involved.
Atmosphere supports GPU workloads through upstream OpenStack and Kubernetes, including PCI passthrough and high-performance networking. This allows organizations to deploy AI infrastructure on hardware they control, with predictable cost models and clear data residency boundaries.
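As a rough sketch, once a GPU is exposed to a node, for instance through PCI passthrough on an OpenStack instance with the NVIDIA device plugin installed, workloads request it with standard Kubernetes resource semantics; the image name below is illustrative:

```yaml
# Standard Kubernetes GPU request: the pod spec stays the same whether
# the node's GPU comes from a hyperscaler instance or an OpenStack
# flavor with PCI passthrough, assuming the NVIDIA device plugin is
# running on the node.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: registry.example.com/ml/trainer:latest   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1   # one full GPU, scheduled via the device plugin
```

Because the request is expressed in portable Kubernetes terms rather than a provider-specific accelerator API, the same manifest can follow the workload wherever equivalent hardware is available.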
§ 5 Architecting for Control: How Atmosphere OpenStack Delivers
The sovereignty gaps in Kubernetes, supply chain concentration, lock-in vectors, and the demands of AI infrastructure all lead to a practical question: who controls your stack?
At VEXXHOST, we design our products to ensure that control remains with you.
Trusted, Upstream Foundations
Atmosphere is built entirely on upstream open-source projects, including OpenStack, Kubernetes, Ansible, Helm, and Prometheus.
Container image integrity is enforced through Docker Content Trust and Cosign, providing verifiable image signatures and transparent build provenance. Images are published to dedicated repositories with independent versioning, improving traceability and reducing build pipeline concentration.
Timely Operational Updates
Atmosphere supports continuous updates and zero-disruption upgrades across OpenStack releases, aligned closely with upstream timelines. For sensitive environments, updates can be delivered and applied offline, allowing air-gapped deployments to remain current without external connectivity.
Certified and Conformant
Atmosphere stands out as a fully certified and conformant open-source platform. With both OpenStack Powered certification and Certified Kubernetes status, it has passed all conformance tests. This isn't just a badge; it's a guarantee that your workloads will behave consistently, that your APIs are standards-compliant, and that your skills are portable.
Full-Stack Day-2 Operations
Deployment is day one. Day two is forever. Beyond deploying OpenStack, Atmosphere handles day-2 operations, including logging, monitoring, alerting, and smooth native upgrades, ensuring seamless cloud operation for on-premise deployments.
With over 300 built-in monitoring alerts and a full suite of services, Atmosphere gives organizations complete control over their data while maintaining scalability, security, and performance.
Deploy Your Way
Atmosphere is adaptable enough to be deployed by anyone, even in on-premise settings, giving you total control over your cloud environment. With Atmosphere, you decide where and how your cloud operates.
Choose our battle-tested platform hosted in our SOC 2 certified data centers, run it on-premises with expert management or support, or use our public cloud regions with pay-as-you-go pricing; every option is built on 100% open-source components for complete freedom.
Compliance Without Compromise
Atmosphere enables organizations to deploy in specific geographic regions or within their own data centers, supporting jurisdictional compliance with standards like HIPAA, GDPR, and PCI-DSS. With encrypted communication, role-based access control, and audit logs as standard features, Atmosphere empowers businesses to build secure cloud environments without compromise.
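As one small illustration on the Kubernetes side, access boundaries can be expressed with ordinary Kubernetes RBAC rather than a provider's proprietary IAM model; the namespace and group names below are illustrative:

```yaml
# Read-only access for an audit group, scoped to a single namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments            # illustrative namespace
  name: auditor-read-only
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "configmaps"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to a group managed by your identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: payments
  name: auditor-read-only
subjects:
  - kind: Group
    name: compliance-auditors    # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: auditor-read-only
  apiGroup: rbac.authorization.k8s.io
```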
The Bottom Line
Cloud adoption increased flexibility, but it also introduced new layers of dependency that are not always visible until change becomes difficult.
Long-term resilience depends on architecture, not brand selection. Organizations need infrastructure that preserves control, transparency, and portability.
Atmosphere combines upstream Kubernetes and OpenStack to support that model. It enables modern application delivery while keeping infrastructure, data, and governance within your control.
If you are evaluating your infrastructure strategy, we welcome the conversation.
Want to explore Atmosphere firsthand? It's available as Cloud, Hosted, or On-Premise — and as always, the code is open source.