Running e-learning at scale is a feat given the tremendous appetite for AI- and lab-based learning. Here's a practical build guide if you want to deploy private clouds for GPU labs, low-latency access, and predictable costs.
Universities are no longer in competition with just each other. Students now have the option to choose between faculty and ChatGPT.
In a world reshaped by generative AI, attention has shifted from lecture halls to learning that happens on-demand, asynchronously, and across devices. The institutions that thrive in this new era will be those that mirror these real-world shifts and scale with remote-first needs.
When done well, this approach supports equitable access, deeper skill development, and stronger outcomes, whether you're running a data science class, AI workload, or a hybrid cybersecurity bootcamp.
During the pandemic, e-learning became necessary; post-pandemic, it is expected. One major European university documented over 600 virtual classes per day and served 16,000 students concurrently on its own network after shifting online. That kind of scale and concurrency pushes edge-networking, storage and compute systems far harder than typical lecture-based e-learning platforms.
In engineering, science and cybersecurity curricula, students require hands‑on interaction with lab environments and virtual machines.
Virtualized environments also promise better learning through improved engagement and accessibility. According to a 2024 study, 73% of educators said a virtualized lab drove strong student engagement, particularly when paired with structured scaffolding and on-demand access to compute-backed exercises.
And while many institutions already provide online courses, traditional IT architectures were never built for this level of flexibility.
If you’re trying to support GPU-backed notebooks, ephemeral sandbox environments, or fast resource spin-ups without manual provisioning, you need a new foundation: these environments demand flexible resource allocation, secure identity integration, and cost-aware scheduling.
Public cloud services alone create unpredictable cost models, latency challenges when learners connect from multiple geographies, and regional data-sovereignty or network-egress constraints, especially for labs requiring high I/O, GPU access or tight control over hardware.
Instead of confining infrastructure to local labs or outsourcing to hyperscalers with opaque pricing, universities are now deploying on-premise or hybrid private clouds built on open-source platforms like OpenStack.
A private campus cloud brings three key advantages:

1. Low-latency responsiveness. When lab VM instances, storage systems and instrumentation are on-campus (or within a regional data centre tightly coupled via low-latency links), students experience real-time responsiveness akin to physical labs.

2. Control over specialized hardware. Many remote labs require GPU passthrough, SR-IOV virtualization or specific instrumentation (e.g., oscilloscopes or FPGA boards). A private cloud lets you allocate, partition and manage these resources explicitly rather than relying on general-purpose public cloud instances.

3. Predictable costs. Rather than unpredictable hour-based billing or data-egress fees, a campus private cloud offers fixed-cost or consumption-based internal metering.
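To make the internal-metering idea concrete, here is a minimal sketch of a consumption-based chargeback calculation. The meter names, rates, and class sizes are all hypothetical, purely for illustration; real deployments would pull these figures from their telemetry pipeline.

```python
# Minimal sketch of consumption-based internal metering for a campus cloud.
# All rates and usage figures below are hypothetical.

RATES = {
    "vcpu_hour": 0.004,    # internal cost per vCPU-hour (assumed)
    "ram_gb_hour": 0.001,  # per GB of RAM per hour (assumed)
    "gpu_hour": 0.90,      # per GPU-hour (assumed)
}

def monthly_charge(usage: dict) -> float:
    """Sum metered usage against internal rates; unknown meters are rejected."""
    for meter in usage:
        if meter not in RATES:
            raise KeyError(f"no internal rate defined for meter '{meter}'")
    return round(sum(RATES[m] * qty for m, qty in usage.items()), 2)

# A hypothetical data-science class: 40 students on 2 vCPU / 4 GB VMs,
# 60 hours of lab time, plus a handful of GPU-backed sessions.
class_usage = {
    "vcpu_hour": 40 * 2 * 60,
    "ram_gb_hour": 40 * 4 * 60,
    "gpu_hour": 25,
}
print(monthly_charge(class_usage))
```

The point of the fixed rate table is the predictability: a department can budget a semester in advance, which hour-based public-cloud billing with egress fees makes difficult.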
From a design perspective, the architecture typically involves a cluster of compute nodes (some general‑purpose, others hardware‑accelerated for labs), a shared object/block storage backend (Ceph is a common choice), network segmentation supporting VLANs or tenant‑networks per lab/class and an orchestration layer (OpenStack, Kubernetes) to let instructors spin up lab environments on‑demand.
In one documented case study, a university deployed an OpenStack + Ceph environment consisting of 11 nodes, 96 vCPUs, 312 GB of RAM, and 27 TB of Ceph storage to power their remote education initiative.
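A quick back-of-the-envelope check shows what a cluster of that size can carry. The flavor sizes, the 4:1 CPU overcommit ratio, and the RAM reserved for host overhead below are assumptions for illustration, not figures from the case study.

```python
# Rough capacity estimate for a small OpenStack + Ceph teaching cluster.
# Flavor sizes, overcommit ratio, and RAM reserve are assumed values.

def max_concurrent_vms(total_vcpus, total_ram_gb, flavor_vcpus, flavor_ram_gb,
                       cpu_overcommit=4.0, ram_reserve_gb=32):
    """Concurrent VM count is bounded by the tighter of the CPU and RAM limits."""
    by_cpu = int(total_vcpus * cpu_overcommit // flavor_vcpus)
    by_ram = int((total_ram_gb - ram_reserve_gb) // flavor_ram_gb)
    return min(by_cpu, by_ram)

# 2 vCPU / 2 GB student lab VMs on the 96 vCPU / 312 GB cluster above:
print(max_concurrent_vms(96, 312, 2, 2))  # → 140 (RAM-bound, not CPU-bound)
```

Under these assumptions RAM, not CPU, is the binding constraint, which is a common finding when sizing teaching clouds with generous CPU overcommit.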
Consider a semester on embedded systems with remote access to boards and virtual machines hosted on the private cloud. This model enhances reproducibility (labs spin up identically), scalability (hundreds of students simultaneously) and manageability (usage is tracked; hardware is dedicated but pooled).
A national cybersecurity bootcamp wants to create identical lab environments for students in six different regions, with access governed by a national identity system. Their requirements: sandboxed VMs, region-specific data boundaries, and centralized control.
Using Heat templates and Keystone federation, the platform team deploys lab environments that reflect each region’s compliance zone, with Neutron security groups enforcing east-west isolation.
Labs can be provisioned, torn down, and replicated with near-zero overhead, and telemetry pipelines provide feedback to instructors about resource consumption and suspicious activity.
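The per-region pattern can be sketched as a small helper that builds one parameter set per compliance zone for a shared lab template. The region names, parameter keys, and naming scheme below are invented for illustration; a real deployment would map these onto its Heat template parameters and availability zones.

```python
# Hypothetical sketch: per-region parameter sets that a shared lab template
# (e.g., a Heat stack) could consume. All names here are invented.

REGIONS = ["north", "south", "east", "west", "central", "islands"]

def region_stack_params(region: str, student_count: int) -> dict:
    """One parameter set per compliance zone, so data stays in-region."""
    if region not in REGIONS:
        raise ValueError(f"unknown region: {region}")
    return {
        "stack_name": f"cyber-lab-{region}",
        "availability_zone": f"az-{region}",
        "vm_count": student_count,
        # east-west isolation: each regional lab gets its own tenant network
        "tenant_network": f"lab-net-{region}",
        "data_residency_zone": region,
    }

params = [region_stack_params(r, 30) for r in REGIONS]
print(len(params), params[0]["stack_name"])
```

Because every region is stamped out from the same template with only the parameters varying, tearing down and replicating a lab is a matter of deleting and re-creating a stack rather than hand-rebuilding environments.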
Compute hosts should be standardised on uniform CPU feature-sets (important if live-migration is used) and support the required extensions (e.g., VT-d for passthrough, SR-IOV for networking). Storage clusters need to support high IOPS bursts (for VM boot storms at class start) and high throughput (for video and virtualisation workloads).
Neutron (OpenStack) or Calico/Flannel (K8s) should support multi‑tenant routing, security‑groups and micro‑segmentation. For remote labs, north‑south connectivity must consider student WAN latency, but east‑west between compute/storage must stay low. Enabling jumbo frames (MTU 9000) across the storage and tenant networks improves throughput for lab‑VMs transferring large datasets.
Many e-learning usage patterns follow "class start" waves, in which hundreds of VMs boot within minutes. The orchestration engine must handle that boot storm: ensure compute node resources are pre-warmed, storage backends (for example, Ceph OSDs) are healthy, and metadata caches are warm. Without this preparation, login latency at the start of class degrades the student experience.
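One common mitigation is to stagger launches in batches rather than booting every VM at once. A minimal scheduling sketch follows; the batch size and interval are assumed values that would be tuned against the storage backend's measured burst capacity.

```python
# Staggered boot batching to soften class-start boot storms.
# batch_size and interval_s are illustrative defaults, not recommendations.

def boot_schedule(vm_ids, batch_size=25, interval_s=30):
    """Return (start_offset_seconds, batch_of_vm_ids) pairs."""
    return [
        (i // batch_size * interval_s, vm_ids[i:i + batch_size])
        for i in range(0, len(vm_ids), batch_size)
    ]

# 120 student VMs boot in 5 batches spread over 2 minutes instead of all at once.
schedule = boot_schedule([f"vm-{n}" for n in range(120)])
print(len(schedule), schedule[-1][0])
```

Spreading 120 boots over two minutes keeps the peak IOPS demand on the storage cluster to roughly a fifth of the all-at-once case, at the cost of a short, predictable wait for the later batches.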
Especially in government or national-education contexts, data locality, encryption at rest, audit logs and role-based access control matter. Studies have found that complexity, institution size and technology readiness are significant predictors of cloud adoption in academic institutions.
Faculty want agility, but IT needs guardrails.
OpenStack’s native role-based access control (RBAC), project scoping, and usage telemetry (via Telemetry, Prometheus, or 3rd-party exporters) give campus operators visibility without bottlenecks. Students can provision VMs for coursework, but only within assigned quotas. Researchers can request GPU-backed instances, but only through an approval workflow. And with automated billing scripts, teams can assign cloud usage to grant codes or departmental budgets.
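The kind of chargeback script mentioned above can be very simple at its core: roll per-project usage records up into grant codes. The record fields, project names, and grant code below are hypothetical; real records would come from the telemetry or billing exporter.

```python
# Hedged sketch of a chargeback rollup: per-project usage aggregated by grant
# code. Field names, projects, and the grant code are invented examples.
from collections import defaultdict

def rollup_by_grant(usage_records):
    """Aggregate vCPU-hours per grant code; projects without one go to 'unfunded'."""
    totals = defaultdict(float)
    for rec in usage_records:
        totals[rec.get("grant_code") or "unfunded"] += rec["vcpu_hours"]
    return dict(totals)

records = [
    {"project": "ml-101", "grant_code": "NSF-4471", "vcpu_hours": 320.0},
    {"project": "robotics-lab", "grant_code": "NSF-4471", "vcpu_hours": 180.0},
    {"project": "student-sandbox", "grant_code": None, "vcpu_hours": 95.5},
]
print(rollup_by_grant(records))
```

The "unfunded" bucket is the operationally useful part: it surfaces consumption that no budget has claimed, which is usually where quota conversations start.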
Beyond these controls, the operational benefit is two-fold: you deliver a responsive, modern student-experience platform while shifting from reactive hardware refresh cycles to a stable, software-driven cloud platform. And by adopting open-source platforms (OpenStack, Ceph), you avoid vendor lock-in, reduce licence costs and can repurpose nodes across semesters (for research, HPC, labs, etc.).
On the cost side, a 2024 report noted that implementing a private-cloud platform on open-source stacks (for teaching resources) can reduce "service interruption time to less than 5 minutes per year" and cut resource waste by roughly 20%.
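For context on that interruption figure: 5 minutes of downtime per year corresponds to roughly "five nines" availability, as a quick calculation shows.

```python
# Converting annual downtime minutes into an availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability_pct(downtime_minutes_per_year):
    return round((1 - downtime_minutes_per_year / MINUTES_PER_YEAR) * 100, 5)

print(availability_pct(5))  # → 99.99905, i.e. about five nines
```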
While OpenStack and Ceph provide the foundational components, getting everything running smoothly requires experience in architecture design, networking, and identity integration.
Platforms like Atmosphere, VEXXHOST's OpenStack distribution, help by providing pre-integrated deployment tooling, observability pipelines, and optional professional services for educational deployments.
With Hosted and On-Premise editions, institutions retain full control while offloading day-to-day operations.
If your institution is looking to bring compute, storage, and orchestration in-house to support remote learning, AI/ML courses, or modern research, schedule a free consultation with one of our experts.