Production Ceph Since 2015

Enterprise Storage.
Open source. Battle-tested.

Block, object, and file from one cluster — no proprietary licensing, no vendor lock-in. Backed by a team with 10 major Ceph upgrades and zero data loss.

Block (RBD)
Object (S3)
File (CephFS)
100% Data Integrity
100% Upstream Ceph
No fork, no proprietary layers
10 Major Upgrades
Hammer through Squid, zero data loss
Data Sovereignty
Your data center or ours
24/7 Storage Engineers
Ceph specialists, not generalists

Trusted by engineering teams at

Red Hat
Apple
Linux Foundation
Arm
AMD
Ciena
DSV
Kaseya
CompuGroup Medical
SpecterOps
Zeta Global
The Weather Network
Higher Logic
University of Victoria
Simon Fraser University
University at Buffalo
Gumtree
Corvex
Virtual Systems

Deployed alongside 20,000+ compute cores and 1.7 PB of Ceph storage at the University at Buffalo Center for Computational Research.

Serving production storage workloads across industries

Why Us

Operational Depth You Can't Hire For

The team behind your storage has been running production Ceph for over a decade. That experience shows in every upgrade, every incident, and every architecture decision.

10 Major Upgrades. Zero Data Loss.

Production Ceph since Hammer in 2015 — through every major release to Squid. Every upgrade planned, tested, and executed without losing a byte, with automated zero-downtime migrations.

Upstream Ceph. Open-Source Tooling.

No fork. No proprietary management layer. No per-socket licensing traps. We deploy upstream Ceph with Cephadm and maintain open-source Ansible automation. No vendor lock-in, ever.

Runs Where Your Data Lives

Deploy in our data centers or on your hardware for complete data sovereignty. Encryption at rest, CRUSH-aware failure domains, and compliance-ready architecture for HIPAA and SOC 2.
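CRUSH-aware failure domains mean each replica of an object lands in a distinct rack, room, or site, and the mapping is deterministic — no lookup table to lose. The toy sketch below illustrates that property only; real CRUSH uses straw2 weighting and hierarchical bucket types, and the rack names here are illustrative.

```python
import hashlib

def place_replicas(obj_name, domains, replicas=3):
    """Toy CRUSH-style placement: deterministically rank failure
    domains by a per-object hash and take the top `replicas`, so
    every copy lands in a distinct domain and the same object
    always maps to the same set of domains."""
    def weight(domain):
        digest = hashlib.sha256(f"{obj_name}:{domain}".encode()).hexdigest()
        return int(digest, 16)
    ranked = sorted(domains, key=weight, reverse=True)
    return ranked[:replicas]

racks = ["rack-a", "rack-b", "rack-c", "rack-d"]
placement = place_replicas("volume-123/object-0007", racks)
# Three distinct racks, reproducible on every call.
```

Because placement is computed rather than stored, any client can locate any object independently — one reason Ceph scales without a central metadata bottleneck for block and object workloads.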

One Cluster. Three Protocols.

Block storage via RBD, S3-compatible object storage via RGW, and file storage via CephFS. One deployment, one support contract, three ways to access your data — today and tomorrow.

Storage Protocols

Three Protocols. One Cluster. One Team to Call.

Every Ceph deployment includes all three storage interfaces. Use one today, add another tomorrow. No separate products, no extra fees per protocol.

Block Storage

Ceph RADOS Block Device (RBD)

High-performance block volumes for VMs, databases, and Kubernetes persistent volumes. NVMe-backed pools for latency-sensitive workloads. Snapshot-capable, replicated across failure domains, and thin-provisioned. RBD mirroring for DR.

VM disks · Databases · Kubernetes PVs · NVMe pools

Object Storage

S3-Compatible via RADOS Gateway (RGW)

S3-compatible API for unstructured data at any scale. NVMe-accelerated metadata for high-throughput ingest workloads. Bucket versioning, lifecycle policies, and multi-site replication for geo-distributed durability. Zero egress fees.

Data lakes · Backups · Media assets · NVMe metadata
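Lifecycle policies on RGW follow the standard S3 lifecycle configuration shape, so existing S3 tooling works unchanged. A minimal sketch of such a policy as a plain Python dict — the bucket prefix and retention periods are illustrative, and the transition storage class name depends on how your RGW zone is configured:

```python
# Illustrative S3 lifecycle configuration in the shape accepted by
# boto3's put_bucket_lifecycle_configuration against an RGW endpoint.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            # Move objects to a colder storage class after 30 days.
            "Transitions": [{"Days": 30, "StorageClass": "COLD"}],
            # Delete objects after roughly 7 years of retention.
            "Expiration": {"Days": 2555},
        }
    ]
}

# Against a live cluster this would be applied with something like:
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```

RGW evaluates these rules server-side, so tiering and expiry happen without any client-side batch jobs.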

File Storage

CephFS with NFS Gateway

POSIX-compliant distributed file system for shared workloads. Flash-tier pools for latency-sensitive shared storage. Native CephFS kernel client for Linux, NFS gateway for everything else. Snapshots, quotas, and multi-tenant isolation.

Shared directories · HPC scratch · Flash-tier pools

Use Cases

Storage Problems We've Already Solved

01

Private Cloud Storage Backend

The storage layer underneath OpenStack, Kubernetes, or both. Block volumes for VMs, persistent volumes for containers, and S3 for application data — one Ceph cluster backs the entire stack.

02

AI & ML Data Pipelines

Store training datasets on S3-compatible object storage with no egress fees. Feed data directly to GPU nodes over high-speed networks. Keep raw data, processed features, and model checkpoints in one cluster.

03

Backup & Long-Term Archive

Erasure-coded pools for cost-efficient cold storage. S3 lifecycle policies automate tiering. Store years of backups and compliance archives without per-TB licensing costs.
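The cost advantage of erasure coding is straightforward arithmetic: a k+m profile stores k data chunks plus m coding chunks, so it consumes (k+m)/k raw capacity per usable terabyte, versus N full copies for N-way replication. A small sketch of that comparison (profile names are illustrative):

```python
def raw_per_usable_tb(scheme):
    """Raw capacity consumed per TB of usable data.
    'rep:N' stores N full copies; 'ec:k+m' stores k data chunks
    plus m coding chunks, i.e. (k+m)/k overhead."""
    kind, spec = scheme.split(":")
    if kind == "rep":
        return float(spec)
    k, m = (int(x) for x in spec.split("+"))
    return (k + m) / k

print(raw_per_usable_tb("rep:3"))   # 3.0  raw TB per usable TB
print(raw_per_usable_tb("ec:4+2"))  # 1.5  half the raw footprint
print(raw_per_usable_tb("ec:8+3"))  # 1.375
```

A 4+2 pool halves the raw hardware needed relative to 3-way replication while still tolerating two simultaneous failures, which is why erasure coding fits backup and archive tiers where latency matters less than cost.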

04

VMware & Legacy Storage Migration

Replace proprietary SAN and NAS with Ceph. Migrate VMware datastores to RBD-backed volumes with MigrateKit. Our phased migration covers assessment, pilot validation, and cutover with no downtime.

Engagement Models

Choose How You Want to Work

Mix and match your infrastructure and engagement model. Deploy on our hardware or yours, with full management or expert support.

Hosted + Managed

We deploy and operate Ceph in our data centers. You consume storage via S3, RBD, or CephFS.

  • Zero ops burden
  • Predictable per-node pricing
  • SLA-backed uptime

Hosted + Support

Your Ceph cluster runs on our infrastructure with 24/7 expert guidance, architecture reviews, and direct engineer escalation.

  • 24/7 Ceph engineers
  • Architecture reviews
  • Upgrade assistance

On-Premise + Managed

We operate your Ceph cluster in your data center with full 24/7 responsibility for deployments, upgrades, and incident response.

  • Full data sovereignty
  • Predictable per-node pricing
  • Your hardware, our ops

On-Premise + Support

You operate the cluster in your facility with direct access to our engineers for troubleshooting, upgrades, and architecture planning.

  • Direct engineer access
  • Incident escalation
  • Upgrade planning
2015
In production since
First deployed on Ceph Hammer
100%
Data integrity
Across every upgrade and migration
5+ PB
Storage under management
Across hosted and on-premise clusters
10
Major upgrades completed
Hammer through Squid, zero data loss
Evaluate

10 Questions to Ask Any Ceph Vendor

Different organizations approach Ceph support differently. Here's what matters when evaluating providers.

Is it unmodified upstream Ceph, or a vendor fork?

Upstream Ceph, no fork

Does pricing get complicated as you scale?

Simple per-node pricing — no per-socket or per-TB surprises

Is deployment tooling open source or proprietary?

Open-source Cephadm and Ansible automation

Does the vendor run Ceph in production themselves?

5+ PB in production since 2015

How many major Ceph upgrades has the vendor executed?

10 major releases, zero data loss

Does it integrate with Kubernetes and OpenStack?

Native integration via Atmosphere stack

Is the vendor a Ceph Foundation member?

General Member of the Ceph Foundation

Can you leave without a forklift migration?

100% upstream — full portability guaranteed

Are operations and support backed by SLAs?

SLA-backed uptime and response times on all engagement models

Is it cheaper than hiring and retaining a Ceph team in-house?

Predictable per-node costs vs. 3-5 FTE storage engineer salaries
Integration

Fits Your Stack

Ceph works on its own or as the storage layer for your broader infrastructure. Start with what you need. Add more later.

Standalone Ceph

Ceph as a dedicated storage service. S3-compatible object storage, block volumes, or shared file systems without requiring any other platform.

Teams that need scalable storage and already run their own compute.

Ceph + Kubernetes

Persistent volumes backed by Ceph RBD via CSI. S3 object storage for application data. One storage cluster serving all your containerized workloads.

Platform teams running stateful workloads on Kubernetes.
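With the ceph-csi driver, RBD-backed persistent volumes are provisioned through an ordinary StorageClass. A minimal illustrative manifest — the cluster ID, pool, and secret names are placeholders that depend on your deployment:

```yaml
# Illustrative StorageClass for RBD-backed PVs via ceph-csi.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-id>          # from `ceph fsid`
  pool: kubernetes                       # RBD pool for PV images
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Once the StorageClass exists, any PersistentVolumeClaim referencing it gets a thin-provisioned RBD image created, mapped, and mounted automatically.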

Ceph + OpenStack

Block storage for Nova instances via Cinder. Object storage via Swift-compatible RGW. Image storage for Glance. Ceph is the native storage backend for OpenStack.

Private cloud operators who need unified storage across all OpenStack services.

Full Atmosphere Stack

Ceph, Kubernetes, and OpenStack deployed together as an integrated platform. Storage, compute, and orchestration with one support contract.

Organizations building complete private cloud infrastructure.

Your Storage Runs. Your Team Ships.

Whether you're deploying a new Ceph cluster or migrating off proprietary storage, our engineers have done it before. Book a free 30-minute architecture review — no commitment, no sales pitch.

What you'll get

  • Architecture review and capacity planning for your specific workloads — from initial deployment design to growth projections based on a decade of production operations data
  • Deployment on your hardware or ours, using upstream Ceph with open-source tooling — no proprietary management layers, no billing surprises after you go live
  • Upgrades executed by engineers who have completed 10 major Ceph release transitions — tested against your configuration before touching production
  • 24/7 monitoring and incident response from storage specialists who work on Ceph daily — not generalist help desk forwarding tickets
  • S3-compatible object storage, block volumes, and CephFS from one cluster with zero egress fees and predictable per-node monthly costs
  • Native integration with Kubernetes persistent volumes and OpenStack storage services, or standalone deployment if storage is all you need

Emergency Storage Support

Experiencing a storage incident? Mark your message urgent for immediate 24/7 response from our storage engineering team.

Contact Our Team

Let us help you with your storage needs