ceph
Packages Ceph components used to build a distributed storage cluster that provides object (RGW), block (RBD), and POSIX-style file (CephFS) storage
What is the ceph image?
The ceph image packages Ceph components used to build a distributed storage cluster that provides object (RGW), block (RBD), and POSIX-style file (CephFS) storage. Ceph is designed for high availability, horizontal scalability, and strong fault tolerance, using replication or erasure coding across many nodes.
In containerized infrastructure, the ceph image is typically used to run MONs (monitors), OSDs (object storage daemons), MDS (metadata servers), and RGW (RADOS Gateway), either directly with Docker/Podman or via orchestration layers like Rook on Kubernetes. It's relevant wherever clusters need software-defined storage for stateful workloads, persistent volumes, or S3-compatible object storage without relying on a cloud provider.
How to use this image
The ceph image is usually not run as a single standalone container; instead, multiple containers form a full cluster. It can be used directly or through higher-level tooling like cephadm or Rook.
Example: start a demo Ceph cluster (development only):
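One common approach is the ceph/daemon image's demo mode, which runs every daemon type in a single container. A minimal sketch, assuming that image; the MON IP, network CIDR, and volume names are placeholders for your host:

```shell
# Development-only "demo" deployment: one container runs MON, OSD, MDS,
# and RGW together with no replication. Data is not durable and this
# layout is unsupported for production.
# MON_IP and CEPH_PUBLIC_NETWORK are placeholders -- use your host's values.
docker run -d --name ceph-demo \
  --net=host \
  -e MON_IP=192.168.1.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
  -v ceph-etc:/etc/ceph \
  -v ceph-lib:/var/lib/ceph \
  ceph/daemon demo
```

Because `--net=host` is used, the demo's MON and RGW endpoints are reachable on the host's own interfaces.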
Using cephadm to bootstrap (host-level, common in prod-like setups):
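With the cephadm binary installed on the host, a bootstrap sketch might look like the following; the monitor IP is a placeholder, and cephadm pulls the Ceph container image itself:

```shell
# Bootstrap a single-node cluster as root; cephadm downloads the ceph/ceph
# image and starts containerized MON and MGR daemons on this host.
# 10.0.0.1 is a placeholder -- substitute this host's address.
cephadm bootstrap --mon-ip 10.0.0.1

# Open a shell in a container that has the ceph CLI and cluster keyring,
# then check cluster health.
cephadm shell -- ceph status
```

Additional hosts and OSDs are then added through the orchestrator (`ceph orch`) rather than by running containers by hand.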
Kubernetes with Rook (typical for production clusters):
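A sketch following the upstream Rook quickstart; the repository paths match the upstream repo layout, and the release tag shown is an example, so check the Rook documentation for current values:

```shell
# Fetch a pinned Rook release and deploy the operator plus a CephCluster
# from the bundled example manifests.
git clone --depth 1 --branch v1.13.0 https://github.com/rook/rook.git
cd rook/deploy/examples

# CRDs, RBAC/common resources, and the Rook operator itself.
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# The CephCluster custom resource; Rook then launches MON/MGR/OSD pods
# using the ceph/ceph image referenced in the manifest.
kubectl create -f cluster.yaml

# Watch the cluster come up.
kubectl -n rook-ceph get pods
```

Rook reconciles the CephCluster resource continuously, so day-2 operations (adding OSDs, upgrades) are driven by editing the manifest rather than by touching containers directly.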
Ceph daemons log to stdout (in containerized setups) and expose multiple ports depending on the service (MON, OSD, RGW, etc.). In practice, most users rely on orchestration (Rook or cephadm) instead of raw docker run for anything beyond experiments.
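For example, with a container named ceph-demo (a hypothetical name from an earlier run), the stdout logs can be followed with the container runtime; by default MONs listen on 3300 (msgr2) and 6789 (msgr1), and RGW on 7480:

```shell
# Follow a containerized daemon's logs via the runtime instead of
# reading files under /var/log/ceph.
docker logs -f ceph-demo
```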
Image variants
Published primarily under ceph/ceph and ceph/daemon, the Ceph images are available in multiple variants, tagged by Ceph release.
Ceph images track upstream Ceph releases and include security fixes and compatibility updates. For production, deployments usually pin to a specific Ceph release name (e.g. quincy, reef) and upgrade through controlled cluster procedures rather than ad hoc tag changes.
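For instance, pulling a pinned release from the upstream registry; the exact tag below is an example, so consult the registry for currently published tags:

```shell
# Pin an explicit release tag instead of a floating tag such as latest,
# so upgrades happen only through deliberate cluster procedures.
docker pull quay.io/ceph/ceph:v18.2.1
```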