ceph

Packages Ceph components used to build a distributed storage cluster that provides object (RGW), block (RBD), and POSIX-style file (CephFS) storage

ceph/daemon
rook/ceph
minio/minio
openebs/jiva

What is the ceph image?

The ceph image packages Ceph components used to build a distributed storage cluster that provides object (RGW), block (RBD), and POSIX-style file (CephFS) storage. Ceph is designed for high availability, horizontal scalability, and strong fault tolerance, using replication or erasure coding across many nodes.

In containerized infrastructure, the ceph image is typically used to run MONs (monitors), OSDs (object storage daemons), MDS (metadata servers), and RGW (RADOS Gateway) either directly with Docker/Podman or via orchestration layers like Rook on Kubernetes. It’s relevant wherever clusters need software-defined storage for stateful workloads, persistent volumes, or S3-compatible object storage without relying on a cloud provider.

How to use this image

The ceph image is usually not run as a single standalone container; instead, multiple containers form a full cluster. It can be used directly or through higher-level tooling like cephadm or Rook.

Example: start a demo Ceph cluster (development only):

docker run -d --name ceph-demo \
  -e MON_IP=127.0.0.1 \
  -e CEPH_PUBLIC_NETWORK=127.0.0.1/32 \
  -p 6789:6789 -p 6800-6810:6800-6810 \
  ceph/daemon:latest-luminous demo
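Once the demo container is running, you can sanity-check it from the host. A minimal sketch (the container name `ceph-demo` matches the example above):

```shell
# Check overall cluster health from inside the demo container
docker exec ceph-demo ceph -s

# Ceph daemons in containers log to stdout, so recent activity is visible via docker logs
docker logs --tail 20 ceph-demo
```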

Using cephadm to bootstrap (host-level, common in prod-like setups):

cephadm pull
cephadm bootstrap --mon-ip <host-ip>
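After bootstrap, cephadm runs each daemon in its own container on the host. A quick sketch of verifying the result, assuming a freshly bootstrapped node:

```shell
# Open a containerized shell with the cluster keyring mounted and query status
cephadm shell -- ceph status

# List the daemon containers cephadm has deployed on this host
cephadm ls
```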

Kubernetes with Rook (typical for production clusters):

helm repo add rook-release https://charts.rook.io/release
helm install rook-ceph rook-release/rook-ceph
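The operator chart alone does not create a storage cluster; that requires a CephCluster resource, which Rook also ships as a companion chart. A minimal sketch, assuming the operator was installed into the rook-ceph namespace:

```shell
# Install a CephCluster plus default pools via the companion chart
helm install rook-ceph-cluster rook-release/rook-ceph-cluster \
  --namespace rook-ceph

# Watch the Ceph daemon pods come up
kubectl -n rook-ceph get pods -w
```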

Ceph daemons log to stdout (in containerized setups) and expose multiple ports depending on the service (MON, OSD, RGW, etc.). In practice, most users rely on orchestration (Rook or cephadm) instead of raw docker run for anything beyond experiments.

Image variants

Published primarily under ceph/ceph and ceph/daemon, the Ceph images are available in multiple variants:

  • ceph/ceph: General Ceph container image tagged by Ceph release (e.g. quincy, reef). Used by cephadm and other orchestration tools to run MON, OSD, MDS, RGW, and other daemons.
  • ceph/daemon:latest-<release> (legacy style): Older daemon-image layout (e.g. latest-luminous, latest-nautilus). Primarily for legacy environments and demo setups; newer deployments favor cephadm and ceph/ceph.
  • Rook-managed images (e.g. rook/ceph): Rook wraps Ceph images and manages them through Kubernetes CRDs. This is the most common pattern for running Ceph in Kubernetes clusters.

Ceph images track upstream Ceph releases and include security fixes and compatibility updates. For production, deployments usually pin to a specific Ceph release name (e.g. quincy, reef) and upgrade through controlled cluster procedures rather than ad hoc tag changes.
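In practice, pinning usually means an explicit image tag rather than a floating one, with upgrades driven through the orchestrator. A sketch (the specific tags below are illustrative):

```shell
# Pin cephadm to an explicit versioned image instead of a floating tag
cephadm bootstrap --mon-ip <host-ip> --image quay.io/ceph/ceph:v17.2.7

# Later, roll the whole cluster forward through the orchestrator
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1
```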
