Ceph
Ceph is a powerful, scalable, and highly resilient distributed storage platform that can provide object, block, and file storage under a unified system. It consists of various daemons such as Monitors (MONs), Object Storage Daemons (OSDs), and Metadata Servers (MDS) for CephFS. Ceph's primary interface for interacting with the system is the ceph command-line tool, which provides various subcommands to manage, monitor, and troubleshoot the Ceph cluster.
Below is a detailed guide on the most commonly used ceph subcommands and their purposes.
General Structure of the ceph Command
ceph [options] <subcommand> [arguments]
[options]: Flags such as --admin-daemon, --cluster, -s for status, etc.
<subcommand>: The specific operation or management action you wish to perform (e.g., status, health, osd, mon).
[arguments]: Additional arguments required by the subcommand.
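For example, an option, a subcommand, and its arguments combine into invocations such as the following (both are standard forms of the status check):
ceph -s                          # short option form of ceph status
ceph --cluster ceph status       # explicitly naming the (default) cluster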
Common ceph Subcommands
1. Cluster Information and Status
ceph status (or ceph -s)
Displays the overall health and status of the Ceph cluster. This command is frequently used for monitoring purposes.
ceph status
Example Output:
  cluster:
    id:     e4a89f8c-4a64-4df5-bcdf-e246b6f8120e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mon1,mon2,mon3
    mgr: mgr1(active), mgr2(standby)
    osd: 20 osds: 20 up, 20 in
    mds: cephfs-1/1/1 up {0=mds1=up:active}, 1 up:standby
ceph health
Returns a concise health report for the cluster. This can be combined with additional options to filter warnings or errors.
ceph health
Example Output:
HEALTH_OK
Options:
ceph health detail – Provides a more detailed health report with issues or warnings.
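Because the report always begins with the health state, ceph health also lends itself to simple scripted checks; a minimal sketch (the alert message is only an illustration):
ceph health | grep -q HEALTH_OK || echo "Ceph cluster needs attention"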
2. Monitor Commands
ceph mon dump
Displays the list of monitors, their ranks, and their addresses.
ceph mon dump
ceph quorum_status
Provides information about the quorum of the monitor nodes.
ceph quorum_status
ceph mon stat
Shows the status of the monitor nodes.
ceph mon stat
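Like most ceph subcommands, the monitor commands above accept a --format option (e.g., json or json-pretty), which makes their output easier to consume from scripts. A small sketch; jq is an external tool and an assumption here, not part of Ceph:
ceph quorum_status --format json-pretty
ceph mon dump --format json | jq -r '.mons[].name'    # print just the monitor names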
3. OSD Commands
ceph osd status
Provides a list of the Object Storage Daemons (OSDs) in the cluster and their statuses (e.g., up, down, in, out).
ceph osd status
ceph osd tree
Displays the OSDs organized in a hierarchical tree, often including information about the CRUSH hierarchy.
ceph osd tree
ceph osd df
Displays a detailed breakdown of disk usage and available space across OSDs in the cluster.
ceph osd df
ceph osd pool create
Creates a new pool within the Ceph cluster. A pool is a logical group of objects that can have its own replication settings.
ceph osd pool create <pool-name> <pg-num>
Example:
ceph osd pool create mypool 128
ceph osd pool delete
Deletes a pool from the cluster. Be cautious with this command, as it permanently deletes data.
ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
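On recent releases the monitors refuse pool deletion unless it is explicitly allowed, so a typical sequence looks roughly like the following (mypool is a placeholder, and the setting can be reverted afterwards):
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it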
ceph osd crush rule list
Lists the CRUSH (Controlled Replication Under Scalable Hashing) rules used for data placement in the cluster.
ceph osd crush rule list
ceph osd reweight
Adjusts the weight of an OSD to rebalance data across the cluster.
ceph osd reweight <osd-id> <weight>
Example:
ceph osd reweight osd.1 0.8
ceph osd out
Marks an OSD as "out" of the cluster, which removes it from data placement so no new data is written to it and its existing data is migrated to other OSDs.
ceph osd out <osd-id>
ceph osd in
Marks an OSD as "in" the cluster, allowing it to receive data again.
ceph osd in <osd-id>
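Together, out and in support routine maintenance. An illustrative sequence for servicing osd.3 (the OSD id is a placeholder):
ceph osd out osd.3     # stop placing data on osd.3; its placement groups migrate elsewhere
# ... perform maintenance on the host ...
ceph osd in osd.3      # return the OSD to data placement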
4. Pool Commands
ceph osd pool ls
Lists all pools in the cluster.
ceph osd pool ls
ceph osd pool set
Sets or modifies parameters of a given pool.
ceph osd pool set <pool-name> <option> <value>
Example (adjusting replication size):
ceph osd pool set mypool size 3
ceph osd pool get
Gets the value of a particular option for a given pool.
ceph osd pool get <pool-name> <option>
Example (getting the replication size):
ceph osd pool get mypool size
5. MDS and CephFS Commands
ceph fs status
Displays the status of CephFS (Ceph File System).
ceph fs status
ceph mds stat
Shows the status of all metadata servers (MDS) in the cluster.
ceph mds stat
ceph fs ls
Lists all CephFS file systems in the cluster.
ceph fs ls
ceph fs new
Creates a new CephFS file system.
ceph fs new <fs-name> <metadata-pool> <data-pool>
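A minimal illustration, assuming a metadata pool and a data pool are created first (all names and PG counts below are placeholders):
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 64
ceph fs new mycephfs cephfs_metadata cephfs_data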
6. Client and Admin Commands
ceph config dump
Displays the current configuration for all Ceph components.
ceph config dump
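On releases with the centralized configuration store, the companion subcommands ceph config get and ceph config set read and change individual options. A hedged sketch; the option name is only an example:
ceph config get osd osd_max_backfills       # read one option for the osd daemons
ceph config set osd osd_max_backfills 2     # change it cluster-wide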
ceph log
Displays recent entries from the cluster-wide log (via ceph log last).
ceph log last [count]
Example (showing the last 20 cluster log entries):
ceph log last 20
7. RBD (RADOS Block Device) Commands
rbd ls
Lists the RADOS block device (RBD) images in a pool (the default pool if none is specified).
rbd ls
rbd create
Creates a new RADOS block device image.
rbd create <image-name> --size <size>
Example:
rbd create mydisk --size 10G
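RBD images live inside a pool; when not using the default pool, the pool can be named explicitly (mypool is a placeholder):
rbd create mypool/mydisk --size 10G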
rbd rm
Removes an existing RBD image.
rbd rm <image-name>
rbd map
Maps an RBD image through the kernel RBD driver to a local block device (e.g., /dev/rbd0), making it available to be mounted on the system.
rbd map <image-name>
rbd unmap
Unmaps a previously mapped RBD device.
rbd unmap <image-name>
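An illustrative map, mount, and clean-up sequence; the device path and mount point are assumptions, and rbd map prints the device it actually attaches:
rbd map mydisk                 # prints the device, e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0            # format it (assuming the printed device was /dev/rbd0)
mount /dev/rbd0 /mnt/mydisk
# ... use the block device ...
umount /mnt/mydisk
rbd unmap /dev/rbd0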
8. Administrative Commands
ceph auth list
Lists all authorization keys and capabilities in the cluster.
ceph auth list
ceph auth add
Adds a new authorization key with specified capabilities.
ceph auth add <entity> <caps>
Example:
ceph auth add client.admin mon 'allow *' osd 'allow *' mds 'allow'
ceph mgr module enable
Enables a manager module in Ceph. Ceph Manager (Mgr) modules offer additional services to the cluster.
ceph mgr module enable <module-name>
Example:
ceph mgr module enable dashboard
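To see which manager modules are available and which are already enabled, the companion command is:
ceph mgr module ls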
Conclusion
The ceph command-line tool is highly versatile and allows administrators to control every aspect of a Ceph cluster. Whether it's managing OSDs, monitoring cluster health, configuring pools, or setting up RBD devices, ceph provides robust commands to handle these tasks efficiently.
For daily monitoring and administration, commands like ceph status, ceph osd tree, and ceph osd df provide essential insights into cluster performance, health, and resource utilization.