Ceph
Ceph is a powerful, scalable, and highly resilient distributed storage platform that provides object, block, and file storage under a unified system. It consists of several daemon types, such as Monitors (MONs), Object Storage Daemons (OSDs), Managers (MGRs), and Metadata Servers (MDSs) for CephFS. The primary interface for interacting with the system is the ceph command-line tool, which provides subcommands to manage, monitor, and troubleshoot the cluster.
Below is a guide to the most commonly used ceph subcommands and their purposes.
General Structure of the ceph Command
ceph [options] <subcommand> [arguments]
[options]: Flags such as --admin-daemon, --cluster, or -s for status.
<subcommand>: The operation or management action you wish to perform (e.g., status, health, osd, mon).
[arguments]: Additional arguments required by the subcommand.
Common ceph Subcommands
1. Cluster Information and Status
ceph status (or ceph -s)
Displays the overall health and status of the Ceph cluster. This command is frequently used for monitoring.
ceph status
Example Output:
cluster:
  id:     e4a89f8c-4a64-4df5-bcdf-e246b6f8120e
  health: HEALTH_OK

services:
  mon: 3 daemons, quorum mon1,mon2,mon3
  mgr: mgr1(active), mgr2(standby)
  osd: 20 osds: 20 up, 20 in
  mds: cephfs-1/1/1 up {0=mds1=up:active}, 1 up:standby
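When scripting around cluster status, it is often enough to pull a single field out of the text output. A minimal sketch, assuming the sample output above (captured into a variable here so the snippet runs without a live cluster; for machine-readable output the real flag `-f json` is usually a better choice):

```shell
# Extract the health field from `ceph -s`-style output. The sample text
# stands in for a live cluster; in practice: ceph -s | awk '/health:/ {print $2}'
sample='cluster:
  id:     e4a89f8c-4a64-4df5-bcdf-e246b6f8120e
  health: HEALTH_OK'

health=$(printf '%s\n' "$sample" | awk '/health:/ {print $2}')
echo "$health"   # prints HEALTH_OK
```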
ceph health
Returns a concise health report for the cluster. This can be combined with additional options to filter warnings or errors.
ceph health
Example Output:
HEALTH_OK
Options:
ceph health detail
– Provides a more detailed health report with issues or warnings.
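Because the output is a single word in the healthy case, ceph health lends itself to simple cron-style checks. A sketch of such a check; the ceph shell function below is a stub standing in for the real binary (it assumes a healthy cluster) so the snippet runs outside a deployment, and should be deleted when used against a live cluster:

```shell
# Stub standing in for the real `ceph` CLI; simulates a healthy cluster.
ceph() { echo "HEALTH_OK"; }

state=$(ceph health)
if [ "$state" = "HEALTH_OK" ]; then
  echo "cluster healthy"
else
  # On a real cluster, follow up with `ceph health detail` here.
  echo "cluster needs attention: $state" >&2
fi
```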
2. Monitor Commands
ceph mon dump
Displays the list of monitors, their ranks, and their addresses.
ceph mon dump
ceph quorum_status
Provides information about the quorum of the monitor nodes.
ceph quorum_status
ceph mon stat
Shows the status of the monitor nodes.
ceph mon stat
3. OSD Commands
ceph osd status
Provides a list of the Object Storage Daemons (OSDs) in the cluster and their statuses (e.g., up, down, in, out).
ceph osd status
ceph osd tree
Displays the OSDs organized in a hierarchical tree, often including information about the CRUSH hierarchy.
ceph osd tree
ceph osd df
Displays a detailed breakdown of disk usage and available space across OSDs in the cluster.
ceph osd df
ceph osd pool create
Creates a new pool within the Ceph cluster. A pool is a logical group of objects that can have its own replication settings.
ceph osd pool create <pool-name> <pg-num>
Example:
ceph osd pool create mypool 128
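The pg-num argument is commonly sized by the rule of thumb of roughly 100 placement groups per OSD, divided by the pool's replication size and rounded down to a power of two (modern clusters can delegate this to the pg_autoscaler instead). A sketch of that arithmetic for the 20-OSD cluster shown in the status output earlier:

```shell
# Rule-of-thumb PG sizing: (osds * 100) / replicas, rounded down to a
# power of two. With 20 OSDs and 3x replication this yields 512.
osds=20
replicas=3
target=$(( osds * 100 / replicas ))   # 666

pg_num=1
while [ $(( pg_num * 2 )) -le "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"   # prints 512
```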
ceph osd pool delete
Deletes a pool from the cluster. Be cautious with this command, as it permanently deletes data.
ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
ceph osd crush rule list
Lists the CRUSH (Controlled Replication Under Scalable Hashing) rules used for data placement in the cluster.
ceph osd crush rule list
ceph osd reweight
Adjusts the weight of an OSD to rebalance data across the cluster.
ceph osd reweight <osd-id> <weight>
Example:
ceph osd reweight osd.1 0.8
ceph osd out
Marks an OSD as "out" of the cluster; Ceph stops placing new data on it and rebalances its existing data onto other OSDs.
ceph osd out <osd-id>
ceph osd in
Marks an OSD as "in" the cluster, allowing it to receive data again.
ceph osd in <osd-id>
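Together, ceph osd out and ceph osd in bracket a typical maintenance window. The sketch below replaces ceph with a stub that only records what would run, so it executes standalone; against a real cluster, drop the stub and wait for rebalancing to settle (watch ceph -s) between the steps:

```shell
# Stub: record the commands instead of executing them.
log=""
ceph() { log="$log ceph $*;"; echo "ceph $*"; }

ceph osd out osd.3   # data migrates off osd.3; no new writes land on it
# ... perform maintenance; watch `ceph -s` until rebalancing settles ...
ceph osd in osd.3    # osd.3 receives data again
```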
4. Pool Commands
ceph osd pool ls
Lists all pools in the cluster.
ceph osd pool ls
ceph osd pool set
Sets or modifies parameters of a given pool.
ceph osd pool set <pool-name> <option> <value>
Example (adjusting replication size):
ceph osd pool set mypool size 3
ceph osd pool get
Gets the value of a particular option for a given pool.
ceph osd pool get <pool-name> <option>
Example (getting the replication size):
ceph osd pool get mypool size
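Combined, ceph osd pool ls and ceph osd pool get make a quick audit loop over every pool. The stub below simulates a cluster with two pools, each with replication size 3, so the sketch runs standalone; remove it to query a real cluster:

```shell
# Stub: fakes `ceph osd pool ls` and `ceph osd pool get <pool> size`
# for a cluster with two pools, both replicated 3x.
ceph() {
  if [ "$3" = "ls" ]; then printf 'mypool\nrbd\n'; else echo "size: 3"; fi
}

for pool in $(ceph osd pool ls); do
  echo "$pool: $(ceph osd pool get "$pool" size)"
done
# prints:
#   mypool: size: 3
#   rbd: size: 3
```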
5. MDS and CephFS Commands
ceph fs status
Displays the status of CephFS (Ceph File System).
ceph fs status
ceph mds stat
Shows the status of all metadata servers (MDS) in the cluster.
ceph mds stat
ceph fs ls
Lists all CephFS file systems in the cluster.
ceph fs ls
ceph fs new
Creates a new CephFS file system.
ceph fs new <fs-name> <metadata-pool> <data-pool>
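A new CephFS file system needs a metadata pool and a data pool to exist before ceph fs new can join them. The sketch below echoes the commands through a stub rather than executing them; the pool names, PG counts, and file system name are illustrative, not prescribed:

```shell
ceph() { echo "+ ceph $*"; }   # stub: prints instead of executing

ceph osd pool create cephfs_metadata 32    # small pool for metadata
ceph osd pool create cephfs_data 128       # bulk pool for file data
ceph fs new myfs cephfs_metadata cephfs_data
```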
6. Client and Admin Commands
ceph config dump
Displays the current configuration for all Ceph components.
ceph config dump
ceph log
Shows recent entries from the cluster-wide log.
ceph log last [n]
Example (viewing the last 20 cluster log entries):
ceph log last 20
7. RBD (RADOS Block Device) Commands
rbd ls
Lists the RBD images in a pool (the default pool if none is specified).
rbd ls
rbd create
Creates a new RADOS block device.
rbd create <image-name> --size <size>
Example:
rbd create mydisk --size 10G
rbd rm
Removes an existing RBD image.
rbd rm <image-name>
rbd map
Maps an RBD image to a local block device through the kernel RBD module, making it available to format and mount on the system.
rbd map <image-name>
rbd unmap
Unmaps a previously mapped RBD device.
rbd unmap <image-name>
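The four rbd commands above form the usual image lifecycle: create, map, use, unmap, remove. A stubbed walk-through of that sequence (on a real client, rbd map prints the device node, e.g. /dev/rbd0, which you would then format and mount):

```shell
rbd() { echo "+ rbd $*"; }   # stub: prints instead of executing

rbd create mydisk --size 10G   # create a 10 GiB image in the default pool
rbd map mydisk                 # attach it as a local block device
# ... mkfs, mount, and use the device ...
rbd unmap mydisk               # detach it from the host
rbd rm mydisk                  # delete the image and its data
```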
8. Administrative Commands
ceph auth list
Lists all authorization keys and capabilities in the cluster.
ceph auth list
ceph auth add
Adds a new authorization key with specified capabilities.
ceph auth add <entity> <caps>
Example:
ceph auth add client.admin mon 'allow *' osd 'allow *' mds 'allow'
ceph mgr module enable
Enables a manager module in Ceph. Ceph Manager (Mgr) modules offer additional services to the cluster.
ceph mgr module enable <module-name>
Example:
ceph mgr module enable dashboard
Conclusion
The ceph command-line tool is highly versatile and allows administrators to control every aspect of a Ceph cluster. Whether it's managing OSDs, monitoring cluster health, configuring pools, or setting up RBD devices, ceph provides robust commands to handle these tasks efficiently.
For daily monitoring and administration, commands like ceph status, ceph osd tree, and ceph osd df provide essential insights into cluster performance, health, and resource utilization.