How to remove a mon node from a Ceph cluster
1. My Ceph cluster has three mon nodes in total: mon, osd1, and osd2.
The steps below demonstrate removing the mon on osd2.
First, check the cluster status:
[root@mon my-cluster]# ceph -s
cluster 3b0a5cfb-703d-4648-a9db-a91f42c60176
health HEALTH_OK
monmap e3: 3 mons at {mon=10.1.3.235:6789/0,osd1=10.1.3.236:6789/0,osd2=10.1.3.237:6789/0}
election epoch 6, quorum 0,1,2 mon,osd1,osd2
osdmap e9: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v21: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 189 GB / 199 GB avail
64 active+clean
The cluster is currently healthy.
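Optionally, the monitor map and quorum can also be inspected directly before the removal (a quick check; output is omitted here because it varies per cluster):
[root@mon my-cluster]# ceph mon stat
[root@mon my-cluster]# ceph mon dump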
2. Remove the mon on osd2:
[root@mon my-cluster]# ceph-deploy mon destroy osd2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /usr/bin/ceph-deploy mon destroy osd2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : destroy
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x22a0d40>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['osd2']
[ceph_deploy.cli][INFO ] func : <function mon at 0x2299758>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][DEBUG ] Removing mon from osd2
[osd2][DEBUG ] connected to host: osd2
[osd2][DEBUG ] detect platform information from remote host
[osd2][DEBUG ] detect machine type
[osd2][DEBUG ] find the location of an executable
[osd2][DEBUG ] get remote short hostname
[osd2][INFO ] Running command: ceph --cluster=ceph -n mon. -k /var/lib/ceph/mon/ceph-osd2/keyring mon remove osd2
[osd2][WARNIN] removing mon.osd2 at 10.1.3.237:6789/0, there will be 2 monitors
[osd2][INFO ] polling the daemon to verify it stopped
[osd2][INFO ] Running command: systemctl stop ceph-mon@osd2.service
[osd2][INFO ] Running command: mkdir -p /var/lib/ceph/mon-removed
[osd2][DEBUG ] move old monitor data
[root@mon my-cluster]#
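If ceph-deploy is not available, the same removal can be done by hand on the node being retired. The commands below mirror what ceph-deploy ran in the log above; the final mv is an assumption about where to park the old monitor data, based on the mon-removed directory ceph-deploy created:
[root@osd2 ~]# ceph mon remove osd2
[root@osd2 ~]# systemctl stop ceph-mon@osd2.service
[root@osd2 ~]# mkdir -p /var/lib/ceph/mon-removed
[root@osd2 ~]# mv /var/lib/ceph/mon/ceph-osd2 /var/lib/ceph/mon-removed/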
3. Finally, verify that the cluster is still healthy.
Check the cluster status:
[root@mon my-cluster]# ceph -s
cluster 3b0a5cfb-703d-4648-a9db-a91f42c60176
health HEALTH_OK
monmap e4: 2 mons at {mon=10.1.3.235:6789/0,osd1=10.1.3.236:6789/0}
election epoch 10, quorum 0,1 mon,osd1
osdmap e9: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v21: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 189 GB / 199 GB avail
64 active+clean
[root@mon my-cluster]#
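The remaining quorum can also be confirmed explicitly (optional; output omitted here):
[root@mon my-cluster]# ceph quorum_status --format json-pretty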
[root@mon my-cluster]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.19519 root default
-2 0.09760 host osd1
0 0.09760 osd.0 up 1.00000 1.00000
-3 0.09760 host osd2
1 0.09760 osd.1 up 1.00000 1.00000
[root@mon my-cluster]#
4. As the output shows, the cluster remains healthy after the mon on osd2 was destroyed, and two mon nodes are left: mon and osd1. Note that osd.1 on host osd2 is still up and in the CRUSH tree, because only the monitor role was removed from that host, not its OSD.
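One follow-up step that ceph-deploy mon destroy does not perform: if osd2 still appears in mon_initial_members and mon_host in ceph.conf, remove it there and push the updated file to the cluster nodes, so clients stop trying the retired monitor address. A sketch using this example's hosts and addresses (adjust to your own configuration):
[root@mon my-cluster]# vi ceph.conf        # keep only the remaining mons, roughly:
mon_initial_members = mon, osd1
mon_host = 10.1.3.235,10.1.3.236
[root@mon my-cluster]# ceph-deploy --overwrite-conf config push mon osd1 osd2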