MDS Multi-Active Configuration
By default, a CephFS filesystem is configured with only one active MDS daemon. In large deployments, metadata performance can be scaled out by configuring multiple active MDS daemons, which then share the metadata load between them.
To enable multiple active MDS daemons, you only need to change the filesystem's max_mds setting. This is the cluster status before the change:
[root@test1 ~]# ceph -s
  cluster:
    id:     94e1228c-caba-4eb5-af86-259876a44c28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum test1,test2,test3
    mgr: test1(active), standbys: test3, test2
    mds: cephfs-2/2/1 up {0=test2=up:active,1=test3=up:active}, 1 up:standby
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 400 pgs
    objects: 305 objects, 3.04MiB
    usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
    pgs:     400 active+clean
1. Configure multiple active MDS daemons
[root@test1 ~]# ceph mds set max_mds 2
[root@test1 ~]# ceph -s
  cluster:
    id:     94e1228c-caba-4eb5-af86-259876a44c28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum test1,test2,test3
    mgr: test1(active), standbys: test3, test2
    mds: cephfs-2/2/2 up {0=test2=up:active,1=test3=up:active}, 1 up:standby
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 400 pgs
    objects: 305 objects, 3.04MiB
    usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
    pgs:     400 active+clean
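Note that `ceph mds set max_mds` is the older, cluster-wide syntax. On Luminous and later releases, max_mds is a per-filesystem setting, so the equivalent commands would be roughly the following (a sketch, assuming the filesystem is named cephfs as in this cluster):

```shell
# Raise the maximum number of active MDS ranks for the "cephfs" filesystem
ceph fs set cephfs max_mds 2

# Verify the setting took effect ("max_mds 2" appears in the fs dump)
ceph fs get cephfs | grep max_mds
```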
2. Revert to a single active MDS
[root@test1 ~]# ceph mds set max_mds 1
[root@test1 ~]# ceph mds deactivate 1
[root@test1 ~]# ceph -s
  cluster:
    id:     94e1228c-caba-4eb5-af86-259876a44c28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum test1,test2,test3
    mgr: test1(active), standbys: test3, test2
    mds: cephfs-1/1/1 up {0=test2=up:active}, 2 up:standby
    osd: 18 osds: 18 up, 18 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 400 pgs
    objects: 305 objects, 3.04MiB
    usage:   18.4GiB used, 7.84TiB / 7.86TiB avail
    pgs:     400 active+clean

  io:
    client: 31.7KiB/s rd, 170B/s wr, 31op/s rd, 21op/s wr
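On newer Ceph releases the explicit `ceph mds deactivate` step is no longer available; lowering max_mds is sufficient, and the cluster stops the surplus ranks automatically. A sketch of the modern equivalent (again assuming the filesystem is named cephfs):

```shell
# Reduce the active MDS count back to one; ranks above 0 are
# stopped automatically, no separate deactivate command is needed
ceph fs set cephfs max_mds 1

# Watch the MDS map until only rank 0 remains active
ceph fs status cephfs
```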
Reprinted from: http://eytrl.baihongyu.com/