Ceph status errors

Date: 2022-11-17 20:55:07

Check the health status of the Ceph cluster with the following command:

[root@node1 ~]# ceph -s

 cluster c4898b1c-7ac1-406d-bb5d-d3c7980de438
health HEALTH_ERR 4 pgs inconsistent; 8 scrub errors; mds cluster is degraded; mds alpha is laggy
monmap e5: 1 mons at {node1=172.17.44.22:6789/0}, election epoch 1, quorum 0 node1
osdmap e172: 3 osds: 2 up, 2 in
pgmap v32874: 192 pgs: 188 active+clean, 4 active+clean+inconsistent; 216 MB data, 2517 MB used, 780 GB / 782 GB avail
mdsmap e587: 1/1/1 up {0=alpha=up:replay(laggy or crashed)}

As you can see, the cluster health is in an error state, so the Ceph cluster is currently unusable. The main cause is inconsistent PGs. Run the following command to see which PGs are inconsistent:

[root@node1 ~]# ceph pg dump | grep inconsistent
dumped all in format plain
1.2d 2 0 0 0 4194350 680 680 active+clean+inconsistent 2014-09-16 17:22:26.262442 77'680 172:1149 [0,1] [0,1] 77'680 2014-09-16 15:34:54.801604 77'680 2014-09-15 15:34:51.375100
1.27 2 0 0 0 4194338 654 654 active+clean+inconsistent 2014-09-16 17:22:39.042809 77'654 172:1052 [0,1] [0,1] 77'654 2014-09-16 15:34:33.812579 77'654 2014-09-15 15:34:25.371366
1.13 1 0 0 0 66 118 118 active+clean+inconsistent 2014-09-16 17:22:33.648556 77'118 172:313 [1,0] [1,0] 77'118 2014-09-16 15:34:06.806975 77'118 2014-09-15 15:34:04.363863
1.b 2 0 0 0 4194766 797 797 active+clean+inconsistent 2014-09-16 17:22:37.363771 77'797 172:1255 [0,1] [0,1] 77'797 2014-09-16 15:33:52.856734 77'797 2014-09-15 15:33:42.365185

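As a side note, the same information is summarized by ceph health detail, which lists each inconsistent PG together with the scrub error count and can be quicker than grepping the full PG dump:

ceph health detail
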
This gives us the inconsistent PGs; the first column is the PG ID. Following the procedure in the official documentation, http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/, repair each one:

ceph pg repair {placement-group-ID}
[root@node1 ~]# ceph pg repair 1.2d
instructing pg 1.2d on osd.0 to repair
[root@node1 ~]# ceph pg repair 1.27
instructing pg 1.27 on osd.0 to repair
[root@node1 ~]# ceph pg repair 1.13
instructing pg 1.13 on osd.1 to repair
[root@node1 ~]# ceph pg repair 1.b
instructing pg 1.b on osd.0 to repair
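
With only four PGs this is easy to do by hand; if many PGs were inconsistent, a small shell loop (a sketch, relying on the PG ID being the first field of the ceph pg dump lines shown above) would save the typing:

for pg in $(ceph pg dump 2>/dev/null | awk '/inconsistent/ {print $1}'); do ceph pg repair "$pg"; done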

Then check the status again:

[root@node1 ~]# ceph -s

 cluster c4898b1c-7ac1-406d-bb5d-d3c7980de438
health HEALTH_ERR 1 pgs inconsistent; 2 scrub errors; mds cluster is degraded; mds alpha is laggy
monmap e5: 1 mons at {node1=172.17.44.22:6789/0}, election epoch 1, quorum 0 node1
osdmap e172: 3 osds: 2 up, 2 in
pgmap v32877: 192 pgs: 191 active+clean, 1 active+clean+inconsistent; 216 MB data, 2517 MB used, 780 GB / 782 GB avail
mdsmap e595: 1/1/1 up {0=alpha=up:replay(laggy or crashed)}

The health is still in an error state, but only one PG remains inconsistent; this is simply because the repairs need time to propagate over the network. Check once more:

[root@node1 ~]# ceph -s

 cluster c4898b1c-7ac1-406d-bb5d-d3c7980de438
  health HEALTH_WARN mds cluster is degraded; mds alpha is laggy
monmap e5: 1 mons at {node1=172.17.44.22:6789/0}, election epoch 1, quorum 0 node1
osdmap e172: 3 osds: 2 up, 2 in
pgmap v32878: 192 pgs: 192 active+clean; 216 MB data, 2517 MB used, 780 GB / 782 GB avail
mdsmap e596: 1/1/1 up {0=alpha=up:replay(laggy or crashed)}

Now the health status is only a warning rather than an error.

However, of the 3 OSDs only 2 are up (running) and 2 are in (part of the cluster). OSDs are the storage layer underneath the PGs: a PG is a collection of objects, and objects are the small pieces that files are split into, so a failed OSD is certainly not a healthy state. Since this is a test environment, we can simply remove the faulty OSD node from the cluster and then add it back. In the process, Ceph will rebalance the data that was on the faulty OSD onto the remaining healthy OSDs; we then wipe the faulty OSD's data and add the node back in.

1. Remove the OSD

ceph osd out 2

The relevant part of our configuration file is:

[osd.2]
host = node3
[osd.1]
host = node2
[osd.0]
host = node1

The faulty node is node3, so the ID passed after out is 2.
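
If you are not sure which OSD ID belongs to which host, ceph osd tree prints the CRUSH hierarchy with every OSD grouped under its host, so the ID can be confirmed before taking anything out:

ceph osd tree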

ceph osd crush remove osd.2

ceph osd rm 2

At this step the command will complain that the OSD on that node is still running. Stop it, then go to that node and delete its data.
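
How the daemon is stopped depends on the deployment; with the sysvinit script used elsewhere in this cluster, something along these lines (run on node3) should work:

/etc/init.d/ceph stop osd.2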

rm -fr /data/osd.2/*

At this point, check the Ceph health again:

 cluster c4898b1c-7ac1-406d-bb5d-d3c7980de438
health HEALTH_OK
monmap e5: 1 mons at {node1=172.17.44.22:6789/0}, election epoch 1, quorum 0 node1
osdmap e217: 2 osds: 2 up, 2 in
pgmap v33222: 192 pgs: 192 active+clean; 216 MB data, 2519 MB used, 780 GB / 782 GB avail; 1769B/s wr, 0op/s
mdsmap e5954: 1/1/1 up {0=a=up:active}

Only 2 OSDs are left now, while a production environment generally needs 3. Let's add the OSD we just removed back in.

2. Add the OSD node back.

ceph osd create

This returns the number 2. The mounting of a dedicated disk and related work is omitted here; df -h will show the mount as well.
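
For completeness, the omitted disk preparation would look roughly like this (a sketch; the /dev/sdb1 device is an assumption, and /data/osd.2 matches the data path used earlier):

mkfs.xfs /dev/sdb1
mkdir -p /data/osd.2
mount /dev/sdb1 /data/osd.2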

ceph-osd -i 2 --mkfs
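
If cephx authentication is enabled, the OSD also needs a fresh key; the usual manual steps are to generate it during mkfs and register it with the monitor (the keyring path and capability string below are assumptions based on the /data/osd.2 data directory used here):

ceph-osd -i 2 --mkfs --mkkey
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /data/osd.2/keyring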

On the newly added node, run:

/etc/init.d/ceph start osd
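
Once the daemon is up, ceph osd stat gives a quick one-line confirmation that all three OSDs are up and in again:

ceph osd stat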

Check the status on the storage node:

cluster c4898b1c-7ac1-406d-bb5d-d3c7980de438

  health HEALTH_WARN 67 pgs peering; 67 pgs stuck inactive; 67 pgs stuck unclean
monmap e5: 1 mons at {node1=172.17.44.22:6789/0}, election epoch 1, quorum 0 node1
osdmap e225: 3 osds: 3 up, 3 in
pgmap v33340: 192 pgs: 125 active+clean, 67 peering; 125 MB data, 3552 MB used, 1170 GB / 1173 GB avail
mdsmap e5954: 1/1/1 up {0=a=up:active}

Still not healthy?! Don't panic: Ceph is a distributed storage system that transfers data over the network, and this cluster already held more than 1 GB of data from earlier capacity testing, so the new OSD needs a moment to peer. Wait a few seconds and you will see the state we are after:

[root@node1 ceph]# ceph -w

 cluster c4898b1c-7ac1-406d-bb5d-d3c7980de438
health HEALTH_OK
monmap e5: 1 mons at {node1=172.17.44.22:6789/0}, election epoch 1, quorum 0 node1
osdmap e235: 3 osds: 3 up, 3 in
pgmap v33364: 192 pgs: 192 active+clean; 217 MB data, 3605 MB used, 1170 GB / 1173 GB avail; 1023B/s wr, 0op/s
mdsmap e5954: 1/1/1 up {0=a=up:active}