Reference:
Deploying a new Ceph cluster — Ceph Documentation
I deployed three VMs on VMware, each with 6 vCPUs and 12 GB of memory, running RHEL 8.6.
Hostnames and IPs:
ceph0 192.168.20.10
ceph1 192.168.20.11
ceph2 192.168.20.12
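cephadm expects each host's hostname to match the name it will later be added under, so make sure the three VMs are really named ceph0/ceph1/ceph2. A minimal sketch, run on each node with its own name:
hostnamectl set-hostname ceph0    # use ceph1 / ceph2 on the other two nodes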
First, log in to ceph0 and download cephadm:
curl -LO https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm
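A quick sanity check that the downloaded script is usable (it assumes python3 is already installed, which cephadm needs anyway):
./cephadm --help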
Then run the following commands to enable the EPEL repo:
subscription-manager repos --enable codeready-builder-for-rhel-8-$(arch)-rpms
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
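Optionally confirm that both repos are now enabled (plain dnf usage, nothing cephadm-specific):
dnf repolist | grep -iE 'epel|codeready'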
Install cephadm and ceph:
./cephadm add-repo --release quincy
./cephadm install
cephadm install ceph-common
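A quick check that both the orchestrator tool and the ceph CLI are in place (both should report a Quincy 17.2.x version):
cephadm version
ceph --version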
Bootstrap the cluster
cephadm bootstrap --mon-ip 192.168.20.10
If it succeeds, you will see output like the following:
Ceph Dashboard is now available at:
URL: https://ceph0:8443/
User: admin
Password: 26cjfns86y
Enabling keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/9145c5e8-8100-11ed-942d-0050568b6d43/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid 9145c5e8-8100-11ed-942d-0050568b6d43 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see
Telemetry Module — Ceph Documentation
Bootstrap complete.
Running ceph -s, ceph orch host ls, and ceph orch ps now returns results, although the deployment is not yet complete.
Add hosts
On ceph0, edit /etc/hosts and add:
192.168.20.10 ceph0
192.168.20.11 ceph1
192.168.20.12 ceph2
Then:
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
ceph orch host add ceph1 192.168.20.11
ceph orch host add ceph2 192.168.20.12
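Note that cephadm has the same host prerequisites on every node it manages (a container runtime, LVM2, time synchronization, python3). If ceph1 and ceph2 are minimal RHEL 8 installs, installing something like the following on them avoids failures when daemons are placed there (standard RHEL 8 package names, adjust as needed):
dnf install -y podman lvm2 chrony python3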
After adding them, check:
[root@ceph0 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph0 192.168.20.10 _admin
ceph1 192.168.20.11
ceph2 192.168.20.12
3 hosts in cluster
Add labels and assign roles
ceph orch host label add ceph0 mon
ceph orch host label add ceph1 mon
ceph orch host label add ceph2 mon
ceph orch host label add ceph0 mgr
ceph orch host label add ceph1 mgr
ceph orch host label add ceph2 mgr
[root@ceph0 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph0 192.168.20.10 _admin mon mgr
ceph1 192.168.20.11 mon mgr
ceph2 192.168.20.12 mgr mon
3 hosts in cluster
ceph orch apply mgr label:mgr
ceph orch apply mon label:mon
Finally, check the status with:
ceph orch host ls; ceph orch ls; ceph orch ps
Running podman ps on ceph0 shows that quite a few Ceph containers are now up.
Add OSDs
I added one 300 GB SSD to each node. It takes a while before Ceph notices the new disks (see the refresh tip after the listing below). Once they show up, the result is:
[root@ceph0 ~]# ceph orch device ls
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
ceph0 /dev/sdb ssd 322G Yes 8s ago
ceph1 /dev/sdb ssd 322G Yes 8s ago
ceph2 /dev/sdb ssd 322G Yes 8s ago
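If a newly attached disk does not show up yet, the inventory can be rescanned on demand instead of waiting for the periodic refresh:
ceph orch device ls --refresh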
Add all available disks as OSDs:
ceph orch apply osd --all-available-devices
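--all-available-devices also tells the orchestrator to keep consuming any eligible blank disk it sees in the future. To add specific devices one at a time instead, the per-device form is:
ceph orch daemon add osd ceph0:/dev/sdb
and the automatic consumption can be switched off later with:
ceph orch apply osd --all-available-devices --unmanaged=true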
Check the status with the following commands; you can see that the OSDs are now running:
ceph orch host ls; ceph orch ls; ceph orch ps
ceph osd tree
ceph -s
ceph orch device ls
Access the dashboard
On ceph0, visit:
URL: https://ceph0:8443/
User: admin
Password: 26cjfns86y (the value shown earlier)
You can then manage the cluster from the UI.
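The dashboard asks you to change the generated password on first login. If the password is lost later, it can be reset from the CLI; the new password has to be read from a file, for example (the file path here is just an illustration):
echo 'N3w-Dashb0ard-Pass!' > /tmp/dash-pass
ceph dashboard ac-user-set-password admin -i /tmp/dash-pass
rm /tmp/dash-pass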