Deploying Ceph Distributed Storage on CentOS 7.2

Posted: 2023-01-13 12:36:23

1.1 Environment Preparation

Hostname                IP Address
ceph-admin              192.168.16.220
ceph-node1, ceph-mon    192.168.16.221
ceph-node2, ceph-mon    192.168.16.222
ceph-node3, ceph-mon    192.168.16.223
1.1.1 Configure SSH key access
ssh-keygen -f ~/.ssh/id_rsa
for n in `seq 3`; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.16.22$n; done
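A quick way to confirm that passwordless login works before continuing (this check is an addition, not part of the original procedure):

for n in `seq 3`; do ssh root@192.168.16.22$n hostname; done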
1.1.2 Add hosts entries and set hostnames

Run on the manager node

cat /etc/hosts
192.168.16.220 ceph-admin
192.168.16.221 ceph-node1 ceph-mon
192.168.16.222 ceph-node2 ceph-mon
192.168.16.223 ceph-node3 ceph-mon
for n in `seq 3`; do \
scp /etc/hosts 192.168.16.22$n:/etc/hosts; \
ssh 192.168.16.22$n "hostnamectl set-hostname ceph-node$n"; \
done
1.1.3 Disable firewalld and SELinux

Run on the manager node

for n in `seq 3`;do \
ssh 192.168.16.22$n "systemctl stop firewalld; \
systemctl disable firewalld; \
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config; \
setenforce 0 "; \
done
1.1.4 Time synchronization

Run on the manager node

for n in `seq 3`; do \
ssh 192.168.16.22$n "yum install ntpdate -y; \
ntpdate asia.pool.ntp.org"; \
done
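ntpdate only syncs the clock once. If you also want a periodic resync, a minimal sketch using cron is shown below; the 30-minute schedule and writing directly to /var/spool/cron/root are assumptions, adjust to taste:

for n in `seq 3`; do \
ssh 192.168.16.22$n "echo '*/30 * * * * /usr/sbin/ntpdate asia.pool.ntp.org' >> /var/spool/cron/root"; \
done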
1.1.5 Replace the repo

Run on the manager node

export CEPH_DEPLOY_REPO_URL=http://172.18.210.253/repo/ceph-el7/jewel
export CEPH_DEPLOY_GPG_URL=http://172.18.210.253/repo/ceph-el7/jewel/release.asc
Note: if the local mirror is unavailable or you are not on the internal network, use a public mirror in China instead:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-jewel/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
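If you would rather not rely on the ceph-deploy environment variables, an equivalent yum repo file can be dropped on the nodes instead; a minimal sketch assuming the 163 mirror and the default x86_64 layout (adjust baseurl for your mirror):

cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
EOF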

1.2 Install Ceph

1.2.1 Install the Ceph deployment tool ceph-deploy

Run on the manager node

yum install -y ceph-deploy
1.2.2 Create a working directory

Run on the manager node

mkdir /ceph ; cd /ceph
1.2.3 Install the Ceph packages

Install ceph on every host:

yum install -y ceph 

Or run this on the manager node:
ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3
1.2.4 Create the Ceph cluster

1. Bootstrap a new cluster

ceph-deploy new ceph-node1 ceph-node2 ceph-node3  # an odd number of monitors is recommended

# cat ceph.conf 
fsid = 7e1daeea-417e-43e3-a2fe-56d9444f2fbf
mon_initial_members = ceph-node1, ceph-node2, ceph-node3
mon_host = 192.168.16.221,192.168.16.222,192.168.16.223
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
rbd_default_features = 1
mon clock drift allowed = 2
mon clock drift warn backoff = 30    

Note:
1. Some OS kernels only support the layering RBD feature, so it is best to state the default features for new RBD images directly in the config file:
rbd_default_features = 1
2. Ceph is very sensitive to clock synchronization between monitors, so the allowed clock drift can be relaxed:
mon clock drift allowed = 2
mon clock drift warn backoff = 30
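If ceph.conf in the working directory is edited again after the cluster is up, the change still has to reach every node. One way to distribute it with ceph-deploy (node names are this cluster's; --overwrite-conf replaces the copies already on the nodes):

ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3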

2. Initialize the monitor nodes

ceph-deploy mon create-initial
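When create-initial finishes it gathers the cluster keyrings into the working directory; a quick sanity check, assuming the /ceph working directory created earlier:

ls /ceph/*.keyring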
1.2.5 Create OSDs
  • There are two ways to create an OSD (a quick disk check is sketched after this list):
    1. Use a raw system disk as the storage space;
    2. Use an existing filesystem, with a directory or partition as the storage space. The official recommendation is to use a dedicated disk or partition for the OSDs and their journals.
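Before deciding, you can list the disks ceph-deploy sees on each node; the disk list subcommand is standard, and the node names are this cluster's:

ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3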
1.2.5.1 Using a partition

1. Zap (format) the disks

ceph-deploy disk zap ceph-node1:/dev/sdb1 ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1

2. Prepare the OSDs

ceph-deploy osd prepare  ceph-node1:/dev/sdb1 ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1

3. Activate the OSDs

ceph-deploy osd activate ceph-node1:/dev/sdb1 ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1
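To confirm the three OSDs came up, you can check the OSD map; run this on a mon node, or on ceph-admin after section 1.2.6:

ceph osd tree
ceph osd stat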
1.2.5.2 Using a directory

1. Prepare the OSD directories on each node

ssh ceph-node1 "mkdir -p /data/osd0; chown -R ceph:ceph /data/osd0"
ssh ceph-node2 "mkdir -p /data/osd1; chown -R ceph:ceph /data/osd1"
ssh ceph-node3 "mkdir -p /data/osd2; chown -R ceph:ceph /data/osd2"

2. Prepare the OSDs

ceph-deploy osd prepare ceph-node1:/data/osd0 ceph-node2:/data/osd1 ceph-node3:/data/osd2

3. Activate the OSDs

ceph-deploy osd activate ceph-node1:/data/osd0 ceph-node2:/data/osd1 ceph-node3:/data/osd2
1.2.6 Grant admin privileges

Run on the manager node

ceph-deploy admin ceph-admin
# ceph -s

cluster 7e1daeea-417e-43e3-a2fe-56d9444f2fbf
health HEALTH_OK
monmap e1: 3 mons at {ceph-node1=192.168.16.221:6789/0,ceph-node2=192.168.16.222:6789/0,ceph-node3=192.168.16.223:6789/0}
      election epoch 4, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
osdmap e14: 3 osds: 3 up, 3 in
      flags sortbitwise,require_jewel_osds
pgmap v24: 64 pgs, 1 pools, 0 bytes data, 0 objects
      15460 MB used, 2742 GB / 2757 GB avail
            64 active+clean
# ceph health
HEALTH_OK
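If ceph -s fails on the admin node with a permission error on the keyring instead, a common fix is to make the keyring ceph-deploy pushed world-readable (path is the default ceph-deploy location):

chmod +r /etc/ceph/ceph.client.admin.keyring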
1.2.7 Create a pool

Create a pool

ceph osd pool create image 64 

Delete a pool

ceph osd pool delete rbd rbd --yes-i-really-really-mean-it  

A freshly built cluster has a default rbd pool, which you may consider deleting.
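To confirm the pool changes, you can list the pools and check the PG count of the new pool; osd lspools and osd pool get are standard commands, and image is the pool created above:

ceph osd lspools
ceph osd pool get image pg_num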

Create an image

rbd create test --size 1024 -p image 

Note: this creates an image; -p specifies the pool name, and --size is in MB.
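A quick way to verify the image and inspect its features; rbd ls and rbd info are standard commands, and image/test matches the pool and image created above:

rbd ls -p image
rbd info image/test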

1.3 Common Operations

1.3.1 ceph reset

If anything goes wrong partway through the installation, you can tear everything down and start over:

ceph-deploy purge node1 node2 ...
ceph-deploy purgedata node1 node2 ...
ceph-deploy forgetkeys
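For this cluster, a full teardown would look like the sketch below, assuming all four hosts should be wiped (purge removes the packages, purgedata removes the data and configuration):

ceph-deploy purge ceph-admin ceph-node1 ceph-node2 ceph-node3
ceph-deploy purgedata ceph-admin ceph-node1 ceph-node2 ceph-node3
ceph-deploy forgetkeys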
1.3.2 Common commands
rados lspools    list the pools

ceph -s or ceph status    show the cluster status

ceph -w    watch the cluster health in real time

ceph quorum_status --format json-pretty    check the Ceph monitor quorum status

ceph df    check cluster usage

ceph mon stat    check monitor status

ceph osd stat    check OSD status

ceph pg stat    check placement group (PG) status

ceph pg dump    list the PGs

ceph osd lspools    list the storage pools

ceph osd tree    check the OSD CRUSH map

ceph auth list    list the cluster's authentication keys

ceph    get the number of PGs on each OSD
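For the last item, one way to see how many PGs each OSD holds is the PGS column of ceph osd df; treat this as a suggestion, assuming the subcommand is available in your Ceph release:

ceph osd df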