Quickly Deploying a Ceph Environment with ceph-deploy

Date: 2022-04-09 03:07:49

Ceph installation with ceph-deploy

Based on the official community installation guide:
Ceph Quick Installation

Local environment:

Architecture installed in this guide:
ceph-deploy: 1
MON: 1
OSD: 2

CentOS 7:
ceph-deploy + monitor (ceph1): 192.168.122.18 / 172.16.34.253
osd (ceph2): 192.168.122.38 / 172.16.34.184
osd (ceph3): 192.168.122.158 / 172.16.34.116

Installation and configuration common to all nodes

Set a network proxy (optional)

Access to overseas sites is usually slow, so setting a proxy is recommended.
vim /etc/environment
export http_proxy=xxx
export https_proxy=xxx
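Note that yum itself does not read /etc/environment; if package installs still fail behind the proxy, the proxy can also be set in yum's own configuration (the proxy URL below is a placeholder, not from this setup):

```ini
# /etc/yum.conf — add under the [main] section; URL is an example
proxy=http://proxy.example.com:8080
```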

Set hostnames and /etc/hosts

Set the hostname on each node:
hostnamectl set-hostname ceph1
hostnamectl set-hostname ceph2
hostnamectl set-hostname ceph3

Edit the configuration file on each node:
vim /etc/hosts
192.168.122.18 ceph1
192.168.122.38 ceph2
192.168.122.158 ceph3
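The /etc/hosts additions can also be scripted idempotently. The sketch below works on a scratch copy so it is safe to run anywhere; on a real node, point it at /etc/hosts instead:

```shell
# Demo on a scratch file; replace $HOSTS with /etc/hosts on a real node.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS"
for entry in "192.168.122.18 ceph1" "192.168.122.38 ceph2" "192.168.122.158 ceph3"; do
  name=${entry##* }                                  # hostname is the last field
  grep -qw "$name" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```

Because of the grep guard, running the loop twice does not duplicate entries.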

Confirm connectivity from each node:
ping -c 3 ceph1
ping -c 3 ceph2
ping -c 3 ceph3

Enable the NICs at boot

grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-xxx
ONBOOT=yes
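If ONBOOT is not yes, it can be flipped with sed. The sketch below runs against a scratch file standing in for the real config (on a node the file is /etc/sysconfig/network-scripts/ifcfg-<iface>):

```shell
# Demo on a scratch copy of an ifcfg file.
IFCFG=$(mktemp)
printf 'DEVICE=eth0\nBOOTPROTO=dhcp\nONBOOT=no\n' > "$IFCFG"
sed -i 's/^ONBOOT=.*/ONBOOT=yes/' "$IFCFG"   # bring the NIC up at boot
grep ONBOOT "$IFCFG"                          # → ONBOOT=yes
```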

Add EPEL and update RPMs

Add the EPEL repository:
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

Add the Ceph repository (this step is optional):
sudo vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Update the packages:
sudo yum update -y

Disable the firewall and SELinux

Disable the firewall (the simplest option), or set your own iptables rules:
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service

iptables rules can be set as described here:
http://docs.ceph.org.cn/start/quick-start-preflight/#id7

Disable SELinux: edit the config file (takes effect after reboot) plus set it manually (takes effect immediately):
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

Confirm the changes:
grep SELINUX= /etc/selinux/config
getenforce

After the steps above, rebooting all nodes is recommended.

Install and configure NTP

sudo yum install ntp ntpdate ntp-doc -y
systemctl restart ntpd
systemctl status ntpd

TODO: this test setup has only one MON node, so no NTP server was configured here.
The official recommendation is to install and configure NTP on every node in the cluster.

Install and configure SSH

sudo yum install openssh-server -y

Most Linux distributions ship with ssh and start the service by default:
systemctl status sshd

Installation on the ceph-deploy node

Install ceph-deploy:

sudo yum install ceph-deploy -y

Configure SSH (generate a key pair for passwordless access)

The official docs recommend creating a dedicated non-ceph user; for simplicity, root is used here.
(See the official docs: Create a Ceph Deploy User.)

Generate the key pair and copy the public key to each node:
ssh-keygen
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2
ssh-copy-id root@ceph3

Verify passwordless access:
ssh ceph1
exit
ssh ceph2
exit
ssh ceph3
exit
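The per-host login checks above can be batched; with BatchMode=yes, ssh fails immediately instead of prompting for a password when key authentication is broken. The loop below only echoes the commands as a dry run; drop the echo to actually run them:

```shell
for h in ceph1 ceph2 ceph3; do
  # echo makes this a dry run; remove it to actually test key-based login
  echo ssh -o BatchMode=yes "root@$h" hostname
done
```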

Edit the configuration file:
vim /root/.ssh/config
Host ceph1
    Hostname ceph1
    User root
Host ceph2
    Hostname ceph2
    User root
Host ceph3
    Hostname ceph3
    User root
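The three stanzas are repetitive, so they can also be generated with a loop (redirect the output to /root/.ssh/config):

```shell
# Emit one Host stanza per node; >> /root/.ssh/config to install them.
for h in ceph1 ceph2 ceph3; do
  printf 'Host %s\n    Hostname %s\n    User root\n' "$h" "$h"
done
```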

Installing the storage cluster

Create a working directory:
mkdir -p /root/my_cluster
cd /root/my_cluster

Create the cluster:
ceph-deploy new ceph1

Edit the configuration file:
vim ceph.conf
osd pool default size = 2
public network = 192.168.122.0/24
★ Note: mon_host must fall within the public network subnet! (192.168.122.18 is inside 192.168.122.0/24.)
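After these edits, ceph.conf looks roughly like the sketch below. The fsid and mon entries are generated by ceph-deploy new (the fsid here is the one from this walkthrough's monitor log); only the last two lines are added by hand:

```ini
[global]
fsid = 92da5066-e973-4e7e-8524-8dcbc948c93b   ; generated by ceph-deploy new
mon_initial_members = ceph1
mon_host = 192.168.122.18                     ; must lie inside public network
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2                     ; two OSDs, so two replicas
public network = 192.168.122.0/24
```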

Install Ceph
If your network is slow, it is better to install Ceph manually on each node first and run this command last;
otherwise it fails over and over, which is quite painful. See [Analysis of the ceph-deploy install process] for the per-node steps.
ceph-deploy install ceph1 ceph2 ceph3

Initialize the monitor:
ceph-deploy mon create-initial

The Ceph cluster is now created and the monitor is running; next, configure the OSDs.

Confirming the monitor started

Process check:
[root@ceph1 my_cluster]# ps -ef | grep ceph
root 21688 16109 0 08:50 pts/0 00:00:00 grep --color=auto ceph
ceph 29366 1 0 May17 ? 00:00:13 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph

systemctl check:
[root@ceph1 my_cluster]# systemctl status | grep ceph
● ceph1
│ └─21690 grep --color=auto ceph
├─system-ceph\x2dmon.slice
│ └─ceph-mon@ceph1.service
│ └─29366 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph

[root@ceph1 my_cluster]# systemctl status ceph-mon@ceph1.service
● ceph-mon@ceph1.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-05-17 16:50:48 CST; 16h ago
Main PID: 29366 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph1.service
└─29366 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
May 17 16:50:48 ceph1 systemd[1]: Started Ceph cluster monitor daemon.
May 17 16:50:48 ceph1 systemd[1]: Starting Ceph cluster monitor daemon...
May 17 16:50:48 ceph1 ceph-mon[29366]: starting mon.ceph1 rank 0 at 192.168.122.18:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph1 fsid 92da5066-e973-4e7e-8524-8dcbc948c93b

Analysis of the ceph-deploy install process

connected to host
installing Ceph
yum clean all
yum -y install epel-release
yum -y install yum-plugin-priorities
rpm --import https://download.ceph.com/keys/release.asc
rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
yum -y install ceph ceph-radosgw

For a manual installation, run everything on each node from "yum -y install epel-release" through the end of the list above.

Adding OSDs, method 1 (two raw disks)

ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb
ceph-deploy osd prepare ceph2:/dev/vdb ceph3:/dev/vdb

★ Note: "The prepare command only prepares the OSD. On most operating systems, once the disk partition is created, the activate phase runs automatically (via Ceph's udev rules) without the activate command." Quoted from:
http://docs.ceph.org.cn/rados/deployment/ceph-deploy-osd/

So the activate step below will report an error even though the OSD is in fact already active; the recommended approach at that point is to check on the OSD node whether the service is running. See [Confirming the OSDs started (similar to the monitor check)].

ceph-deploy osd activate ceph2:/dev/vdb ceph3:/dev/vdb

Push the configuration file:
ceph-deploy admin ceph1 ceph2 ceph3
ceph health

Adding OSDs, method 2 (two directories)

ssh ceph2
sudo mkdir /var/local/osd0
exit

ssh ceph3
sudo mkdir /var/local/osd1
exit

ceph-deploy osd prepare ceph2:/var/local/osd0 ceph3:/var/local/osd1
ceph-deploy osd activate ceph2:/var/local/osd0 ceph3:/var/local/osd1
ceph-deploy admin ceph1 ceph2 ceph3
ceph health

Confirming the OSDs started (similar to the monitor check)

[root@ceph2 ~]# ps -ef | grep ceph
root 15818 15802 0 09:17 pts/0 00:00:00 grep --color=auto ceph
ceph 24426 1 0 May17 ? 00:00:33 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

[root@ceph2 ~]# systemctl status | grep ceph
● ceph2
│ └─15822 grep --color=auto ceph
├─system-ceph\x2dosd.slice
│ └─ceph-osd@0.service
│ └─24426 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

[root@ceph2 ~]# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
Active: active (running) since Wed 2017-05-17 16:56:54 CST; 16h ago
Main PID: 24426 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
└─24426 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
May 17 16:56:53 ceph2 systemd[1]: Starting Ceph object storage daemon...
May 17 16:56:53 ceph2 ceph-osd-prestart.sh[24375]: create-or-move updating item name 'osd.0' weight 0.0146 at location {host=ceph2,root=default} to crush map
May 17 16:56:54 ceph2 systemd[1]: Started Ceph object storage daemon.
May 17 16:56:54 ceph2 ceph-osd[24426]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
May 17 16:56:54 ceph2 ceph-osd[24426]: 2017-05-17 16:56:54.080778 7f0727d3a800 -1 osd.0 0 log_to_monitors {default=true}

Cleanup

Remove the installed packages:
ceph-deploy purge ceph1 ceph2 ceph3

Remove configuration data:
ceph-deploy purgedata ceph1 ceph2 ceph3
ceph-deploy forgetkeys

Remove leftover configuration files on each node:
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-mon/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*
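The per-node cleanup can be driven from the admin node over ssh. The loop below only echoes each command as a dry run (remove the echo to actually execute; this assumes the passwordless root access configured earlier):

```shell
NODES="ceph1 ceph2 ceph3"
CLEAN='rm -rf /var/lib/ceph/{osd,mon,mds,bootstrap-mds,bootstrap-osd,bootstrap-mon,tmp}/* /etc/ceph/* /var/run/ceph/*'
for n in $NODES; do
  # dry run: prints the command instead of running it
  echo "ssh root@$n '$CLEAN'"
done
```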