Ceph Distributed Storage: Manual Deployment Tutorial

Posted: 2023-01-13 12:36:11

I. Basic environment preparation

1. Cluster device list

   10.240.240.210  client 

   10.240.240.211  node1  mon1   osd.0

   10.240.240.212  node2  mon2   osd.1

   10.240.240.213  node3  mon3   osd.2

2. System environment

[root@client ~]# cat /etc/redhat-release 

Red Hat Enterprise Linux Server release 6.5 (Santiago)

[root@client ~]# uname -r

2.6.32-431.el6.x86_64

3. Remove the yum packages bundled with Red Hat (using Red Hat's own yum repositories requires a paid subscription, which is costly and time-consuming, so remove Red Hat's yum packages and install the CentOS ones instead)

rpm -qa | grep yum | xargs rpm -e --nodeps

4. Install the CentOS yum packages

The relevant packages can be downloaded from http://mirrors.163.com/centos or from http://pan.baidu.com/s/1qW0MbgC

rpm -ivh python-iniparse-0.3.1-2.1.el6.noarch.rpm

rpm -ivh yum-metadata-parser-1.1.2-16.el6.x86_64.rpm

rpm -ivh yum-3.2.29-40.el6.centos.noarch.rpm yum-plugin-fastestmirror-1.1.30-14.el6.noarch.rpm

Then create your own yum repository file:

[root@node1 ~]# vi /etc/yum.repos.d/my.repo


[base]

name=CentOS-6 - Base - 163.com

baseurl=http://mirrors.163.com/centos/6/os/$basearch/

#mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=os

gpgcheck=1

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

 

#released updates

[updates]

name=CentOS-6 - Updates - 163.com

baseurl=http://mirrors.163.com/centos/6/updates/$basearch/

#mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=updates

gpgcheck=1

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

 

#additional packages that may be useful

[extras]

name=CentOS-6 - Extras - 163.com

baseurl=http://mirrors.163.com/centos/6/extras/$basearch/

#mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=extras

gpgcheck=1

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

 

#additional packages that extend functionality of existing packages

[centosplus]

name=CentOS-6 - Plus - 163.com

baseurl=http://mirrors.163.com/centos/6/centosplus/$basearch/

#mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=centosplus

gpgcheck=1

enabled=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6


#contrib - packages by Centos Users

[contrib]

name=CentOS-6 - Contrib - 163.com

baseurl=http://mirrors.163.com/centos/6/contrib/$basearch/

#mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=contrib

gpgcheck=1

enabled=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

5. Refresh the yum repositories

yum clean all

yum update -y
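
Before moving on, it is worth confirming that yum is now pulling from the new repositories; a quick check (using the repo IDs defined above) is:

yum repolist enabled        # should list base, updates and extras served from mirrors.163.com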



II. Configure all of the Ceph yum repositories on every node

1. Import the package signing keys

(1) release.asc key

rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

(2) autobuild.asc key

rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'

2. Add the Ceph extras repository, ceph-extras.repo, setting priority=2 so that its newer packages (such as qemu) take priority over the standard ones.

vi /etc/yum.repos.d/ceph-extras.repo

[ceph-extras-source]

name=Ceph Extras Sources

baseurl=http://ceph.com/packages/ceph-extras/rpm/rhel6.5/SRPMS

enabled=1

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
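
Note that the priority= setting only takes effect when the yum priorities plugin is installed; it is installed again in section III below, but pulling it in now (a harmless extra step) ensures the priorities are honored from the start:

yum install -y yum-plugin-priorities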

3. Add the Ceph repository

vi /etc/yum.repos.d/ceph.repo 

[ceph]

name=Ceph packages for $basearch

baseurl=http://ceph.com/rpm/rhel6/$basearch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc


[ceph-noarch]

name=Ceph noarch packages

baseurl=http://ceph.com/rpm/rhel6/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc


[ceph-source]

name=Ceph source packages

baseurl=http://ceph.com/rpm/rhel6/SRPMS

enabled=0

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

4. Add the Ceph Apache yum repository

vi /etc/yum.repos.d/ceph-apache.repo 

[apache2-ceph-noarch]

name=Apache noarch packages for Ceph

baseurl=http://gitbuilder.ceph.com/apache2-rpm-rhel6-x86_64-basic/ref/master/

enabled=1

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc


[apache2-ceph-source]

name=Apache source packages for Ceph

baseurl=http://gitbuilder.ceph.com/apache2-rpm-rhel6-x86_64-basic/ref/master/

enabled=0

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

5. Add the Ceph FastCGI yum repository

vi /etc/yum.repos.d/ceph-fastcgi.repo 

[fastcgi-ceph-basearch]

name=FastCGI basearch packages for Ceph

baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-rhel6-x86_64-basic/ref/master/

enabled=1

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc


[fastcgi-ceph-noarch]

name=FastCGI noarch packages for Ceph

baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-rhel6-x86_64-basic/ref/master/

enabled=1

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc


[fastcgi-ceph-source]

name=FastCGI source packages for Ceph

baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-rhel6-x86_64-basic/ref/master/

enabled=0

priority=2

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

6. Install the EPEL yum repository

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm  (this mirror produced errors when refreshing the yum repositories)

or

rpm -Uvh http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm  (recommended inside China; downloads are much faster)
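
Either URL installs the same epel-release-6-8 package. To confirm the repository is active afterwards, something like the following can be used:

yum repolist | grep -i epel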

III. Install the Ceph packages on every node

1. Install the Ceph deployment tool

yum install -y ceph-deploy

2. Install the Ceph storage cluster packages

(1) Install the Ceph prerequisite packages

yum install -y snappy leveldb gdisk python-argparse gperftools-libs

(2) Install the Ceph software

yum install -y ceph

3. Install the Ceph object gateway

(1) Installing Apache and FastCGI requires yum install httpd mod_fastcgi; before installing them, run the following

yum install -y yum-plugin-priorities

yum update 

Then install Apache and FastCGI:

yum install -y httpd mod_fastcgi

(2) Edit the httpd.conf configuration file

vim /etc/httpd/conf/httpd.conf

ServerName node1
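
Before restarting httpd in the next step, a quick syntax check catches typos in the edited file (httpd ships with a built-in config test):

httpd -t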

(3) Start the httpd service

/etc/init.d/httpd restart

(4) Install SSL (this step produced errors in my environment)

yum install -y mod_ssl openssl   

openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

cp ca.crt /etc/pki/tls/certs

cp ca.key /etc/pki/tls/private/ca.key

cp ca.csr /etc/pki/tls/private/ca.csr

/etc/init.d/httpd restart
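
The openssl commands above assume that ca.key and ca.csr already exist. If they do not, one way to generate them first (the key size and subject below are only examples) is:

openssl genrsa -out ca.key 2048
openssl req -new -key ca.key -out ca.csr -subj '/CN=node1'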

(5) Add a wildcard to DNS (the DNS address must also be specified in the Ceph configuration file with the rgw dns name = {hostname} setting). The line below is a dnsmasq wildcard entry:

address=/.ceph-node/192.168.0.1
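
The address= line above is dnsmasq syntax that resolves any name ending in .ceph-node to 192.168.0.1. The matching ceph.conf entry for the gateway (the section name below is only illustrative; it depends on how the gateway instance is named) would then look like:

[client.radosgw.gateway]
rgw dns name = ceph-node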

(6) Install the Ceph object gateway

 yum install -y ceph-radosgw ceph

(7) Install the Ceph object gateway agent

yum install -y radosgw-agent

4. Install virtualization software for block storage

(1) If qemu packages are already installed, remove them first so that the versions installed next are complete and consistent

yum remove -y qemu-kvm qemu-kvm-tools qemu-img

(2) Reinstall qemu after the removal

yum install -y qemu-kvm qemu-kvm-tools qemu-img

(3) Install the qemu guest agent

yum install -y qemu-guest-agent 

(4) Install the libvirt package

yum install -y libvirt

(5) Install the software and modules that Ceph depends on, on every node

yum install *argparse* -y

yum install redhat-lsb -y

yum install xfs* -y


IV. Build the Ceph cluster  (this step can also be done with the quick-deployment script in the appendix)

Set up the first mon node

1. Log in to the monitor node node1

ls /etc/ceph      # check whether the ceph configuration directory already contains anything

2. Create the ceph configuration file and fill in its contents

touch /etc/ceph/ceph.conf  # create an empty ceph configuration file

[root@client ~]# uuidgen      # this command produces a unique identifier to use as the ceph cluster ID

f11240d4-86b1-49ba-aacc-6d3d37b24cc4

fsid = f11240d4-86b1-49ba-aacc-6d3d37b24cc4  # the identifier obtained above; add this line to the ceph configuration file

mon initial members = node1,node2,node3    # node1, node2 and node3 act as the cluster's monitor nodes; add this line to the ceph configuration file

mon host = 10.240.240.211,10.240.240.212,10.240.240.213   # the addresses of the monitor nodes; add this line to the ceph configuration file

Edit the ceph configuration file so that it contains the following:

vi /etc/ceph/ceph.conf

[global]

fsid = f11240d4-86b1-49ba-aacc-6d3d37b24cc4

mon initial members = node1,node2,node3

mon host = 10.240.240.211,10.240.240.212,10.240.240.213

public network = 10.240.240.0/24

auth cluster required = cephx

auth service required = cephx

auth client required = cephx

osd journal size = 1024

filestore xattr use omap = true

osd pool default size = 3

osd pool default min size = 1

osd crush chooseleaf type = 1

osd_mkfs_type = xfs

max mds = 5

mds max file size = 100000000000000

mds cache size = 1000000

mon osd down out interval = 900         # after an osd has been down for 900 seconds, mark it out of the cluster and remap its data to the other nodes

cluster_network = 10.39.102.0/24

[mon]

mon clock drift allowed = .50           # allow 0.5s of clock drift between monitors (the default is 0.05s); the heterogeneous PCs in this cluster always drift by more than 0.05s, so the tolerance is raised to 0.5s to make synchronization easier

3. Create the keys on node1

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'   # create the monitor keyring and key

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'   # create the client.admin key for administering the cluster and grant it its capabilities

ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring     # add the client.admin key to ceph.mon.keyring
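
To verify that both keys are now present in the mon keyring before it is used below, ceph-authtool can list the keyring contents:

ceph-authtool -l /tmp/ceph.mon.keyring    # should show mon. and client.admin entries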

4. Create a mon data directory on the node1 monitor node

mkdir -p /var/lib/ceph/mon/ceph-node1

5. On node1, create the bootstrap key used to bring up OSDs

mkdir -p /var/lib/ceph/bootstrap-osd/

ceph-authtool -C /var/lib/ceph/bootstrap-osd/ceph.keyring

6. Initialize the mon on node1 with the following command

ceph-mon --mkfs -i node1 --keyring /tmp/ceph.mon.keyring

7. Create an empty done file to prevent the mon from being re-created

touch /var/lib/ceph/mon/ceph-node1/done

8. Create an empty sysvinit marker file

touch /var/lib/ceph/mon/ceph-node1/sysvinit

9. Start the ceph daemon

/sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

10. Check the mon status through its admin socket (asok)

[root@node1 ~]# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
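
The mon_status output should show node1 in the monmap. Once the other two monitors have been brought up in the following sections, quorum can also be checked cluster-wide with:

ceph quorum_status --format json-pretty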


Set up the second mon node

1. Copy node1's /etc/ceph directory to node2

scp /etc/ceph/* node2:/etc/ceph/

2. Create a /var/lib/ceph/bootstrap-osd/ directory on node2

mkdir /var/lib/ceph/bootstrap-osd/

3. Copy node1's /var/lib/ceph/bootstrap-osd/ceph.keyring file to node2

scp /var/lib/ceph/bootstrap-osd/ceph.keyring node2:/var/lib/ceph/bootstrap-osd/

4. Copy node1's /tmp/ceph.mon.keyring to node2

scp /tmp/ceph.mon.keyring node2:/tmp/

5. Create a /var/lib/ceph/mon/ceph-node2 directory on node2

mkdir -p /var/lib/ceph/mon/ceph-node2

6. Initialize the mon on node2 with the following command

ceph-mon --mkfs -i node2  --keyring /tmp/ceph.mon.keyring

7. Create an empty done file to prevent the mon from being re-created

touch /var/lib/ceph/mon/ceph-node2/done

8. Create an empty sysvinit marker file

touch /var/lib/ceph/mon/ceph-node2/sysvinit

9. Start the ceph daemon

/sbin/service ceph -c /etc/ceph/ceph.conf start mon.node2


Set up the third mon node

1. Copy node1's /etc/ceph directory to node3

scp /etc/ceph/* node3:/etc/ceph/

2. Create a /var/lib/ceph/bootstrap-osd/ directory on node3

mkdir /var/lib/ceph/bootstrap-osd/

3. Copy node1's /var/lib/ceph/bootstrap-osd/ceph.keyring file to node3

scp /var/lib/ceph/bootstrap-osd/ceph.keyring node3:/var/lib/ceph/bootstrap-osd/

4. Copy node1's /tmp/ceph.mon.keyring to node3

scp /tmp/ceph.mon.keyring node3:/tmp/

5. Create a /var/lib/ceph/mon/ceph-node3 directory on node3

mkdir -p /var/lib/ceph/mon/ceph-node3

6. Initialize the mon on node3 with the following command

ceph-mon --mkfs -i node3  --keyring /tmp/ceph.mon.keyring

7. Create an empty done file to prevent the mon from being re-created

touch /var/lib/ceph/mon/ceph-node3/done

8. Create an empty sysvinit marker file

touch /var/lib/ceph/mon/ceph-node3/sysvinit

9. Start the ceph daemon

/sbin/service ceph -c /etc/ceph/ceph.conf start mon.node3





1. Check the cluster status

[root@node1 ~]# ceph -w

    cluster f11240d4-86b1-49ba-aacc-6d3d37b24cc4

     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds

     monmap e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 8, quorum 0,1,2 node1,node2,node3

     osdmap e1: 0 osds: 0 up, 0 in

      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects

            0 kB used, 0 kB / 0 kB avail

                 192 creating


2. List the Ceph pools

ceph osd lspools
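
On a freshly built cluster of this Ceph generation, lspools typically returns the three default pools, e.g.:

0 data,1 metadata,2 rbd,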





Add the OSD nodes


Add the first OSD node

1. Create an OSD and obtain an osd number

[root@node1 ~]# ceph osd create

0

2. Create an osd data directory for this OSD

[root@node1 ~]#  mkdir -p /var/lib/ceph/osd/ceph-0

3. Format the prepared osd disk (as xfs)

[root@node1 ~]# mkfs.xfs -f /dev/sdb

meta-data=/dev/sdb               isize=256    agcount=4, agsize=1310720 blks

         =                       sectsz=512   attr=2, projid32bit=0

data     =                       bsize=4096   blocks=5242880, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0

log      =internal log           bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0


4. Mount the directory

[root@node1 ~]# mount -o user_xattr /dev/sdb /var/lib/ceph/osd/ceph-0

mount: wrong fs type, bad option, bad superblock on /dev/sdb,

       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try

       dmesg | tail  or so

The command above fails with the error shown.

The workaround is to replace that single command with the following two commands.

[root@node1 ~]# mount /dev/sdb /var/lib/ceph/osd/ceph-0

[root@node1 ~]# mount -o remount,user_xattr /var/lib/ceph/osd/ceph-0

Check the mount:

[root@node1 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)

/dev/sdb on /var/lib/ceph/osd/ceph-0 type xfs (rw,user_xattr)


Write the mount information into /etc/fstab

[root@node1 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-0  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-0  xfs   remount,user_xattr  0 0
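
To confirm that extended attributes really work on the mounted filesystem (which is what the user_xattr remount is for), a quick test with the attr tools (assuming the attr package is installed) is:

touch /var/lib/ceph/osd/ceph-0/xattr-test
setfattr -n user.test -v 1 /var/lib/ceph/osd/ceph-0/xattr-test
getfattr -n user.test /var/lib/ceph/osd/ceph-0/xattr-test    # should print user.test="1"
rm -f /var/lib/ceph/osd/ceph-0/xattr-test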


5. Initialize the osd data directory

[root@node1 ~]# ceph-osd -i 0 --mkfs --mkkey 

6. Register the osd authentication key

[root@node1 ~]# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring

7. Add a CRUSH bucket for this osd's host

[root@node1 ~]# ceph osd crush add-bucket node1 host

added bucket node1 type host to crush map

8. Place the Ceph node under the root default

[root@node1 ~]# ceph osd crush move node1 root=default

moved item id -2 name 'node1' to location {root=default} in crush map

9. Add osd.0 to the CRUSH map under host node1 with a weight of 1.0

[root@node1 ~]# ceph osd crush add osd.0 1.0 host=node1

add item id 0 name 'osd.0' weight 1 at location {host=node1} to crush map

10. Create an empty sysvinit marker file

[root@node1 ~]# touch /var/lib/ceph/osd/ceph-0/sysvinit

11. Start the osd daemon

/etc/init.d/ceph start osd.0

12. View the osd tree

[root@node1 ~]# ceph osd tree

# id    weight  type name       up/down reweight

-1      1       root default

-2      1               host node1

0       1                       osd.0   up      1




Add the second OSD node





1. Create an OSD and obtain an osd number

[root@node2 ~]# ceph osd create

1

2. Create an osd data directory for this OSD

[root@node2 ~]# mkdir -p /var/lib/ceph/osd/ceph-1

3. Format the prepared osd disk (as xfs) and mount it on the osd directory created in the previous step

[root@node2 ~]# mkfs.xfs -f /dev/sdb

meta-data=/dev/sdb               isize=256    agcount=4, agsize=1310720 blks

         =                       sectsz=512   attr=2, projid32bit=0

data     =                       bsize=4096   blocks=5242880, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0

log      =internal log           bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0


4. Mount the directory

[root@node2 ~]# mount -o user_xattr /dev/sdb /var/lib/ceph/osd/ceph-1

mount: wrong fs type, bad option, bad superblock on /dev/sdb,

       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try

       dmesg | tail  or so

The command above fails with the error shown.

The workaround is to replace that single command with the following two commands.

[root@node2 ~]# mount /dev/sdb /var/lib/ceph/osd/ceph-1

[root@node2 ~]# mount -o remount,user_xattr /var/lib/ceph/osd/ceph-1

Check the mount:

[root@node2 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)

/dev/sdb on /var/lib/ceph/osd/ceph-1 type xfs (rw,user_xattr)


Write the mount information into /etc/fstab

[root@node2 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-1  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-1  xfs   remount,user_xattr  0 0


5. Initialize the osd data directory

[root@node2 ~]# ceph-osd -i 1 --mkfs --mkkey 

2014-06-25 23:17:37.633040 7fa8fd06b7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2014-06-25 23:17:37.740713 7fa8fd06b7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2014-06-25 23:17:37.744937 7fa8fd06b7a0 -1 filestore(/var/lib/ceph/osd/ceph-1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

2014-06-25 23:17:37.812999 7fa8fd06b7a0 -1 created object store /var/lib/ceph/osd/ceph-1 journal /var/lib/ceph/osd/ceph-1/journal for osd.1 fsid f11240d4-86b1-49ba-aacc-6d3d37b24cc4

2014-06-25 23:17:37.813192 7fa8fd06b7a0 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory

2014-06-25 23:17:37.814050 7fa8fd06b7a0 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring

6. Register the osd authentication key

[root@node2 ~]# ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring

added key for osd.1

7. Add a CRUSH bucket for this osd's host

[root@node2 ~]# ceph osd crush add-bucket node2 host

added bucket node2 type host to crush map

8. Place the Ceph node under the root default

[root@node2 ~]# ceph osd crush move node2 root=default

moved item id -3 name 'node2' to location {root=default} in crush map

9. Add osd.1 to the CRUSH map under host node2 with a weight of 1.0

[root@node2 ~]#  ceph osd crush add osd.1 1.0 host=node2

add item id 1 name 'osd.1' weight 1 at location {host=node2} to crush map

10. Create an empty sysvinit marker file

[root@node2 ~]#  touch /var/lib/ceph/osd/ceph-1/sysvinit

11. Start the osd daemon

[root@node2 ~]# /etc/init.d/ceph start osd.1

=== osd.1 === 

create-or-move updated item name 'osd.1' weight 0.02 at location {host=node2,root=default} to crush map

Starting Ceph osd.1 on node2...

starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal

12. View the osd tree

[root@node2 ~]# ceph osd tree

# id    weight  type name       up/down reweight

-1      2       root default

-2      1               host node1

0       1                       osd.0   up      1

-3      1               host node2

1       1                       osd.1   up      1



Add the third OSD node


1. Create an OSD and obtain an osd number

[root@node3 ~]# ceph osd create

2

2. Create an osd data directory for this OSD

[root@node3 ~]# mkdir -p /var/lib/ceph/osd/ceph-2

3. Format the prepared osd disk (as xfs)

[root@node3 ~]# mkfs.xfs -f /dev/sdb

meta-data=/dev/sdb               isize=256    agcount=4, agsize=1310720 blks

         =                       sectsz=512   attr=2, projid32bit=0

data     =                       bsize=4096   blocks=5242880, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0

log      =internal log           bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0


4. Mount the directory

[root@node3 ~]# mount -o user_xattr /dev/sdb /var/lib/ceph/osd/ceph-2

mount: wrong fs type, bad option, bad superblock on /dev/sdb,

       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try

       dmesg | tail  or so

The command above fails with the error shown.

The workaround is to replace that single command with the following two commands.

[root@node3 ~]# mount /dev/sdb /var/lib/ceph/osd/ceph-2

[root@node3 ~]# mount -o remount,user_xattr /var/lib/ceph/osd/ceph-2

Check the mount:

[root@node3 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)

/dev/sdb on /var/lib/ceph/osd/ceph-2 type xfs (rw,user_xattr)


Write the mount information into /etc/fstab

[root@node3 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-2  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-2  xfs   remount,user_xattr  0 0


5. Initialize the osd data directory

[root@node3 ~]#  ceph-osd -i 2 --mkfs --mkkey 

2014-06-25 23:29:01.734251 7f52915927a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2014-06-25 23:29:01.849158 7f52915927a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

2014-06-25 23:29:01.852189 7f52915927a0 -1 filestore(/var/lib/ceph/osd/ceph-2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

2014-06-25 23:29:01.904476 7f52915927a0 -1 created object store /var/lib/ceph/osd/ceph-2 journal /var/lib/ceph/osd/ceph-2/journal for osd.2 fsid f11240d4-86b1-49ba-aacc-6d3d37b24cc4

2014-06-25 23:29:01.904712 7f52915927a0 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory

2014-06-25 23:29:01.905376 7f52915927a0 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring

[root@node3 ~]# 

6. Register the osd authentication key

[root@node3 ~]# ceph auth add osd.2 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-2/keyring

added key for osd.2

7. Add a CRUSH bucket for this osd's host

[root@node3 ~]# ceph osd crush add-bucket node3 host

added bucket node3 type host to crush map

8. Place the Ceph node under the root default

[root@node3 ~]# ceph osd crush move node3 root=default

moved item id -4 name 'node3' to location {root=default} in crush map

9. Add osd.2 to the CRUSH map under host node3 with a weight of 1.0

[root@node3 ~]# ceph osd crush add osd.2 1.0 host=node3

add item id 2 name 'osd.2' weight 1 at location {host=node3} to crush map

10. Create an empty sysvinit marker file

[root@node3 ~]# touch /var/lib/ceph/osd/ceph-2/sysvinit

11. Start the osd daemon

[root@node3 ~]# /etc/init.d/ceph start osd.2

=== osd.2 === 

create-or-move updated item name 'osd.2' weight 0.02 at location {host=node3,root=default} to crush map

Starting Ceph osd.2 on node3...

starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal

12. View the osd tree

[root@node3 ~]# ceph osd tree

# id    weight  type name       up/down reweight

-1      3       root default

-2      1               host node1

0       1                       osd.0   up      1

-3      1               host node2

1       1                       osd.1   up      1

-4      1               host node3

2       1                       osd.2   up      1


Add the fourth OSD node


1. Create an OSD and obtain an osd number

ceph osd create

2. Create an osd data directory for this OSD

mkdir -p /var/lib/ceph/osd/ceph-3

3. Format the prepared osd disk (as xfs)

mkfs.xfs -f /dev/sdb

4. Mount the directory

mount /dev/sdb /var/lib/ceph/osd/ceph-3

mount -o remount,user_xattr /var/lib/ceph/osd/ceph-3


Write the mount information into /etc/fstab

vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-3  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-3  xfs   remount,user_xattr  0 0


5. Initialize the osd data directory

 ceph-osd -i 3 --mkfs --mkkey 

6. Register the osd authentication key

ceph auth add osd.3 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-3/keyring

7. Add a CRUSH bucket for this osd's host

ceph osd crush add-bucket node4 host

8. Place the Ceph node under the root default

[root@node4 ~]# ceph osd crush move node4 root=default

9. Add osd.3 to the CRUSH map under host node4 with a weight of 1.0

[root@node4 ~]# ceph osd crush add osd.3 1.0 host=node4

10. Create an empty sysvinit marker file

[root@node4 ~]# touch /var/lib/ceph/osd/ceph-3/sysvinit

11. Start the osd daemon

[root@node4 ~]# /etc/init.d/ceph start osd.3



Add the metadata servers


Add the first metadata server

1. Create a directory for the mds metadata server

[root@node1 ~]# mkdir -p /var/lib/ceph/mds/ceph-node1

2. Create a key for the bootstrap-mds client  (note: if the key below has already been generated in that directory, this step can be skipped)

[root@node1 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring --gen-key -n client.bootstrap-mds

3. Create the bootstrap-mds client in the ceph auth database, grant it its capabilities and add the key created above  (note: check the ceph auth list output first; if the client.bootstrap-mds user already exists, this step can be skipped)

[root@node1 ~]# ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring

added key for client.bootstrap-mds

4. Create the ceph.bootstrap-mds.keyring file in root's home directory

touch /root/ceph.bootstrap-mds.keyring

5. Import the key from /var/lib/ceph/bootstrap-mds/ceph.keyring into the ceph.bootstrap-mds.keyring file in the home directory

ceph-authtool --import-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring ceph.bootstrap-mds.keyring 

6. Create the mds.node1 user in the ceph auth database, grant it its capabilities and create its key; the key is saved in /var/lib/ceph/mds/ceph-node1/keyring

ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node1 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node1/keyring

7. Create an empty sysvinit file that the mds uses at startup

[root@node1 ~]# touch /var/lib/ceph/mds/ceph-node1/sysvinit

8. Create an empty done file to prevent re-creation

[root@node1 ~]# touch /var/lib/ceph/mds/ceph-node1/done

9. Start the mds service

[root@node1 ~]# service ceph start mds.node1

=== mds.node1 === 

Starting Ceph mds.node1 on node1...

starting mds.node1 at :/0
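
A quick way to confirm the MDS has registered with the monitors:

ceph mds stat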


Add the second metadata server

1. Create a directory for the mds metadata server on node2

[root@node2 ~]# mkdir -p /var/lib/ceph/mds/ceph-node2

2. Create a bootstrap-mds directory on node2

[root@node2 ~]# mkdir -p /var/lib/ceph/bootstrap-mds/

3. From node1, copy the /var/lib/ceph/bootstrap-mds/ceph.keyring and /root/ceph.bootstrap-mds.keyring files to node2

[root@node1 ~]# scp /var/lib/ceph/bootstrap-mds/ceph.keyring node2:/var/lib/ceph/bootstrap-mds/

[root@node1 ~]# scp /root/ceph.bootstrap-mds.keyring node2:/root/

4. From node1, copy the contents of /var/lib/ceph/mds/ceph-node1/ to node2

[root@node1 ~]# scp /var/lib/ceph/mds/ceph-node1/sysvinit node2:/var/lib/ceph/mds/ceph-node2/

5. Create the mds.node2 user in the ceph auth database, grant it its capabilities and create its key; the key is saved in /var/lib/ceph/mds/ceph-node2/keyring

[root@node2 ~]# ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node2 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node2/keyring

6. Create an empty done file to prevent re-creation

[root@node2 ~]# touch /var/lib/ceph/mds/ceph-node2/done

7. Start the mds service

[root@node2 ~]# service ceph start mds.node2


Add the third metadata server

1. Create a directory for the mds metadata server on node3

[root@node3 ~]# mkdir -p /var/lib/ceph/mds/ceph-node3

2. Create a bootstrap-mds directory on node3

[root@node3 ~]# mkdir -p /var/lib/ceph/bootstrap-mds/

3. From node1, copy the /var/lib/ceph/bootstrap-mds/ceph.keyring and /root/ceph.bootstrap-mds.keyring files to node3

[root@node1 ~]# scp /var/lib/ceph/bootstrap-mds/ceph.keyring node3:/var/lib/ceph/bootstrap-mds/

[root@node1 ~]# scp /root/ceph.bootstrap-mds.keyring node3:/root/

4. From node1, copy the contents of /var/lib/ceph/mds/ceph-node1/ to node3

[root@node1 ~]# scp /var/lib/ceph/mds/ceph-node1/sysvinit node3:/var/lib/ceph/mds/ceph-node3/

5. Create the mds.node3 user in the ceph auth database, grant it its capabilities and create its key; the key is saved in /var/lib/ceph/mds/ceph-node3/keyring

[root@node3 ~]# ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node3 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node3/keyring

6. Create an empty done file to prevent re-creation

[root@node3 ~]# touch /var/lib/ceph/mds/ceph-node3/done

7. Start the mds service

[root@node3 ~]# service ceph start mds.node3

Finally, check the cluster status again

[root@node1 ~]# ceph -w

    cluster f11240d4-86b1-49ba-aacc-6d3d37b24cc4

     health HEALTH_OK

     monmap e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 8, quorum 0,1,2 node1,node2,node3

     osdmap e23: 3 osds: 3 up, 3 in

      pgmap v47: 192 pgs, 3 pools, 0 bytes data, 0 objects

            3175 MB used, 58234 MB / 61410 MB avail

                 192 active+clean


2014-06-25 23:32:48.340284 mon.0 [INF] pgmap v47: 192 pgs: 192 active+clean; 0 bytes data, 3175 MB used, 58234 MB / 61410 MB avail


V. Install the client and mount storage via RBD and CephFS


Install the packages

1. Install the packages

[root@ceph-client ceph]#yum install -y ceph 

2. Upgrade the system kernel

Kernels older than 2.6.34 do not include the rbd module, so upgrade the system kernel to the latest version

rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install kernel-ml 

After installing the kernel, edit the /etc/grub.conf configuration file so that the new kernel is used after reboot

Change default=1 to default=0 in that file
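
After rebooting into the new kernel, you can verify that the rbd module is available before attempting any mapping:

uname -r            # should show the newly installed kernel
modprobe rbd
lsmod | grep rbd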



Mounting via RBD

1. Create a new ceph pool

[root@client ~]# ceph osd pool create jiayuan 256
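
The 256 here is the pool's placement-group count. A common rule of thumb is (number of OSDs x 100) / replica count, rounded up to a power of two, which for 3 OSDs and 3 replicas lands in the 128-256 range. The pgp_num can also be passed explicitly as a second argument:

ceph osd pool create jiayuan 256 256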

2. Create an image in the pool

[root@client ~]# rbd create test-1 --size 40960 -p jiayuan

3. Map the image to a block device on the client

[root@client ~]# rbd map test-1 -p jiayuan

4. View the image mappings

[root@client ~]# rbd showmapped

id pool    image  snap device    

0  jiayuan test-1 -    /dev/rbd0 

5. Format the mapped block device

[root@client ~]# mkfs.ext4 -m0 /dev/rbd0

6. Mount the new filesystem

[root@client ~]#  mkdir /mnt/ceph-rbd-test-1

[root@client ~]# mount /dev/rbd0 /mnt/ceph-rbd-test-1/

[root@client ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda2        19G  3.0G   15G  17% /

tmpfs           242M   72K  242M   1% /dev/shm

/dev/sda1       283M   76M  188M  29% /boot

/dev/rbd0        40G   48M   40G   1% /mnt/ceph-rbd-test-1

7. Enter the new filesystem and run a dd performance test (an example dd command follows below)

[root@client ~]# cd /mnt/ceph-rbd-test-1/
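
The dd command itself is not shown in the original write-up; a typical sequential-write test (the sizes are only an example) would be:

dd if=/dev/zero of=/mnt/ceph-rbd-test-1/ddtest bs=1M count=1024 oflag=direct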


Mounting CephFS

1. Create a data directory and mount the ceph filesystem on it.

[root@ceph-client ~]# mkdir /mnt/mycephfs

[root@ceph-client ~]# mount  -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -v -o name=admin,secret=AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==   

10.240.240.211:6789:/ on /mnt/mycephfs type ceph (rw,name=admin,secret=AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==)  


Or mount with the following command

[root@ceph-client ~]# mount  -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -v -o name=admin,secretfile=/etc/ceph/ceph.client.admin.keyring 


# The name and secret parameter values in the commands above come from the keyring on the monitor:

[root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring 

[client.admin]

        key = AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==
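
Note that, unlike secret=, the secretfile= option expects a file containing only the bare key rather than the full keyring shown above; one way to produce such a file (the path /etc/ceph/admin.secret is just an example) is:

awk '/key =/ {print $3}' /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
mount -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -v -o name=admin,secretfile=/etc/ceph/admin.secret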

2. If there are multiple mon nodes, the mount can list several of them, which improves CephFS availability: if one node goes down, writing data is not affected

[root@client ~]# mount.ceph node1,node2,node3:/ /mnt/mycephfs -v -o name=admin,secret=AQDvxaxTaG4uBRAA9fKTwV8iqPjm/K+B4+qpEw==

parsing options: name=admin,secret=AQDvxaxTaG4uBRAA9fKTwV8iqPjm/K+B4+qpEw==

[root@client ~]# 

[root@client ~]# 

[root@client ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2              19G  3.0G   15G  17% /

tmpfs                 242M   72K  242M   1% /dev/shm

/dev/sda1             283M   76M  188M  29% /boot

10.240.240.211,10.240.240.212,10.240.240.213:/

                       20G  3.5G   17G  18% /mnt/mycephfs

3. Write the mount information into fstab

[root@client ~]# vi /etc/fstab 

10.240.240.211,10.240.240.212,10.240.240.213:/  /mnt/mycephfs   ceph  name=admin,secret=AQDvxaxTaG4uBRAA9fKTwV8iqPjm/K+B4+qpEw==,noatime    0       2
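
Because CephFS is a network filesystem, adding the _netdev mount option is often advisable so that the mount is deferred until networking is up; a variant of the line above would be:

10.240.240.211,10.240.240.212,10.240.240.213:/  /mnt/mycephfs   ceph  name=admin,secret=AQDvxaxTaG4uBRAA9fKTwV8iqPjm/K+B4+qpEw==,noatime,_netdev    0       2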



VI. Tearing down the Ceph cluster (running the following two commands removes the Ceph software and the cluster from all nodes)

[root@node1 ~]# ceph-deploy purge node1 node2 node3 node4

[root@node1 ~]# ceph-deploy purgedata node1 node2 node3 node4




Appendix: Ceph quick-deployment script

Set up the first mon node

1. Edit the ceph configuration file on node1

vi /etc/ceph/ceph.conf

[global]

fsid = f11240d4-86b1-49ba-aacc-6d3d37b24cc4

mon initial members = node1,node2,node3

mon host = 10.240.240.211,10.240.240.212,10.240.240.213

public network = 10.240.240.0/24

auth cluster required = cephx

auth service required = cephx

auth client required = cephx

osd journal size = 1024

filestore xattr use omap = true

osd pool default size = 3

osd pool default min size = 1

osd crush chooseleaf type = 1

osd_mkfs_type = xfs

max mds = 5

mds max file size = 100000000000000

mds cache size = 1000000

mon osd down out interval = 900        

cluster_network = 10.39.102.0/24

[mon]

mon clock drift allowed = .50         


2. On node1

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'   

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow' 

ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring    


mkdir -p /var/lib/ceph/mon/ceph-node1


mkdir -p /var/lib/ceph/bootstrap-osd/

ceph-authtool -C /var/lib/ceph/bootstrap-osd/ceph.keyring


ceph-mon --mkfs -i node1 --keyring /tmp/ceph.mon.keyring


touch /var/lib/ceph/mon/ceph-node1/done


touch /var/lib/ceph/mon/ceph-node1/sysvinit


/sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1



Set up the second mon node



1. On node2

mkdir /var/lib/ceph/bootstrap-osd/

mkdir -p /var/lib/ceph/mon/ceph-node2

2. On node1

scp /etc/ceph/* node2:/etc/ceph/

scp /var/lib/ceph/bootstrap-osd/ceph.keyring node2:/var/lib/ceph/bootstrap-osd/

scp /tmp/ceph.mon.keyring node2:/tmp/

3. On node2

ceph-mon --mkfs -i node2  --keyring /tmp/ceph.mon.keyring

touch /var/lib/ceph/mon/ceph-node2/done

touch /var/lib/ceph/mon/ceph-node2/sysvinit

/sbin/service ceph -c /etc/ceph/ceph.conf start mon.node2


Set up the third mon node


1. On node3

mkdir /var/lib/ceph/bootstrap-osd/

mkdir -p /var/lib/ceph/mon/ceph-node3

2. On node1

scp /etc/ceph/* node3:/etc/ceph/

scp /var/lib/ceph/bootstrap-osd/ceph.keyring node3:/var/lib/ceph/bootstrap-osd/

scp /tmp/ceph.mon.keyring node3:/tmp/

3. On node3

ceph-mon --mkfs -i node3  --keyring /tmp/ceph.mon.keyring

touch /var/lib/ceph/mon/ceph-node3/done

touch /var/lib/ceph/mon/ceph-node3/sysvinit

/sbin/service ceph -c /etc/ceph/ceph.conf start mon.node3




Add the OSD nodes


Add the first OSD node

1. On node1

ceph osd create

mkdir -p /var/lib/ceph/osd/ceph-0

mkfs.xfs -f /dev/sdb

mount /dev/sdb /var/lib/ceph/osd/ceph-0

mount -o remount,user_xattr /var/lib/ceph/osd/ceph-0

ceph-osd -i 0 --mkfs --mkkey 

ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring

ceph osd crush add-bucket node1 host

ceph osd crush move node1 root=default

ceph osd crush add osd.0 1.0 host=node1

touch /var/lib/ceph/osd/ceph-0/sysvinit

/etc/init.d/ceph start osd.0



2. Add the fstab entries on node1

[root@node1 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-0  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-0  xfs   remount,user_xattr  0 0




Add the second OSD node


1. On node2

ceph osd create

mkdir -p /var/lib/ceph/osd/ceph-1

mkfs.xfs -f /dev/sdb

mount /dev/sdb /var/lib/ceph/osd/ceph-1

mount -o remount,user_xattr /var/lib/ceph/osd/ceph-1 

ceph-osd -i 1 --mkfs --mkkey 

ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring

ceph osd crush add-bucket node2 host

ceph osd crush move node2 root=default

ceph osd crush add osd.1 1.0 host=node2

touch /var/lib/ceph/osd/ceph-1/sysvinit

/etc/init.d/ceph start osd.1



2. Add the fstab entries on node2

[root@node2 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-1  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-1  xfs   remount,user_xattr  0 0






Add the third OSD node

1. On node3

ceph osd create

mkdir -p /var/lib/ceph/osd/ceph-2

mkfs.xfs -f /dev/sdb

mount /dev/sdb /var/lib/ceph/osd/ceph-2

mount -o remount,user_xattr /var/lib/ceph/osd/ceph-2

ceph-osd -i 2 --mkfs --mkkey 

ceph auth add osd.2 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-2/keyring

ceph osd crush add-bucket node3 host

ceph osd crush move node3 root=default

ceph osd crush add osd.2 1.0 host=node3

touch /var/lib/ceph/osd/ceph-2/sysvinit

/etc/init.d/ceph start osd.2



2. Add the fstab entries on node3

[root@node3 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-2  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-2  xfs   remount,user_xattr  0 0



Add the fourth OSD node

1. On node1

scp /etc/ceph/* node4:/etc/ceph/

2. On node4

ceph osd create

mkdir -p /var/lib/ceph/osd/ceph-3

mkfs.xfs -f /dev/sdb

mount /dev/sdb /var/lib/ceph/osd/ceph-3

mount -o remount,user_xattr /var/lib/ceph/osd/ceph-3 

ceph-osd -i 3 --mkfs --mkkey 

ceph auth add osd.3 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-3/keyring

ceph osd crush add-bucket node4 host

ceph osd crush move node4 root=default

ceph osd crush add osd.3 1.0 host=node4

touch /var/lib/ceph/osd/ceph-3/sysvinit

/etc/init.d/ceph start osd.3




3. Add the fstab entries on node4

[root@node4 ~]# vi /etc/fstab

/dev/sdb                /var/lib/ceph/osd/ceph-3  xfs   defaults        0 0

/dev/sdb                /var/lib/ceph/osd/ceph-3  xfs   remount,user_xattr  0 0





Add the metadata servers


Add the first metadata server

1. On node1

mkdir -p /var/lib/ceph/mds/ceph-node1

touch /root/ceph.bootstrap-mds.keyring

ceph-authtool --import-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring ceph.bootstrap-mds.keyring 

ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node1 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node1/keyring

touch /var/lib/ceph/mds/ceph-node1/sysvinit

touch /var/lib/ceph/mds/ceph-node1/done

service ceph start mds.node1


Add the second metadata server

1. On node2

mkdir -p /var/lib/ceph/mds/ceph-node2

mkdir -p /var/lib/ceph/bootstrap-mds/

2. On node1

scp /var/lib/ceph/bootstrap-mds/ceph.keyring node2:/var/lib/ceph/bootstrap-mds/

scp /root/ceph.bootstrap-mds.keyring node2:/root/

scp /var/lib/ceph/mds/ceph-node1/sysvinit node2:/var/lib/ceph/mds/ceph-node2/

3. On node2

ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node2 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node2/keyring

touch /var/lib/ceph/mds/ceph-node2/done

service ceph start mds.node2


Add the third metadata server

1. On node3

mkdir -p /var/lib/ceph/mds/ceph-node3

mkdir -p /var/lib/ceph/bootstrap-mds/

2. On node1

scp /var/lib/ceph/bootstrap-mds/ceph.keyring node3:/var/lib/ceph/bootstrap-mds/

scp /root/ceph.bootstrap-mds.keyring node3:/root/

scp /var/lib/ceph/mds/ceph-node1/sysvinit node3:/var/lib/ceph/mds/ceph-node3/

3. On node3

ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node3 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node3/keyring

touch /var/lib/ceph/mds/ceph-node3/done

service ceph start mds.node3


