Several ways to install the distributed storage system Ceph: from source, with apt-get, and with the ceph-deploy tool, on Ubuntu and CentOS

Posted: 2022-04-09 03:07:07

http://blog.csdn.net/che84157814/article/details/16858157

http://my.oschina.net/oscfox/blog/265206

http://blog.csdn.net/quqi99/article/details/10894833

https://github.com/ceph/ceph-deploy


http://www.oschina.net/question/1761756_153049


ceph-deploy is a way to deploy Ceph relying on just SSH access to the servers, sudo, and some Python. It runs fully on your workstation, requiring no servers, databases, or anything like that.

If you set up and tear down Ceph clusters a lot, and want minimal extra bureaucracy, this is for you.

What this tool is not

It is not a generic deployment system, it is only for Ceph, and is designed for users who want to quickly get Ceph running with sensible initial settings without the overhead of installing Chef, Puppet or Juju.

It does not handle client configuration beyond pushing the Ceph config file and users who want fine-control over security settings, partitions or directory locations should use a tool such as Chef or Puppet.

Installation

Depending on what type of usage you are going to have with ceph-deploy, you might want to look into the different ways to install it. For automation, you might want to bootstrap directly. Regular users of ceph-deploy would probably install from the OS packages or from the Python Package Index.

Python Package Index

If you are familiar with Python install tools (like pip and easy_install) you can easily install ceph-deploy like:

pip install ceph-deploy

or:

easy_install ceph-deploy

It should grab all the dependencies for you and install into the current user's environment.

We highly recommend using virtualenv and installing dependencies in a contained way.
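For instance, a contained install in a virtualenv might look like this (the environment name is arbitrary):

virtualenv ceph-deploy-venv              # create an isolated Python environment
. ceph-deploy-venv/bin/activate          # activate it for this shell
pip install ceph-deploy                  # install ceph-deploy and its dependencies into the venv
ceph-deploy --version                    # sanity check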

DEB

The DEB repo can be found at http://ceph.com/packages/ceph-extras/debian/

But they can also be found for ceph releases in the ceph repos like:

ceph.com/debian-{release}
ceph.com/debian-testing
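For example, adding one of the release repos on Debian/Ubuntu could look like the following (the "emperor" release name is only an illustration; substitute the release you actually want, as shown again later in this post):

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph-deploy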

RPM

The RPM repos can be found at http://ceph.com/packages/ceph-extras/rpm/

Make sure you add the proper one for your distribution.

But they can also be found for ceph releases in the ceph repos like:

ceph.com/rpm-{release}
ceph.com/rpm-testing
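A sketch of the equivalent yum configuration (the release name, el6 distro path, and file location are assumptions; adjust them for your distribution):

# /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-emperor/el6/noarch
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

Then install with: sudo yum install ceph-deploy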

bootstrapping

To get the source tree ready for use, run this once:

./bootstrap

You can symlink the ceph-deploy script from this directory to somewhere convenient (like ~/bin), or add the current directory to your PATH, or just always type the full path to ceph-deploy.

SSH and Remote Connections

ceph-deploy will attempt to connect via SSH to hosts when the hostnames do not match the current host's hostname. For example, if you are connecting to host node1 it will attempt an SSH connection as long as the current host's hostname is not node1.

ceph-deploy requires, at a minimum, that the machine from which the script is being run can ssh as root without a password into each Ceph node.

To enable this, generate a new ssh keypair for the root user with no passphrase and place the public key (id_rsa.pub or id_dsa.pub) in:

/root/.ssh/authorized_keys

and ensure that the following lines are in the sshd config:

PermitRootLogin without-password
PubkeyAuthentication yes
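One common way to set this up from the admin workstation (the key type and node name are just examples):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # passphrase-less keypair
ssh-copy-id root@node1                       # appends the public key to /root/.ssh/authorized_keys on node1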

The machine running ceph-deploy does not need to have the Ceph packages installed unless it needs to admin the cluster directly using the ceph command line tool.

usernames

When not specified the connection will be done with the same username as the one executing ceph-deploy. This is useful if the same username is shared in all the nodes but can be cumbersome if that is not the case.

A way to avoid this is to define the correct usernames to connect with in the SSH config, but you can also use the --username flag:

ceph-deploy --username ceph install node1

ceph-deploy would then use ceph@node1 to connect to that host.

The same applies to any other action that requires a connection to a remote host.
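The equivalent SSH config entry (hostname and username here are illustrative) would be:

# ~/.ssh/config
Host node1
    User ceph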

Managing an existing cluster

You can use ceph-deploy to provision nodes for an existing cluster. To grab a copy of the cluster configuration file (normally ceph.conf):

ceph-deploy config pull HOST

You will usually also want to gather the encryption keys used for that cluster:

ceph-deploy gatherkeys MONHOST

At this point you can skip the steps below that create a new cluster (you already have one) and optionally skip installation and/or monitor creation, depending on what you are trying to accomplish.

Creating a new cluster

Creating a new configuration

To create a new configuration file and secret key, decide what hosts will run ceph-mon, and run:

ceph-deploy new MON [MON..]

listing the hostnames of the monitors. Each MON can be

  • a simple hostname. It must be DNS resolvable without the fully qualified domain name.
  • a fully qualified domain name. The hostname is assumed to be the leading component, up to the first period (.).
  • a HOST:FQDN pair, of both the hostname and a fully qualified domain name or IP address. For example, foo, foo.example.com, foo:something.example.com, and foo:1.2.3.4 are all valid. Note, however, that the hostname should match that configured on the host foo.

The above will create a ceph.conf and ceph.mon.keyring in your current directory.

Edit initial cluster configuration

You want to review the generated ceph.conf file and make sure that the mon_host setting contains the IP addresses you would like the monitors to bind to. These are the IPs that clients will initially contact to authenticate to the cluster, and they need to be reachable both by external client-facing hosts and internal cluster daemons.
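For illustration, the relevant lines in a freshly generated ceph.conf might look like this (the hostnames and addresses here are made up):

[global]
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.10,192.168.1.11,192.168.1.12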

Installing packages

To install the Ceph software on the servers, run:

ceph-deploy install HOST [HOST..]

This installs the current default stable release. You can choose a different release track with command line options, for example to use a release candidate:

ceph-deploy install --testing HOST

Or to test a development branch:

ceph-deploy install --dev=wip-mds-now-works-no-kidding HOST [HOST..]

Proxy or Firewall Installs

If attempting to install behind a firewall or through a proxy, you can use the --no-adjust-repos flag, which tells ceph-deploy to skip any changes to the distro's repositories and go straight to package installation.

That allows an environment without internet access to point to its own repositories. Those repositories will need to be properly set up (and mirrored with all the necessary dependencies) before attempting an install.

Another alternative is to set the wget environment variables to point to the right hosts; for example, put the following lines into /root/.wgetrc on each node (since ceph-deploy runs wget as root):

http_proxy=http://host:port
ftp_proxy=http://host:port
https_proxy=http://host:port

Deploying monitors

To actually deploy ceph-mon to the hosts you chose, run:

ceph-deploy mon create HOST [HOST..]

Without explicit hosts listed, hosts in mon_initial_members in the config file are deployed. That is, the hosts you passed to ceph-deploy new are the default value here.

Gather keys

To gather authentication keys (for administering the cluster and bootstrapping new nodes) to the local directory, run:

ceph-deploy gatherkeys HOST [HOST...]

where HOST is one of the monitor hosts.

Once these keys are in the local directory, you can provision new OSDs etc.

Deploying OSDs

To prepare a node for running OSDs, run:

ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]

After that, the hosts will be running OSDs for the given data disks. If you specify a raw disk (e.g., /dev/sdb), partitions will be created and GPT labels will be used to mark and automatically activate OSD volumes. If an existing partition is specified, the partition table will not be modified. If you want to destroy the existing partition table on DISK first, you can include the --zap-disk option.
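For example (the host and device names here are illustrative, and flag placement can vary between ceph-deploy versions):

ceph-deploy osd create --zap-disk node1:/dev/sdb           # wipe /dev/sdb and use the whole disk
ceph-deploy osd create node1:/dev/sdb:/dev/sdc1            # data on /dev/sdb, journal on the existing partition /dev/sdc1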

If there is already a prepared disk or directory that is ready to become an OSD, you can also do:

ceph-deploy osd activate HOST:DIR[:JOURNAL] [...]

This is useful when you are managing the mounting of volumes yourself.

Admin hosts

To prepare a host with a ceph.conf and ceph.client.admin.keyring keyring so that it can administer the cluster, run:

ceph-deploy admin HOST [HOST ...]

Forget keys

The new and gatherkeys commands put some Ceph authentication keys in keyrings in the local directory. If you are worried about them being there for security reasons, run:

ceph-deploy forgetkeys

and they will be removed. If you need them again later to deploy additional nodes, simply re-run:

ceph-deploy gatherkeys HOST [HOST...]

and they will be retrieved from an existing monitor node.

Multiple clusters

All of the above commands take a --cluster=NAME option, allowing you to manage multiple clusters conveniently from one workstation. For example:

ceph-deploy --cluster=us-west new
vi us-west.conf
ceph-deploy --cluster=us-west mon

FAQ

Before anything

Make sure you have the latest version of ceph-deploy. It is actively developed and releases are coming weekly (on average). The most recent versions of ceph-deploy will have a --version flag you can use, otherwise check with your package manager and update if there is anything new.

Why is feature X not implemented?

Usually, features are added when/if it is sensible for someone who wants to get started with Ceph and said feature makes sense in that context. If you believe this is the case and you've read "what this tool is not" and still think feature X should exist in ceph-deploy, open a feature request in the Ceph tracker: http://tracker.ceph.com/projects/devops/issues

A command gave me an error, what is going on?

Most ceph-deploy commands ultimately run on a remote host that you configured when creating the initial config. If a given command is not working as expected, try running the failed command directly on the remote host and verify the behavior there.

If the behavior on the remote host is the same, then it is probably not a problem with ceph-deploy per se. Make sure you capture both the ceph-deploy output and the output of the command on the remote host.

Issues with monitors

If your monitors are not starting, make sure that the {hostname} you used when you ran ceph-deploy mon create {hostname} matches the output of hostname -s on the remote host.

Newer versions of ceph-deploy should warn you if the names differ; such a mismatch may prevent the monitors from reaching quorum.
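A quick way to check (node1 is an example host):

ssh node1 hostname -s        # should print exactly the name you passed to ceph-deploy mon create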

Developing ceph-deploy

Now that you have cut your teeth on Ceph, you might find that you want to contribute to ceph-deploy.

Resources

Bug tracking: http://tracker.ceph.com/projects/devops/issues

Mailing list and IRC info are the same as for Ceph: http://ceph.com/resources/mailing-list-irc/

Submitting Patches

Please add test cases to cover any code you add. You can test your changes by running tox (you will also need mock and pytest) from inside the git clone.
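For example (assuming pip is available in your environment):

pip install tox mock pytest
tox            # runs the test suite from the repository root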

When creating a commit message please use git commit -s or otherwise add Signed-off-by: Your Name <email@address.dom> to your commit message.

Patches can then be submitted as a pull request on GitHub.



I recently spent some time on Ceph, a distributed storage system that scales to the PB level. I tried several different installation methods, ran into plenty of problems along the way, and am sharing the experience here.
Part One: Installing from source. Building from source is a good way to learn about each component of the system, but it is also laborious, mainly because there are so many dependencies. I tried it on both CentOS and Ubuntu, and it can be made to work on both.
1. Download Ceph from http://ceph.com/download/: wget http://ceph.com/download/ceph-0.72.tar.gz

2. Install the build tools: apt-get install automake autoconf libtool make
3. Unpack and prepare the source tree:
#tar zxvf ceph-0.72.tar.gz
#cd ceph-0.72
#./autogen.sh

4. Install the dependency packages first:

#apt-get install autotools-dev autoconf automake cdbs g++ gcc git libatomic-ops-dev libboost-dev \
libcrypto++-dev libcrypto++ libedit-dev libexpat1-dev libfcgi-dev libfuse-dev \
libgoogle-perftools-dev libgtkmm-2.4-dev libtool pkg-config uuid-dev libkeyutils-dev \
uuid-dev libkeyutils-dev  btrfs-tools


Possible errors during configure/make, and their fixes (note: the *-devel names below are the yum/CentOS package names; on Ubuntu the equivalents end in -dev, e.g. libfuse-dev, libedit-dev, libatomic-ops-dev):
4.1 fuse: apt-get install fuse-devel
4.2 tcmalloc: install google-perftools (wget https://gperftools.googlecode.com/files/gperftools-2.1.zip)
4.3 libedit: apt-get install libedit-devel
4.4 no libatomic-ops found: apt-get install libatomic_ops-devel
4.5 snappy: https://snappy.googlecode.com/files/snappy-1.1.1.tar.gz
4.6 libleveldb not found: download https://leveldb.googlecode.com/files/leveldb-1.14.0.tar.gz, run make, then cp libleveldb.* /usr/lib and cp -r include/leveldb /usr/local/include
4.7 libaio: apt-get install libaio-dev
4.8 boost: apt-get install libboost-dev libboost-thread-dev libboost-program-options-dev
4.9 g++: apt-get install g++

5. Configure, build, and install:
#./configure --prefix=/opt/ceph/
#make
#make install







Part Two: Using the Ceph packages that ship with Ubuntu 12.04 (probably ceph version 0.41)

Setup:

Two machines, one server and one client, both running Ubuntu 12.04.

When installing the server, set aside two extra partitions to use as storage for osd0 and osd1; if you don't have them, you can fake two with loop devices after the system is installed, as in the sketch below.
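A minimal sketch of the loop-device approach (file sizes and paths are arbitrary):

dd if=/dev/zero of=/srv/osd0.img bs=1M count=4096    # 4 GB backing file for osd0
dd if=/dev/zero of=/srv/osd1.img bs=1M count=4096    # 4 GB backing file for osd1
losetup /dev/loop0 /srv/osd0.img
losetup /dev/loop1 /srv/osd1.img
mkfs.xfs -f /dev/loop0
mkfs.xfs -f /dev/loop1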


1. Install Ceph on the server (MON, MDS, OSD): apt-cache search ceph; apt-get install ceph; apt-get install ceph-common

2. Add the release key to APT, update sources.list, and install Ceph:

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt-get update && sudo apt-get install ceph


3. Check the version:

# ceph -v   // displays the Ceph version and key information

If nothing is displayed, run:

# sudo apt-get update && sudo apt-get upgrade


4. Configuration file: # vim /etc/ceph/ceph.conf
[global]
    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].
    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]
    osd journal size = 1000

    # The following assumes ext4 filesystem.
    filestore xattr use omap = true

    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following settings and replace the values
    # in braces with appropriate values, or leave the following settings
    # commented out to accept the default values. You must specify the
    # --mkfs option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.

    osd mkfs type = xfs
    osd mkfs options xfs = -f            # default for xfs is "-f"
    osd mount options xfs = rw,noatime   # default mount option is "rw,noatime"

    # For example, for ext4, the mount option might look like this:
    #osd mkfs options ext4 = user_xattr,rw,noatime

    # Execute $ hostname to retrieve the name of your host,
    # and replace {hostname} with the name of your host.
    # For the monitor, replace {ip-address} with the IP
    # address of your host.

[mon.a]
    host = ceph1
    mon addr = 192.168.1.1:6789

[osd.0]
    host = ceph1

    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following setting for each OSD and specify
    # a path to the device if you use mkcephfs with the --mkfs option.
    devs = /dev/sdb1

[mds.a]
    host = ceph1

5. Run the initialization:

sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring

Note: each time you re-initialize, you need to remove the old data directories first and re-create them:

rm -rf /var/lib/ceph/osd/ceph-0/*

rm -rf /var/lib/ceph/osd/ceph-1/*

rm -rf /var/lib/ceph/mon/ceph-a/*

rm -rf /var/lib/ceph/mds/ceph-a/*


mkdir -p /var/lib/ceph/osd/ceph-0

mkdir -p /var/lib/ceph/osd/ceph-1

mkdir -p /var/lib/ceph/mon/ceph-a

mkdir -p /var/lib/ceph/mds/ceph-a

6. Start the services:

service ceph -a start

7. Run a health check:

 ceph health

8. Using ext4 for the data disk produced a "mount error 5". Reformatting the partition with XFS fixed it:

mkfs.xfs -f /dev/sda7


9. On the client:

sudo mkdir /mnt/mycephfs

sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs


Part Three: Installing with ceph-deploy

1. Download ceph-deploy:


https://github.com/ceph/ceph-deploy/archive/master.zip


2. Install python-virtualenv and run the bootstrap script:

apt-get install python-virtualenv

 ./bootstrap 

3. Install Ceph on the node:

ceph-deploy install ubuntu1

4. Create a new cluster:

ceph-deploy new ubuntu1

5. Create the monitor:

ceph-deploy mon create ubuntu1

6. Gather the keys:

ceph-deploy gatherkeys

If you get an error complaining that there is no keyring, run:

ceph-deploy forgetkeys

This will generate:

{cluster-name}.client.admin.keyring

{cluster-name}.bootstrap-osd.keyring

{cluster-name}.bootstrap-mds.keyring


7. Create the OSD:

ceph-deploy osd create ubuntu1:/dev/sdb1   (path to the disk)

Possible errors:

1. The disk is already mounted; unmount it with umount.

2. Disk formatting problems; partition the disk with fdisk and format it with mkfs.xfs -f /dev/sdb1.

8. Check the cluster status:

ceph -s

Possible error: it reports that there are no OSDs:

 health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds

In that case, run ceph osd create.


9. The cluster status should now look something like this:

    cluster faf5e4ae-65ff-4c95-ad86-f1b7cbff8c9a

     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean

     monmap e1: 1 mons at {ubuntu1=12.0.0.115:6789/0}, election epoch 1, quorum 0 ubuntu1

     osdmap e10: 3 osds: 1 up, 1 in

      pgmap v17: 192 pgs, 3 pools, 0 bytes data, 0 objects

            1058 MB used, 7122 MB / 8181 MB avail

                 192 active+degraded


10. Mounting on the client

Note: you must mount with a username and key.

10.1 View the key:

cat /etc/ceph/ceph.client.admin.keyring 

ceph-authtool --print-key ceph.client.admin.keyring

AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==

10.2 Mount:
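A sketch of the mount command using the admin name and key shown above (the monitor address and mount point are taken from the earlier examples; adjust them to your setup):

sudo mkdir -p /mnt/mycephfs
sudo mount -t ceph 12.0.0.115:6789:/ /mnt/mycephfs -o name=admin,secret=AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==

To keep the key off the command line, you can instead store it in a file and use the secretfile= mount option.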


Other notes:

1. Passwordless SSH authentication must be set up between the machines (ssh-keygen).

2. It is best to have a dedicated disk partition for storage; there are several different ways to format it.

3. You will always run into errors of one kind or another; each has to be analyzed and resolved individually.



References: http://hobbylinux.blog.51cto.com/2895352/1175932
http://blog.csdn.net/pc620/article/details/9002045
http://ceph.com/docs/master/start/quick-ceph-deploy/



Three machines, each acting as both MDS and OSD, with one of them also serving as MON. Initialization succeeded (after fiddling with it for an entire day...).

Then startup failed, with the output below...

# /etc/init.d/ceph -a start
=== mon.00 === 
Starting Ceph mon.00 on ceph00...already running
=== mds.00 === 
Starting Ceph mds.00 on ceph00...already running
=== mds.01 === 
Starting Ceph mds.01 on ceph01...already running
=== mds.02 === 
Starting Ceph mds.02 on ceph02...already running
=== osd.00 === 
Mounting Btrfs on ceph00:/data/osd.00
Scanning for Btrfs filesystems
Traceback (most recent call last):
  File "/usr/local/ceph/bin/ceph", line 56, in <module>
    import rados
ImportError: No module named rados
failed: 'timeout 10 /usr/local/ceph/bin/ceph --name=osd.00 --keyring=/data/osd.00/keyring osd crush create-or-move -- 00 0.02 root=default host=ceph00 '


I installed Ceph by building from source. Checking with lsmod also shows no rados module, but I can't find anything on how to install a rados module.

Any help appreciated...


@oscfox @zetrov I came across your questions while searching and saw that you solved this; do you have any ideas about my problem?

One answer (posted about a month later):


    Solution: copy src/pybind/*.py from the source tree into ceph/bin and run from there.
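    A sketch of that fix, under the paths seen in the traceback (assuming the source tree and the /usr/local/ceph install prefix used in this thread):

    cp src/pybind/*.py /usr/local/ceph/bin/
    # or, point Python at the bindings without copying:
    # export PYTHONPATH=/path/to/ceph-source/src/pybind:$PYTHONPATH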

    After that fix, a new error appeared:

    # /etc/init.d/ceph start osd.00
    === osd.00 === 
    Mounting Btrfs on ceph00:/data/osd.00
    Scanning for Btrfs filesystems
    Traceback (most recent call last):
      File "/usr/local/ceph/bin/ceph", line 823, in <module>
        sys.exit(main())
      File "/usr/local/ceph/bin/ceph", line 590, in main
        conffile=conffile)
      File "/usr/local/ceph/bin/rados.py", line 197, in __init__
        self.librados = CDLL('librados.so.2')
      File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: librados.so.2: cannot open shared object file: No such file or directory
    failed: 'timeout 10 /usr/local/ceph/bin/ceph --name=osd.00 --keyring=/data/osd.00/keyring osd crush create-or-move -- 00 0.02 root=default host=ceph00 '



    Fixed by adding the ceph/lib path with ldconfig, after which the following problem appeared:

    # /etc/init.d/ceph -a start
    === mon.00 === 
    Starting Ceph mon.00 on ceph00...
    === mds.00 === 
    Starting Ceph mds.00 on ceph00...already running
    === mds.01 === 
    Starting Ceph mds.01 on ceph01...already running
    === mds.02 === 
    Starting Ceph mds.02 on ceph02...already running
    === osd.00 === 
    Mounting Btrfs on ceph00:/data/osd.00
    Scanning for Btrfs filesystems
    Error connecting to cluster: ObjectNotFound
    2014-04-29 09:20:59.817164 7ff066215700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
    2014-04-29 09:20:59.817181 7ff066215700  0 librados: osd.00 initialization error (2) No such file or directory
    failed: 'timeout 10 /usr/local/ceph/bin/ceph --name=osd.00 --keyring=/data/osd.00/keyring osd crush create-or-move -- 00 0.02 root=default host=ceph00 '



    The keyring could not be found because, for some reason, ceph -a start deletes it. The workaround is to back up the osd.00 directory after running mkcephfs, then restore the keyring after ceph -a start; it runs after that, and then this error shows up...


    # /etc/init.d/ceph  start osd.00
    === osd.00 === 
    Mounting Btrfs on ceph00:/data/osd.00
    Scanning for Btrfs filesystems
    2014-04-29 13:38:31.207944 7f442c754700  0 -- :/1010297 >> 114.212.81.91:6789/0 pipe(0x7f442800e3e0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f442800e640).fault
    2014-04-29 13:38:34.206985 7f442c653700  0 -- :/1010297 >> 114.212.81.91:6789/0 pipe(0x7f441c000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f441c000e60).fault
    2014-04-29 13:38:37.207758 7f442c754700  0 -- :/1010297 >> 114.212.81.91:6789/0 pipe(0x7f441c003010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f441c003270).fault
    2014-04-29 13:38:40.208294 7f442c653700  0 -- :/1010297 >> 114.212.81.91:6789/0 pipe(0x7f441c0039c0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f441c003c20).fault
    failed: 'timeout 10 /usr/local/ceph/bin/ceph --name=osd.00 --keyring=/data/osd.00/keyring osd crush create-or-move -- 00 0.02 root=default host=ceph00 '



    A network connection error... but osd.00 and mon.00 are on the same machine, so how can it fail to connect?