Installing and Deploying OpenStack [Liberty] on CentOS 7.4 (Part 2)

Date: 2024-01-13 14:21:38

Following the previous post, Installing and Deploying OpenStack [Liberty] on CentOS 7.4 (Part 1), this post continues with the remaining services.

I. Adding the Block Storage Service

1. Service overview:

The OpenStack Block Storage service provides block storage to instances. How storage is provisioned and consumed is determined by the block storage driver, or drivers in the case of a multi-backend configuration. A variety of drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on. The Block Storage API and scheduler services typically run on the controller node. Depending on the driver in use, the volume service can run on the controller node, on compute nodes, or on standalone storage nodes.
The Block Storage service (cinder) adds persistent storage to virtual machines. It provides the infrastructure for managing volumes and interacts with the Compute service to supply volumes to instances. The service also enables management of volume snapshots and volume types.
The Block Storage service typically consists of the following components:
cinder-api
  Accepts API requests and routes them to cinder-volume for action.
cinder-volume
  Interacts directly with the Block Storage service and with processes such as cinder-scheduler; it can also interact with them through a message queue. The cinder-volume service responds to read and write requests sent to the Block Storage service in order to maintain state, and it can interact with a variety of storage providers through a driver architecture.
cinder-scheduler daemon
  Selects the optimal storage provider node on which to create a volume; similar to the nova-scheduler component.
cinder-backup daemon
  The cinder-backup service backs up volumes of any type to a backup storage provider. Like cinder-volume, it can interact with a variety of storage providers through a driver architecture.
Message queue
  Routes information between the Block Storage processes.

2. Prerequisites: Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints.

[root@controller ~]# mysql -u root -p123456 # create the database and grant access privileges
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> \q
[root@controller ~]# . admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt cinder # create the cinder user
User Password: # enter the password (123456 throughout this deployment)
Repeat User Password:
[root@controller ~]# openstack role add --project service --user cinder admin # add the admin role to the cinder user; this command produces no output
[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume # create the cinder and cinderv2 service entities; the Block Storage service requires both
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]#openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s # create the Block Storage API endpoints; each service entity requires its own set of endpoints
[root@controller ~]#openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
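
Optional sanity check (an addition to the original post): confirm that both service entities and all six endpoints are registered before moving on:
[root@controller ~]# openstack service list | grep volume # should list the cinder and cinderv2 entities
[root@controller ~]# openstack endpoint list | grep volume # should list public, internal, and admin endpoints for both v1 and v2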

3. Service installation

Controller node:

[root@controller ~]#yum install -y openstack-cinder python-cinderclient
[root@controller ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf # edit cinder.conf; the effective (non-comment) settings are shown below
[DEFAULT]
rpc_backend = rabbit # configure RabbitMQ message queue access
auth_strategy = keystone # configure Identity service access
my_ip = 192.168.1.101 # the management interface IP address of the controller node
verbose = True # enable verbose logging
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder # configure database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken] # configure Identity service access; comment out or remove any other options in this section
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp # configure the lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit] # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder # populate the Block Storage database
[root@controller ~]# grep -A 1 "\[cinder\]" /etc/nova/nova.conf # configure Compute to use Block Storage: edit /etc/nova/nova.conf and add the following
[cinder]
os_region_name = RegionOne
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]#systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]#systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Storage node:

[root@block1 ~]# yum install lvm2 -y
[root@block1 ~]# systemctl enable lvm2-lvmetad.service
[root@block1 ~]# systemctl start lvm2-lvmetad.service
[root@block1 ~]# pvcreate /dev/sdb # create the LVM physical volume /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@block1 ~]# vgcreate cinder-volumes /dev/sdb # create the LVM volume group cinder-volumes; the Block Storage service will create logical volumes in this group
Volume group "cinder-volumes" successfully created
[root@block1 ~]# vim /etc/lvm/lvm.conf # edit /etc/lvm/lvm.conf: in the devices section, add a filter that accepts the /dev/sdb device and rejects all others
devices {
filter = [ "a/sda/", "a/sdb/", "r/.*/"] # /dev/sda is accepted here as well, which is needed when the storage node also uses LVM on its operating system disk; add each such device to the filter
}
[root@block1 ~]# yum install openstack-cinder targetcli python-oslo-policy -y
[root@block1 ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf # edit cinder.conf on the storage node; the effective settings are shown below
[DEFAULT]
rpc_backend = rabbit # configure RabbitMQ message queue access
auth_strategy = keystone # configure Identity service access
my_ip = 192.168.1.103 # the management interface IP address of the storage node
enabled_backends = lvm # enable the LVM backend
glance_host = controller # configure the location of the Image service
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder # configure database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken] # configure Identity service access; comment out or remove any other options in this section
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp # configure the lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit] # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[lvm] # configure the LVM backend: the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service
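
Optional check (an addition to the original post): confirm the volume group and the volume service on the storage node:
[root@block1 ~]# vgs cinder-volumes # the volume group from which the LVM backend allocates logical volumes
[root@block1 ~]# systemctl status openstack-cinder-volume.service # should report active (running)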

Verification:

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# cinder service-list # list the service components to verify that each process started successfully
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | --18T01::54.000000 | None |
| cinder-volume | block1@lvm | nova | enabled | up | --18T01::57.000000 | None |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
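
For a quick smoke test beyond the service listing (an addition to the original post; it assumes the demo credentials from part one), create a small volume:
[root@controller ~]# source demo-openrc.sh
[root@controller ~]# cinder create 1 # request a 1 GB volume; the LVM backend carves a logical volume out of cinder-volumes on block1
[root@controller ~]# cinder list # the volume status should move from "creating" to "available"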

II. Adding the Object Storage Service

1. Service overview

The OpenStack Object Storage service (swift) provides object storage and retrieval through a REST API. Before deploying Object Storage, your environment must include at least the Identity service (keystone).
OpenStack Object Storage is a multi-tenant object storage system. It is highly scalable and can manage large amounts of unstructured data at low cost through a RESTful HTTP API. It includes the following components:
Proxy server (swift-proxy-server)
  Accepts OpenStack Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It can also serve file or container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed with memcache.
Account server (swift-account-server)
  Manages accounts defined with Object Storage.
Container server (swift-container-server)
  Manages the mapping of containers, or folders, within Object Storage.
Object server (swift-object-server)
  Manages the actual objects, such as files, on the storage nodes.
Various periodic processes
  Perform housekeeping tasks on the large data store; the replication services ensure consistency and availability throughout the cluster, and other periodic processes include auditors, updaters, and reapers.
WSGI middleware
  Handles authentication, using the OpenStack Identity service.
swift client
  Enables users to submit commands to the REST API through a command-line client; authorized users may hold the admin, reseller, or swift user role.
swift-init
  A script that initializes the building of the ring files, takes daemon names as parameters, and offers commands. Documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-services.
swift-recon
  A command-line tool used to retrieve various metrics and telemetry about a cluster, as collected by the swift-recon middleware.
swift-ring-builder
  A utility for building and rebalancing the storage rings. Documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings.

2. Prerequisites: Before you configure the Object Storage service, you must create service credentials and API endpoints.

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt swift # create the swift user
User Password: # enter the password (123456 throughout this deployment)
Repeat User Password:
[root@controller ~]# openstack role add --project service --user swift admin # add the admin role to the swift user
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store # create the swift service entity
[root@controller ~]# openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s # create the Object Storage API endpoints
[root@controller ~]#openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
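
As with Block Storage, an optional check (an addition to the original post) is to list what was just registered:
[root@controller ~]# openstack endpoint list | grep object-store # should show the public, internal, and admin endpoints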

3. Service installation

Controller node:

[root@controller ~]#yum install -y openstack-swift-proxy python-swiftclient  python-keystoneclient python-keystonemiddleware  memcached
[root@controller ~]# vim /etc/swift/proxy-server.conf # note: the configuration file may differ across distributions; you may need to add these sections and options rather than modify existing ones!!!
[DEFAULT] # in the [DEFAULT] section, configure the bind port, user, and configuration directory
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main] # in the [pipeline:main] section, enable the appropriate modules
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server] # in the [app:proxy-server] section, enable automatic account creation
use = egg:swift#proxy
account_autocreate = true
[filter:keystoneauth] # in the [filter:keystoneauth] section, configure the operator roles
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken] # in the [filter:authtoken] section, configure Identity service access
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123456
delay_auth_decision = true
[filter:cache] # in the [filter:cache] section, configure the memcached location
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

Storage nodes (perform these steps on every storage node):

[root@object1 ~]# yum install xfsprogs rsync -y # install the supporting utility packages
[root@object1 ~]# mkfs.xfs /dev/sdb # format the /dev/sdb and /dev/sdc devices as XFS
[root@object1 ~]# mkfs.xfs /dev/sdc
[root@object1 ~]# mkdir -p /srv/node/sdb # create the mount point directory structure
[root@object1 ~]# mkdir -p /srv/node/sdc
[root@object1 ~]# tail -2 /etc/fstab # edit /etc/fstab and add the following entries
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@object1 ~]# mount /srv/node/sdb # mount the devices
[root@object1 ~]# mount /srv/node/sdc
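
A quick check (an addition to the original post) confirms both XFS filesystems are actually mounted, so that Swift does not silently write to the root disk:
[root@object1 ~]# df -h /srv/node/sdb /srv/node/sdc # each line should show the corresponding /dev/sdb or /dev/sdc device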
[root@object1 ~]# cat /etc/rsyncd.conf # edit /etc/rsyncd.conf and add the following
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.104 # the management interface IP of this node

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@object1 ~]#systemctl enable rsyncd.service
[root@object1 ~]# systemctl start rsyncd.service
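
To verify that the rsync daemon is serving its modules (an optional check, not in the original post):
[root@object1 ~]# rsync 192.168.1.104:: # should list the account, container, and object modules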
[root@object1 ~]# yum install openstack-swift-account openstack-swift-container openstack-swift-object -y
[root@object1 ~]#vim /etc/swift/account-server.conf
[DEFAULT] # in the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main] # in the [pipeline:main] section, enable the appropriate modules
pipeline = healthcheck recon account-server
[filter:recon] # in the [filter:recon] section, configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/container-server.conf
[DEFAULT] # in the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main] # in the [pipeline:main] section, enable the appropriate modules
pipeline = healthcheck recon container-server
[filter:recon] # in the [filter:recon] section, configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]#vim /etc/swift/object-server.conf
[DEFAULT] # in the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main] # in the [pipeline:main] section, enable the appropriate modules
pipeline = healthcheck recon object-server
[filter:recon] # in the [filter:recon] section, configure the recon (meters) cache directory and the lock directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[root@object1 ~]#chown -R swift:swift /srv/node
[root@object1 ~]#restorecon -R /srv/node
[root@object1 ~]#mkdir -p /var/cache/swift
[root@object1 ~]#chown -R root:swift /var/cache/swift

Create and distribute the initial rings

Controller node:

[root@controller ~]# cd /etc/swift/
[root@controller swift]# swift-ring-builder account.builder create 10 3 1 # create the account.builder file (2^10 partitions, 3 replicas, minimum 1 hour between partition moves)
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdb --weight 100 # add each node's devices to the ring
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 2 --ip 192.168.1.105 --port 6002 --device sdb --weight 100
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 2 --ip 192.168.1.105 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder # verify the ring contents
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 192.168.1.104 6002 192.168.1.104 6002 sdb 100.00 768 0.00
1 1 1 192.168.1.104 6002 192.168.1.104 6002 sdc 100.00 768 0.00
2 1 2 192.168.1.105 6002 192.168.1.105 6002 sdb 100.00 768 0.00
3 1 2 192.168.1.105 6002 192.168.1.105 6002 sdc 100.00 768 0.00
[root@controller swift]# swift-ring-builder account.builder rebalance # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# swift-ring-builder container.builder create 10 3 1 # create the container.builder file
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdb --weight 100 # add each node's devices to the ring
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 2 --ip 192.168.1.105 --port 6001 --device sdb --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 2 --ip 192.168.1.105 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder # verify the ring contents
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 192.168.1.104 6001 192.168.1.104 6001 sdb 100.00 768 0.00
1 1 1 192.168.1.104 6001 192.168.1.104 6001 sdc 100.00 768 0.00
2 1 2 192.168.1.105 6001 192.168.1.105 6001 sdb 100.00 768 0.00
3 1 2 192.168.1.105 6001 192.168.1.105 6001 sdc 100.00 768 0.00
[root@controller swift]# swift-ring-builder container.builder rebalance # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# swift-ring-builder object.builder create 10 3 1 # create the object.builder file
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdb --weight 100 # add each node's devices to the ring
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 2 --ip 192.168.1.105 --port 6000 --device sdb --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 2 --ip 192.168.1.105 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder # verify the ring contents
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 192.168.1.104 6000 192.168.1.104 6000 sdb 100.00 768 0.00
1 1 1 192.168.1.104 6000 192.168.1.104 6000 sdc 100.00 768 0.00
2 1 2 192.168.1.105 6000 192.168.1.105 6000 sdb 100.00 768 0.00
3 1 2 192.168.1.105 6000 192.168.1.105 6000 sdc 100.00 768 0.00
[root@controller swift]# swift-ring-builder object.builder rebalance # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.104:/etc/swift/ # copy account.ring.gz, container.ring.gz, and object.ring.gz to the /etc/swift directory on every storage node and on any additional node running the proxy service
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.105:/etc/swift/
[root@controller swift]# vim /etc/swift/swift.conf # edit /etc/swift/swift.conf and complete the following
[swift-hash] # in the [swift-hash] section, configure the hash path prefix and suffix for your environment; keep these values secret and do not change or lose them
swift_hash_path_suffix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
swift_hash_path_prefix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
[storage-policy:0] # in the [storage-policy:0] section, configure the default storage policy
name = Policy-0
default = yes
[root@controller swift]# chown -R root:swift /etc/swift
[root@controller swift]# systemctl enable openstack-swift-proxy.service memcached.service # on the controller node and any other nodes running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot
[root@controller swift]# systemctl start openstack-swift-proxy.service memcached.service
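
Because the proxy pipeline includes the healthcheck middleware, you can probe the proxy directly (an optional check, not in the original post):
[root@controller ~]# curl http://controller:8080/healthcheck # should return OK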

Storage nodes:

[root@object1 ~]#systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]#systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@object1 ~]#systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]#systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]#systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

Verify operation:

Controller node:

[root@controller swift]#cd
[root@controller ~]# echo "export OS_AUTH_VERSION=3" | tee -a admin-openrc.sh demo-openrc.sh # configure the Object Storage client to use version 3 of the Identity API
[root@controller ~]# swift stat # show the service status
Account: AUTH_444fce5db34546a7907af45df36d6e99
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1518798659.41272
X-Timestamp: 1518798659.41272
X-Trans-Id: tx304f1ed71c194b1f90dd2-005a870740
Content-Type: text/plain; charset=utf-8
[root@controller ~]# swift upload container1 demo-openrc.sh # upload a test file
demo-openrc.sh
[root@controller ~]# swift list # list containers
container1
[root@controller ~]# swift download container1 demo-openrc.sh # download the test file
demo-openrc.sh [auth 0.295s, headers 0.339s, total 0.339s, 0.005 MB/s]
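
You can also query the container itself to confirm the upload was stored (an optional check, not in the original post):
[root@controller ~]# swift stat container1 # Objects: 1 and a non-zero Bytes value reflect the uploaded file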

III. Adding the Orchestration Service

1. Service overview

The Orchestration service provides template-based orchestration: it describes a cloud application and realizes it by making the OpenStack API calls that generate the running cloud resources. The software integrates other core OpenStack components into a single-file template system. Templates let you create many kinds of OpenStack resources, such as instances, floating IPs, volumes, security groups, and users, and also offer advanced functionality such as instance high availability, instance auto-scaling, and nested stacks. This allows the core OpenStack projects to serve a large user base.
The service enables deployers to integrate with the Orchestration service directly or through custom plug-ins.
The Orchestration service contains the following components:
heat command-line client
  A CLI that communicates with heat-api to run the AWS CloudFormation API; end developers can also use the Orchestration REST API directly.
heat-api component
  An OpenStack-native REST API that forwards API requests to heat-engine over remote procedure calls (RPC).
heat-api-cfn component
  An AWS Query API that is compatible with AWS CloudFormation and forwards API requests to heat-engine over RPC.
heat-engine
  Orchestrates the launching of templates and provides events back to the API consumer.

2. Prerequisites: Before you install and configure the Orchestration service, you must create a database, service credentials, and API endpoints. Orchestration also requires additional information in the Identity service.

On the controller node:

[root@controller ~]# mysql -u root -p123456 # create the database and grant access privileges
MariaDB [(none)]> CREATE DATABASE heat;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> \q
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt heat # create the heat user
User Password: # enter the password (123456 throughout this deployment)
Repeat User Password:
[root@controller ~]# openstack role add --project service --user heat admin # add the admin role to the heat user
[root@controller ~]# openstack service create --name heat --description "Orchestration" orchestration # create the heat and heat-cfn service entities
[root@controller ~]# openstack service create --name heat-cfn --description "Orchestration" cloudformation
[root@controller ~]# openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s # create the Orchestration API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
[root@controller ~]# openstack domain create --description "Stack projects and users" heat # create the heat domain that contains projects and users for stacks
[root@controller ~]# openstack user create --domain heat --password-prompt heat_domain_admin # create the heat_domain_admin user to manage projects and users in the heat domain
User Password: # enter the password (123456 throughout this deployment)
Repeat User Password:
[root@controller ~]# openstack role add --domain heat --user heat_domain_admin admin # add the admin role to the heat_domain_admin user in the heat domain to enable administrative stack management by this user
[root@controller ~]# openstack role create heat_stack_owner # create the heat_stack_owner role
[root@controller ~]# openstack role add --project demo --user demo heat_stack_owner # add the heat_stack_owner role to the demo project and user to enable stack management by the demo user
[root@controller ~]# openstack role create heat_stack_user # create the heat_stack_user role; Orchestration automatically assigns it to users that it creates during stack deployment. By default this role restricts API operations. To avoid conflicts, do not also add the heat_stack_owner role to such users.

3. Service deployment

Controller node:

[root@controller ~]# yum install -y openstack-heat-api openstack-heat-api-cfn  openstack-heat-engine python-heatclient
[root@controller ~]# vim /etc/heat/heat.conf # edit /etc/heat/heat.conf and complete the following
[database]
connection = mysql://heat:123456@controller/heat # configure database access
[DEFAULT]
rpc_backend = rabbit # configure RabbitMQ message queue access
heat_metadata_server_url = http://controller:8000 # configure the metadata and wait condition URLs
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin # configure the stack domain and administrative credentials
stack_domain_admin_password = 123456
stack_user_domain_name = heat
verbose = True # enable verbose logging
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = heat
password = 123456
[trustee] # configure Identity service access
auth_plugin = password
auth_url = http://controller:35357
username = heat
password = 123456
user_domain_id = default
[clients_keystone] # configure Identity service access
auth_uri = http://controller:5000
[ec2authtoken] # configure Identity service access
auth_uri = http://controller:5000/v3
[root@controller ~]# su -s /bin/sh -c "heat-manage db_sync" heat # populate the Orchestration database
[root@controller ~]# systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
[root@controller ~]#systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

Verify operation:

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# heat service-list # the output should show four heat-engine components on the controller node
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| hostname | binary | engine_id | host | topic | updated_at | status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 0d26b5d3-ec8a-44ad--b2be72ccfaa7 | controller | engine | --16T11::41.000000 | up |
| controller | heat-engine | 587b87e2-9e91-4cac-a8b2-53f51898a9c5 | controller | engine | --16T11::41.000000 | up |
| controller | heat-engine | 8891e45b-beda-49b2-bfc7-29642f072eac | controller | engine | --16T11::41.000000 | up |
| controller | heat-engine | b0ef7bbb-cfb9--a214-db9049b12a25 | controller | engine | --16T11::41.000000 | up |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
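
As an optional end-to-end test (a sketch that is not part of the original post), launch a trivial stack. OS::Heat::RandomString is chosen because it needs no compute or network resources; the file name demo-template.yml and stack name demo-stack are arbitrary:
[root@controller ~]# cat > demo-template.yml <<'EOF'
heat_template_version: 2015-10-15
description: Minimal stack used only to exercise the Orchestration service
resources:
  demo_secret:
    type: OS::Heat::RandomString
    properties:
      length: 16
outputs:
  secret:
    value: { get_attr: [demo_secret, value] }
EOF
[root@controller ~]# heat stack-create -f demo-template.yml demo-stack # create the stack
[root@controller ~]# heat stack-list # the stack should reach CREATE_COMPLETE within a few seconds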

IV. Adding the Telemetry Service

1. Service overview

The Telemetry service provides the following functions:
  1. Efficiently polls metering data about the relevant OpenStack services.
  2. Collects event and metering data by monitoring notifications sent from the individual services.
  3. Publishes the collected data to various targets, including data stores and message queues.
The Telemetry service consists of the following components:
A compute agent (ceilometer-agent-compute)
  Runs on each compute node and polls for resource utilization statistics. Other types of agents may appear in the future, but for now the community is focused on the compute agent.
A central agent (ceilometer-agent-central)
  Runs on a central management server to poll for utilization statistics of resources that are not tied to instances or compute nodes. Multiple agents can be started to scale the service horizontally.
A notification agent (ceilometer-agent-notification)
  Runs on one or more central management servers and consumes messages from the message queue(s) to build event and metering data.
A collector (ceilometer-collector, responsible for persisting the received data)
  Runs on one or more central management servers and dispatches the collected telemetry data to a data store or to external consumers, without altering it.
An API server (ceilometer-api)
  Runs on one or more central management servers and provides access to the data in the data store.
Alarming service
  The Telemetry alarming service triggers alarms when the collected metering or event data breaks defined rules. It consists of the following components:
An API server (aodh-api)
  Runs on one or more central management servers and provides access to the alarm information stored in the data store.
An alarm evaluator (aodh-evaluator)
  Runs on one or more central management servers and determines when an alarm fires because the associated statistic trend crosses a threshold over a sliding time window.
A notification listener (aodh-listener)
  Runs on a central management server and detects when alarms should fire; alarms are generated from rules predefined against events, which are captured by the Telemetry service's notification agents.
An alarm notifier (aodh-notifier)
  Runs on one or more central management servers and allows alarms to be set for groups of instances based on evaluated thresholds.
These services communicate using the OpenStack messaging bus; only the collector and the API server have access to the data store.

2. Prerequisites: Before you install and configure the Telemetry service, you must create a database, service credentials, and API endpoints. Unlike the other services, however, Telemetry uses a NoSQL database.

Controller node:

[root@controller ~]#  yum install -y mongodb-server mongodb
[root@controller ~]# vim /etc/mongod.conf # edit /etc/mongod.conf and change or add the following
bind_ip = 192.168.1.101
smallfiles = true # by default, MongoDB creates several 1 GB journal files in /var/lib/mongodb/journal; setting smallfiles reduces each journal file to 128 MB and limits total journal space to 512 MB
[root@controller ~]# systemctl enable mongod.service
[root@controller ~]# systemctl start mongod.service
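
Optionally confirm that MongoDB is listening before creating the database (an addition to the original post):
[root@controller ~]# ss -tnl | grep 27017 # 27017 is MongoDB's default port; the listen address should be 192.168.1.101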
[root@controller ~]# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.createUser({user: "ceilometer",pwd: "123456",roles: [ "readWrite", "dbAdmin" ]})' # create the ceilometer database
MongoDB shell version: 2.6.
connecting to: controller:27017/test
Successfully added user: { "user" : "ceilometer", "roles" : [ "readWrite", "dbAdmin" ] }
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt ceilometer # create the ceilometer user
User Password: # enter the password (123456 throughout this deployment)
Repeat User Password:
[root@controller ~]# openstack role add --project service --user ceilometer admin # add the admin role to the ceilometer user
[root@controller ~]# openstack service create --name ceilometer --description "Telemetry" metering # create the ceilometer service entity
[root@controller ~]# openstack endpoint create --region RegionOne metering public http://controller:8777 # create the Telemetry API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne metering internal http://controller:8777
[root@controller ~]# openstack endpoint create --region RegionOne metering admin http://controller:8777

3. Service deployment

Controller node:

[root@controller ~]# yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient -y
[root@controller ~]# vim /etc/ceilometer/ceilometer.conf # edit /etc/ceilometer/ceilometer.conf and change or add the following
[DEFAULT]
rpc_backend = rabbit # configure RabbitMQ message queue access
auth_strategy = keystone # configure Identity service access
verbose = True
[oslo_messaging_rabbit] # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials] # configure service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]# systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
[root@controller ~]# systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

4. Enable Image service metering

[root@controller ~]# vim /etc/glance/glance-api.conf # edit both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and change or add the following in each
[DEFAULT] # configure notifications and RabbitMQ message queue access
notification_driver = messagingv2
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[root@controller ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service # restart the Image service

5. Enable Compute service metering (perform these steps on each compute node)

[root@controller ~]#  yum install -y openstack-ceilometer-compute python-ceilometerclient python-pecan
[root@controller ~]# vim /etc/ceilometer/ceilometer.conf # edit /etc/ceilometer/ceilometer.conf and change or add the following
[DEFAULT]
rpc_backend = rabbit # configure RabbitMQ message queue access
auth_strategy = keystone # configure Identity service access
verbose = True
[oslo_messaging_rabbit] # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials] # configure service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]# vim /etc/nova/nova.conf # edit /etc/nova/nova.conf and change or add the following
[DEFAULT]
instance_usage_audit = True # configure notifications
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
[root@controller ~]# systemctl enable openstack-ceilometer-compute.service # start the agent and configure it to start at boot
[root@controller ~]# systemctl start openstack-ceilometer-compute.service
[root@controller ~]# systemctl restart openstack-nova-compute.service # restart the Compute service
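
Once an instance has been running for at least one polling interval (600 seconds by default), per-instance meters should appear (an optional check, not in the original post):
[root@controller ~]# ceilometer meter-list | grep -E 'cpu|instance' # meters such as instance, cpu, and cpu_util show that the compute agent is reporting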

6. Enable Block Storage metering

Perform these steps on the controller and Block Storage nodes.

[root@controller ~]# vim /etc/cinder/cinder.conf # edit /etc/cinder/cinder.conf on both nodes and complete the following
[DEFAULT]
notification_driver = messagingv2
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service # restart the Block Storage services that run on the controller node!!!
On the storage node:
[root@block1 ~]# systemctl restart openstack-cinder-volume.service # restart the Block Storage service on the storage node!!!

7. Enable Object Storage metering

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack role create ResellerAdmin
[root@controller ~]# openstack role add --project service --user ceilometer ResellerAdmin
[root@controller ~]# yum install -y python-ceilometermiddleware
[root@controller ~]# vim /etc/swift/proxy-server.conf # edit /etc/swift/proxy-server.conf and change or add the following
[filter:keystoneauth]
operator_roles = admin, user, ResellerAdmin # add the ResellerAdmin role
[pipeline:main] # add ceilometer to the pipeline
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server
[filter:ceilometer] # configure notifications
paste.filter_factory = ceilometermiddleware.swift:filter_factory
control_exchange = swift
url = rabbit://openstack:123456@controller:5672/
driver = messagingv2
topic = notifications
log_level = WARN
[root@controller ~]# systemctl restart openstack-swift-proxy.service # restart the Object Storage proxy service
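
After the proxy restarts, the new middleware together with the central agent's polling should produce Object Storage meters (an optional check, not in the original post):
[root@controller ~]# ceilometer meter-list | grep storage.objects # meters such as storage.objects and storage.objects.size confirm Object Storage metering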

8. Verification

Perform these steps on the controller node.

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# ceilometer meter-list | grep image # list available meters, filtered to the Image service
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| image | gauge | image | 68259f9f-c5c1---cef301cedb2b | None | b1d045eb3d62421592616d56a69c4de3 |
| image.size | gauge | B | 68259f9f-c5c1---cef301cedb2b | None | b1d045eb3d62421592616d56a69c4de3 |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
[root@controller ~]# glance image-list | grep 'cirros' | awk '{ print $2 }' # get the ID of the CirrOS image from the Image service
68259f9f-c5c1---cef301cedb2b
[root@controller ~]# glance image-download 68259f9f-c5c1---cef301cedb2b > /tmp/cirros.img # download the CirrOS image
[root@controller ~]# ceilometer meter-list | grep image # list available meters again to validate detection of the image download
| image | gauge | image | 68259f9f-c5c1---cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.download | delta | B | 68259f9f-c5c1---cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.serve | delta | B | 68259f9f-c5c1---cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.size | gauge | B | 68259f9f-c5c1---cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
[root@controller ~]# ceilometer statistics -m image.download -p 60 # retrieve usage statistics from the image.download meter, with a 60-second period
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| Period | Period Start | Period End | Max | Min | Avg | Sum | Count | Duration | Duration Start | Duration End |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| 60 | --16T12::46.351000 | --16T12::46.351000 | 13287936.0 | 13287936.0 | 13287936.0 | 13287936.0 | 1 | 0.0 | --16T12::23.052000 | --16T12::23.052000 |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
[root@controller ~]# ll /tmp/cirros.img # check that the size of the downloaded image matches the metered usage
-rw-r--r-- 1 root root 13287936 Feb 16 /tmp/cirros.img