Deploying a Redis Cluster on Kubernetes

Date: 2023-01-31 12:07:35

Deploy a Redis cluster with multiple masters and multiple slaves.

Preparation


Deploy the stateful service with a StatefulSet

About StatefulSet


A StatefulSet is a variant of a Deployment for managing stateful services: each Pod has a fixed name, Pods start and stop in a defined order, and the workload typically needs persistent storage.
A Deployment is exposed through a regular Service.

A StatefulSet is paired with a headless Service instead. A headless Service differs from a regular Service in that it has no Cluster IP: resolving its name returns the endpoint list of all Pods backing that headless Service.

On top of the headless Service, the StatefulSet also creates a DNS name for each of its Pods, in the format:

$(podname).$(headless service name)
FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
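
For the deployment built later in this article (StatefulSet redis-app and headless Service redis-service, assuming the default namespace), the first Pod therefore resolves as redis-app-0.redis-service.default.svc.cluster.local. As a sketch, once everything is running you can verify the resolution from inside the cluster with a throwaway busybox Pod:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup redis-app-0.redis-service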


In other words, for a stateful service it is best to give each node a fixed network identity (such as a domain name), which of course also requires support from the application itself (ZooKeeper, for example, supports writing host domain names into its configuration file).
A StatefulSet, building on a headless Service (i.e. a Service without a Cluster IP), gives each Pod a stable network identity (including the Pod's hostname and DNS records) that stays the same even after the Pod is rescheduled. Combined with PV/PVC, a StatefulSet also provides stable persistent storage: even after rescheduling, a Pod can still reach its original data.
The architecture below deploys Redis with a StatefulSet: masters and slaves alike are replicas of the same StatefulSet, data is persisted through PVs, and the cluster is exposed as a Service that accepts client requests.

Deployment process


Steps to set up Redis on a StatefulSet:

1. Create the NFS storage
2. Create the PVs
3. Create the ConfigMap
4. Create the headless Service
5. Create the Redis StatefulSet (its volumeClaimTemplates generate the PVCs)
6. Initialize the Redis cluster

1. Create the NFS storage


The NFS storage gives Redis a stable backend: when a Redis Pod restarts or is migrated, it can still reach its original data. Here we first set up NFS, and then use PVs to mount remote NFS paths for Redis.

Install NFS:

yum -y install nfs-utils   # main package, provides the file system
yum -y install rpcbind     # provides the RPC protocol

Then create /etc/exports to define the paths to share (rw = read-write, sync = synchronous writes, no_root_squash = do not map root to an anonymous user, so that root inside the client keeps write access):

cat > /etc/exports << EOF
/ssd/nfs/k8s/redis/pv1 192.168.10.0/24(rw,sync,no_root_squash)
/ssd/nfs/k8s/redis/pv2 192.168.10.0/24(rw,sync,no_root_squash)
/ssd/nfs/k8s/redis/pv3 192.168.10.0/24(rw,sync,no_root_squash)
/ssd/nfs/k8s/redis/pv4 192.168.10.0/24(rw,sync,no_root_squash)
/ssd/nfs/k8s/redis/pv5 192.168.10.0/24(rw,sync,no_root_squash)
/ssd/nfs/k8s/redis/pv6 192.168.10.0/24(rw,sync,no_root_squash)
 
EOF

Create the corresponding directories:

mkdir -p /ssd/nfs/k8s/redis/pv{1..6}


Next, start the rpcbind and NFS services; exportfs -v then shows the active exports:

systemctl restart rpcbind
systemctl restart nfs
systemctl enable nfs

[root@itrainning-149 ~]# exportfs -v
/ssd/nfs/logdmtm
		192.168.10.75(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,all_squash)
/ssd/nfs/logdmtm
		192.168.10.7(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,all_squash)
/ssd/nfs/k8s/redis/pv1
		192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/ssd/nfs/k8s/redis/pv2
		192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/ssd/nfs/k8s/redis/pv3
		192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/ssd/nfs/k8s/redis/pv4
		192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/ssd/nfs/k8s/redis/pv5
		192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/ssd/nfs/k8s/redis/pv6
		192.168.10.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/ssd/nfs/logmetlife
		<world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,all_squash)

On the client:

yum -y install nfs-utils

Check the shares exported by the storage server:

[root@work75 ~]# showmount -e 192.168.0.149
Export list for 192.168.0.149:
/ssd/nfs/logmetlife    *
/ssd/nfs/k8s/redis/pv6 192.168.10.0/24
/ssd/nfs/k8s/redis/pv5 192.168.10.0/24
/ssd/nfs/k8s/redis/pv4 192.168.10.0/24
/ssd/nfs/k8s/redis/pv3 192.168.10.0/24
/ssd/nfs/k8s/redis/pv2 192.168.10.0/24
/ssd/nfs/k8s/redis/pv1 192.168.10.0/24
/ssd/nfs/logdmtm       192.168.10.7,192.168.10.75
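
As an optional sanity check, mount one of the exports from the client and confirm write access (a sketch; /mnt is just a scratch mount point here):

mount -t nfs 192.168.0.149:/ssd/nfs/k8s/redis/pv1 /mnt
touch /mnt/write-test && rm /mnt/write-test   # should succeed silently
umount /mnt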


2. Create the PVs

Each Redis Pod needs its own PV to store its data, so we create a pv.yaml file containing 6 PVs:

cat > pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.149
    path: "/ssd/nfs/k8s/redis/pv1"
 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.149
    path: "/ssd/nfs/k8s/redis/pv2"
 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.149
    path: "/ssd/nfs/k8s/redis/pv3"
 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.149
    path: "/ssd/nfs/k8s/redis/pv4"
 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.149
    path: "/ssd/nfs/k8s/redis/pv5"
 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.149
    path: "/ssd/nfs/k8s/redis/pv6"
EOF
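
Apply the manifest and check that all six PVs are registered (assuming kubectl is already pointed at the target cluster):

kubectl apply -f pv.yaml
kubectl get pv   # all six should show STATUS Available until PVCs bind them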

3. Create the ConfigMap

Here we turn the Redis configuration file directly into a ConfigMap, a convenient way to deliver configuration. In the redis.conf below, appendonly enables AOF persistence, cluster-enabled turns on cluster mode, cluster-config-file is where each node persists its cluster state (Redis maintains this file itself), cluster-node-timeout is the failure-detection timeout in milliseconds, and dir sets the data directory:

cat > redis.conf << EOF
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
EOF

Create a ConfigMap named redis-conf from it:

kubectl create configmap redis-conf --from-file=redis.conf

Inspect the ConfigMap:

kubectl describe cm redis-conf

Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>

As shown above, all of the settings from redis.conf are now stored in the redis-conf ConfigMap.


4. Create the headless Service

The headless Service is the foundation of the StatefulSet's stable network identities, so we create it in advance. Prepare headless-service.yaml as follows:
 

[root@master redis]# cat headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis

Create it:

kubectl create -f headless-service.yaml

Check it:

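As a headless Service it has no cluster IP, so the CLUSTER-IP column should read None:

kubectl get svc redis-service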

5. Create the Redis cluster nodes

With the headless Service in place, we can use a StatefulSet to create the Redis cluster nodes, which is the core of this article. First create the redis.yml file:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app
spec:
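  # serviceName must match the headless Service created earlier so each Pod gets a stable DNS record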
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis
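        # the tag is left unpinned here; consider pinning one (e.g. redis:5) for reproducible deployments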
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
            - name: redis
              containerPort: 6379
              protocol: "TCP"
            - name: cluster
              containerPort: 16379
              protocol: "TCP"
        volumeMounts:
          - name: "redis-conf"
            mountPath: "/etc/redis"
          - name: "redis-data"
            mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-conf"
          items:
            - key: "redis.conf"
              path: "redis.conf"
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 200M
  selector:
    matchLabels:
      app: redis

As above, 6 Redis Pods are created in total; 3 will serve as masters and the other 3 as their slaves. The Redis configuration is delivered by mounting the redis-conf ConfigMap into the container at /etc/redis/redis.conf through a volume. The Redis data directory is declared via volumeClaimTemplates (i.e. PVCs), which bind to the PVs we created earlier.
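
A minimal sketch for creating the StatefulSet and watching it come up (Pod and PVC names follow from the manifest above):

kubectl create -f redis.yml
kubectl get pods -l app=redis -w   # Pods start in order: redis-app-0, then redis-app-1, ...
kubectl get pvc   # expect six Bound claims, redis-data-redis-app-0 through redis-data-redis-app-5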

A key concept here is affinity; please see the official documentation for the full details. podAntiAffinity expresses anti-affinity: it determines which Pods a given Pod should not share a topology domain with, and it is typically used to spread the Pods of one service across different hosts or topology domains to make the service more resilient.
preferredDuringSchedulingIgnoredDuringExecution means the rule is applied on a best-effort basis during scheduling: if it cannot be satisfied, the Pod may still be scheduled onto a matching host, and once the Pod is running the rule is never re-evaluated.

Here, matchExpressions tells the scheduler that a Redis Pod should preferably not land on a node that already runs a Pod labeled app=redis; in other words, nodes that already host Redis should ideally not receive another Redis Pod. But since we only have three nodes and six replicas, under preferredDuringSchedulingIgnoredDuringExecution these peas have no choice but to squeeze into shared pods; a little squeezing never hurt anyone~

Also, following StatefulSet conventions, the hostnames of the 6 Redis Pods are named $(statefulset name)-$(ordinal), i.e. redis-app-0 through redis-app-5, which you can confirm as shown below:
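
For example, listing the Pods together with their node placement (assuming the default namespace):

kubectl get pods -l app=redis -o wide   # expect redis-app-0 through redis-app-5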
