Kubernetes (k8s) Storage Management with Volumes (Part 3): NFS Volumes

Posted: 2022-12-10 18:07:45

1. System environment

Server OS version | Docker version | Kubernetes (k8s) cluster version | CPU architecture
CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | v1.21.9 | x86_64

Kubernetes cluster architecture: k8scloude1 is the master node, and k8scloude2 and k8scloude3 are the worker nodes.

Server | OS version | CPU architecture | Processes | Role
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico | k8s master node
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node

2. Preface

Kubernetes (k8s) offers many volume types; this article covers one of them, the NFS volume.

Using volumes presupposes a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the post 《Centos7 安装部署Kubernetes(k8s)集群》 (Installing and deploying a Kubernetes (k8s) cluster on CentOS 7): https://www.cnblogs.com/renshengdezheli/p/16686769.html

3. NFS Volumes

3.1 NFS volume overview

An nfs volume mounts an existing NFS (Network File System) share into your Pod. Unlike emptyDir, whose contents are erased when the Pod is removed, the contents of an nfs volume are preserved when the Pod is deleted; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, and that data can be shared between Pods.
Note: before using an NFS volume, you must have your own NFS server running with the target share exported.

Also note that NFS mount options cannot be specified in the Pod spec. You can either set mount options on the server side or use /etc/nfsmount.conf. Alternatively, you can mount NFS volumes via persistent volumes, which do allow mount options to be set.
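As a rough illustration of that last point (not used in the walkthrough below), a PersistentVolume can carry mountOptions for an NFS share. The PV name and the mount options here are hypothetical; only server and path match this article's setup:

apiVersion: v1
kind: PersistentVolume
metadata:
  #hypothetical PV name
  name: nfs-pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  #mountOptions are allowed on a PV, unlike in a Pod spec
  mountOptions:
  - nfsvers=4.1
  - hard
  nfs:
    server: 192.168.110.133
    path: /sharedir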

3.2 Configuring the NFS server and shared directory

This example uses NFS as the shared storage: the NFS server is installed on one machine, and the NFS client on the two k8s worker nodes.

The etcd1 machine acts as the NFS server; install NFS on it:

[root@etcd1 ~]# yum -y install nfs-utils

[root@etcd1 ~]# rpm -qa | grep nfs
libnfsidmap-0.25-19.el7.x86_64
nfs-utils-1.3.0-0.68.el7.2.x86_64

Start NFS:

#Enable nfs-server to start at boot and start it now
[root@etcd1 ~]# systemctl enable nfs-server --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

#Check the nfs-server status
[root@etcd1 ~]# systemctl status nfs-server 
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2022-01-18 17:24:24 CST; 8s ago
  Process: 1469 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
  Process: 1453 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 1451 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 1453 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Jan 18 17:24:24 etcd1 systemd[1]: Starting NFS server and services...
Jan 18 17:24:24 etcd1 systemd[1]: Started NFS server and services.
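If firewalld is running on the NFS server, the NFS-related services also have to be allowed through. The walkthrough here assumes the firewall is already disabled, so the following is only a hedged sketch:

#allow NFS, rpcbind and mountd through firewalld, then reload the rules
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload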

Create the NFS shared directory and export /sharedir:

#Create /sharedir as the shared directory
[root@etcd1 ~]# mkdir /sharedir

[root@etcd1 ~]# vim /etc/exports

#Export the /sharedir directory
[root@etcd1 ~]# cat /etc/exports
/sharedir *(rw,async,no_root_squash)

[root@etcd1 ~]# exportfs -arv
exporting *:/sharedir
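The export above allows any client (*) and uses no_root_squash, which is fine for a lab but loose for anything shared. A tightened sketch, assuming the k8s nodes sit in 192.168.110.0/24 (adjust to your network), restricts the clients and uses synchronous writes:

#restrict the export to the node subnet; sync trades some speed for safer writes
/sharedir 192.168.110.0/24(rw,sync,no_root_squash)

After editing /etc/exports, re-export with exportfs -arv as above.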

3.3 Configuring the NFS clients

Install the NFS client on the worker nodes of the k8s cluster:

[root@k8scloude3 ~]# yum -y install nfs-utils

 #Install the NFS client
[root@k8scloude2 ~]# yum -y install nfs-utils

Check which directory the etcd1 machine (192.168.110.133) exports:

[root@k8scloude2 ~]# showmount -e 192.168.110.133
Export list for 192.168.110.133:
/sharedir *

Mount the 192.168.110.133:/sharedir export at /mnt:

[root@k8scloude2 ~]# mount 192.168.110.133:/sharedir /mnt

[root@k8scloude2 ~]# df -hT /mnt
Filesystem                Type  Size  Used Avail Use% Mounted on
192.168.110.133:/sharedir nfs4  150G  2.5G  148G    2% /mnt
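As noted in section 3.1, mount options for the NFS volumes that the kubelet mounts later cannot be set in the Pod spec; if client-side defaults are needed, they can go into /etc/nfsmount.conf on each worker. A hypothetical sketch (option names follow nfsmount.conf(5); verify them on your distribution):

[ NFSMount_Global_Options ]
#default every NFS mount on this client to NFSv4 hard mounts
Defaultvers=4
Hard=True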

3.4 Creating a pod with an NFS volume

Configure the nfs volume: set the volume type to nfs and specify the NFS server IP and the shared directory.

[root@k8scloude1 volume]# vim share-nfs.yaml 

[root@k8scloude1 volume]# cat share-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: hostpath
  name: nfsshare
spec:
  #nodeName pins this pod to the k8scloude2 node
  nodeName: k8scloude2
  terminationGracePeriodSeconds: 0
  volumes:
  - name: v1
    #the volume type is nfs
    nfs:
      #NFS server address
      server: 192.168.110.133
      #shared directory
      path: /sharedir
      #readOnly: true would make the mount read-only
      #readOnly: true
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: h1
    resources: {}
    volumeMounts:
    - name: v1
      #mount volume v1 at the /xx directory
      mountPath: /xx
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
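The nfs volume source only takes the three fields used here (server, path and the optional readOnly); their reference documentation can be pulled up with kubectl explain:

kubectl explain pod.spec.volumes.nfs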

Create the pod:

[root@k8scloude1 volume]# kubectl apply -f share-nfs.yaml 
pod/nfsshare created

[root@k8scloude1 volume]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nfsshare   1/1     Running   0          3s    10.244.112.189   k8scloude2   <none>           <none>
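To double-check which server and path the running pod actually mounted, the Volumes section of kubectl describe can be inspected (output omitted here):

kubectl describe pod nfsshare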

Enter the pod:

[root@k8scloude1 volume]# kubectl exec -it nfsshare -- bash 
root@nfsshare:/# ls /xx/

#Write data into the shared directory
root@nfsshare:/# echo "well well well" >/xx/log.txt
root@nfsshare:/# 
root@nfsshare:/# exit
exit

The corresponding file shows up on the k8scloude2 machine (which still has the share mounted at /mnt):

[root@k8scloude2 ~]# cat /mnt/log.txt 
well well well

The etcd1 machine also has the file:

[root@etcd1 ~]# cat /sharedir/log.txt 
well well well

Delete the pod:

[root@k8scloude1 volume]# kubectl delete pod nfsshare --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nfsshare" force deleted

[root@k8scloude1 volume]# kubectl get pods -o wide
No resources found in volume namespace.

Configure the nfs volume in the same way (type nfs, NFS server IP and shared directory), but this time let the pod run on k8scloude3.

[root@k8scloude1 volume]# vim share-nfs.yaml 

#nodeName: k8scloude3 pins the pod to k8scloude3
[root@k8scloude1 volume]# cat share-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: hostpath
  name: nfsshare
spec:
  nodeName: k8scloude3
  terminationGracePeriodSeconds: 0
  volumes:
  - name: v1
    nfs:
      server: 192.168.110.133
      path: /sharedir
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: h1
    resources: {}
    volumeMounts:
    - name: v1
      mountPath: /xx
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the pod:

[root@k8scloude1 volume]# kubectl apply -f share-nfs.yaml 
pod/nfsshare created

[root@k8scloude1 volume]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nfsshare   1/1     Running   0          3s    10.244.251.251   k8scloude3   <none>           <none>

Because the pod uses the shared NFS volume, the data written earlier is already visible inside the new pod:

[root@k8scloude1 volume]# kubectl exec -it nfsshare -- bash
root@nfsshare:/# cat /xx/log.txt 
well well well
root@nfsshare:/# 
root@nfsshare:/# exit
exit

With the pod scheduled on k8scloude3, the corresponding NFS mount also appears on that node; the kubelet mounts the share under /var/lib/kubelet/pods/<pod-uid>/volumes/:

[root@k8scloude3 ~]# df -h | grep 192.168.110.133
192.168.110.133:/sharedir  150G  2.5G  148G    2% /var/lib/kubelet/pods/4ebc5f6d-e13c-4bea-a323-3067c4a6e966/volumes/kubernetes.io~nfs/v1
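Because several pods can mount the same export at the same time, the identical volume definition also works in a multi-replica workload. A minimal Deployment sketch, hypothetical and not part of this walkthrough (only the server and path come from the setup above):

apiVersion: apps/v1
kind: Deployment
metadata:
  #hypothetical Deployment name
  name: nfsshare-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfsshare-deploy
  template:
    metadata:
      labels:
        app: nfsshare-deploy
    spec:
      volumes:
      - name: v1
        nfs:
          server: 192.168.110.133
          path: /sharedir
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: h1
        volumeMounts:
        - name: v1
          #both replicas see the same /xx contents
          mountPath: /xx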

Delete the pod:

[root@k8scloude1 volume]# kubectl delete pod nfsshare --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nfsshare" force deleted

[root@k8scloude1 volume]# kubectl get pods -o wide
No resources found in volume namespace.

Once the pod is deleted, that mount disappears:

[root@k8scloude3 ~]# df -h | grep 192.168.110.133

Problems with NFS volumes used this way: anyone creating a Pod can connect to the storage server directly, and the clients effectively work with root privileges (the export uses no_root_squash), which is a security risk for the server; users also have to spend time learning NFS details instead of simply requesting storage.