Mounting Ceph as backend storage for Kubernetes

Date: 2022-08-28 13:37:59

I. On the Ceph cluster:

1. Create a pool:
ceph osd pool create k8s 64
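The trailing `64` is the pool's placement-group count (pg_num). As an aside not in the original post, a common rule of thumb sizes it as (number of OSDs × 100) / replica count, rounded to a power of two; a quick sketch with made-up cluster numbers:

```shell
#!/bin/sh
# Rule-of-thumb PG sizing: (num_osds * 100) / replicas,
# rounded down here to the nearest power of two.
num_osds=6      # hypothetical OSD count
replicas=3      # hypothetical replication factor
target=$(( num_osds * 100 / replicas ))
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "pg_num: $pg"   # prints "pg_num: 128" for 6 OSDs, 3 replicas
```

A small test cluster is why a modest value like 64 is reasonable here.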

2. Create an image:
rbd create k8s-pv --size 1024G -p k8s

3. Check that it was created:
rbd list -p k8s

4. Temporarily disable image features the kernel client does not support:
rbd feature disable k8s-pv exclusive-lock object-map fast-diff deep-flatten -p k8s

5. Map the k8s-pv image into the kernel:
sudo rbd map k8s-pv -p k8s

6. Verify:
rbd showmapped

II. On the Kubernetes cluster:

1. Method one:

1. Install the Ceph client on every node:
yum install -y ceph-common

2. Copy the Ceph configuration file ceph.conf into /etc/ceph on every node:
scp ceph.conf root@192.168.73.64:/etc/ceph
scp ceph.conf root@192.168.73.65:/etc/ceph
scp ceph.conf root@192.168.73.66:/etc/ceph

3. Copy the cluster's ceph.client.admin.keyring file into /etc/ceph on the Kubernetes control node:
scp ceph.client.admin.keyring root@192.168.73.66:/etc/ceph

4. Generate the base64-encoded key:
grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
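This pipeline can be checked locally without a live cluster. The sample keyring below uses the same (demonstration, not real) key value that appears in the Secret in the next step:

```shell
#!/bin/sh
# Write a sample keyring in the same format as ceph.client.admin.keyring,
# then run the same grep | awk | base64 pipeline on it.
cat > /tmp/sample.keyring <<'EOF'
[client.admin]
	key = AQCM9W9aN2OHGxAAvrR1ctbCHZheKfrF47kY9A==
EOF
grep key /tmp/sample.keyring | awk '{printf "%s", $NF}' | base64
# -> QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
```

The `printf "%s"` matters: it strips the trailing newline so the base64 output encodes only the key itself.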

5. Create the Ceph secret:
cat ceph-secret.yaml
**********************
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==

kubectl create -f ceph-secret.yaml
kubectl get secret
6. Create the PV:
cat ceph-pv.yaml
********************
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1000G
  accessModes:
    - ReadWriteMany
  rbd:
    monitors:
      - 192.168.78.101:6789
    pool: k8s
    image: k8s-pv
    user: admin
    readOnly: false
    fsType: ext4
    secretRef:
      name: ceph-secret
  persistentVolumeReclaimPolicy: Recycle

kubectl create -f ceph-pv.yaml
kubectl get pv


7. Create the PVC:
cat ceph-pvc.yaml
********************
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeName: ceph-pv
  resources:
    requests:
      storage: 50G

kubectl create -f ceph-pvc.yaml
kubectl get pvc

8. Create the pod:
cat ceph-pod.yaml
*******************
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
    - name: nginx
      image: nginx
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-rbd-vol1
          mountPath: /mnt/ceph-rbd-pvc/busybox
          readOnly: false
  volumes:
    - name: ceph-rbd-vol1
      persistentVolumeClaim:
        claimName: ceph-pvc

kubectl create -f ceph-pod.yaml
kubectl get pod
kubectl describe pod ceph-pod1


9. Verify:
kubectl exec -it ceph-pod1 -- /bin/bash   # once inside, run df -h to confirm the RBD mount

2. Method two:

1. Replace step 6 of method one with a StorageClass:
cat ceph-class.yaml
**********************
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-web
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.78.101:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s
  userId: admin
  userSecretName: ceph-secret

kubectl create -f ceph-class.yaml


The subsequent steps are the same as in method one, except that the PV from step 6 no longer needs to be created by hand: the provisioner creates volumes on demand.
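With dynamic provisioning the PVC references the StorageClass instead of a pre-created PV. A sketch of what that PVC could look like (the name and size are examples; `ceph-web` matches the class defined above):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc-dynamic    # example name, not from the original post
spec:
  accessModes:
    - ReadWriteOnce         # RBD block volumes are single-writer
  storageClassName: ceph-web
  resources:
    requests:
      storage: 50G
```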