Cloud-Native Kubernetes: Deploying SonarQube on K8S 1.29

Date: 2024-04-29 07:05:38

一、Experiment

1. Environment

(1) Hosts

Table 1  Hosts

Host     Role              Version   IP               Notes
master   K8S master node   1.29.0    192.168.204.8
node1    K8S worker node   1.29.0    192.168.204.9
node2    K8S worker node   1.29.0    192.168.204.10   Kuboard already deployed

(2) Check the cluster from the master node

1) List the nodes
kubectl get node
 
2) List the nodes with details
kubectl get node -o wide
 

(3) Check the pods

[root@master ~]# kubectl get pod -A

(4) Access Kuboard

http://192.168.204.10:30080/kuboard/cluster

View the nodes

2. Deploy Helm on K8S 1.29

(1) Reference

https://github.com/helm/helm/releases/tag/v3.14.4

The latest release at the time of writing is v3.14.4.

(2) Deploy Helm

1) Install Helm
// Download the Helm client binary package
helm-v3.14.4-linux-amd64.tar.gz
 
tar -zxvf helm-v3.14.4-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version
 
// Enable command completion
source <(helm completion bash)

Installation
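To keep completion available in new shells as well, the completion script can be written out to the bash-completion directory (a minimal sketch; the path may differ depending on the distribution):

helm completion bash > /etc/bash_completion.d/helm
source /etc/bash_completion.d/helm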

(3) Install the chart with Helm

1) Reference
https://github.com/SonarSource/helm-chart-sonarqube

2) Add the chart repository
// Add the specified chart repository
helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube

3) Pull the specified version
helm pull sonarqube/sonarqube --version 10.5.0+2748

Check the latest version

Install

Download
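To confirm which chart versions are available before pulling, the repository index can be refreshed and searched (a sketch using standard helm commands):

helm repo update
helm search repo sonarqube/sonarqube --versions | head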

(4) Move the chart archive

cd ~ && mkdir sonarqube

mv sonarqube-10.5.0+2748.tgz sonarqube/

cd sonarqube/;ls

3. Set up NFS

(1) Check that the rpcbind and nfs-utils packages are installed

[root@master ~]# rpm -q rpcbind nfs-utils

(2) Create the directory and set permissions

[root@master ~]# mkdir -p /opt/sonarqube

[root@master opt]# chmod 777 sonarqube/

(3) Open the NFS configuration file

[root@master opt]# vim /etc/exports

(4) Configure the export

Grant read/write access to all networks, use synchronous writes, and do not squash root privileges on the shared directory:

……
/opt/sonarqube *(rw,sync,no_root_squash)

(5) Apply the NFS configuration

[root@master opt]# exportfs -r

(6) Check the listening ports

[root@master opt]# ss -antp | grep rpcbind

(7) Check the exports

[root@master opt]# showmount -e

Check from another node:

[root@node1 ~]# showmount -e master
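As an optional sanity check, the export can be mounted manually from a worker node (a sketch; it assumes the NFS client utilities are installed on node1 and /mnt is free):

[root@node1 ~]# mount -t nfs 192.168.204.8:/opt/sonarqube /mnt
[root@node1 ~]# touch /mnt/test && ls /mnt && rm -f /mnt/test
[root@node1 ~]# umount /mnt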

4. Install nfs-provisioner on K8S 1.29

(1) Reference

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases

(2) Create a directory

[root@master ~]# cd ~ && mkdir nfs-subdir-external-provisioner
[root@master ~]# cd nfs-subdir-external-provisioner/

(3) Download, option 1

Add the Helm repository:

[root@master nfs-subdir-external-provisioner]# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner

Pull the chart:

[root@master ~]# helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner

(4) Download, option 2

Reference:

https://artifacthub.io/packages/helm/nfs-subdir-external-provisioner/nfs-subdir-external-provisioner

Click "Install" on the right; in the pop-up, click "this link" at the bottom right to download the package.

(5) Move and unpack (using option 2 above)

[root@master ~]# mv nfs-subdir-external-provisioner-4.0.18.tgz nfs-subdir-external-provisioner

[root@master nfs-subdir-external-provisioner]# tar -xvf nfs-subdir-external-provisioner-4.0.18.tgz 

(6) Import the image on the worker node

Load the local image archive:

[root@node1 ~]# docker load --input nfs-subdir-external-provisioner.tar

Re-tag the image:

[root@node1 ~]# docker tag k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

(7) Install from the master node

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/ --set nfs.server=192.168.204.8 --set nfs.path=/opt/sonarqube --set storageClass.name=nfs-client --set storageClass.defaultClass=true -n nfs-provisioner --create-namespace 
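After the install, it is worth confirming that the StorageClass was created and marked as the default (a sketch; the expected name follows the --set storageClass.name=nfs-client flag above):

kubectl get storageclass   # nfs-client should be listed and marked (default)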

(8) Check the pod

Detailed view:

[root@master ~]# kubectl describe pod nfs-subdir-external-provisioner-567b586d45-xz8r6 -n nfs-provisioner
Name:             nfs-subdir-external-provisioner-567b586d45-xz8r6
Namespace:        nfs-provisioner
Priority:         0
Service Account:  nfs-subdir-external-provisioner
Node:             node1/192.168.204.9
Start Time:       Sat, 27 Apr 2024 19:38:39 +0800
Labels:           app=nfs-subdir-external-provisioner
                  pod-template-hash=567b586d45
                  release=nfs-subdir-external-provisioner
Annotations:      cni.projectcalico.org/containerID: 8f4479951e36de27cc21dcce8b7bf11a34eb838107d4457c6ca352acbf69399e
                  cni.projectcalico.org/podIP: 10.244.166.167/32
                  cni.projectcalico.org/podIPs: 10.244.166.167/32
Status:           Running
IP:               10.244.166.167
IPs:
  IP:           10.244.166.167
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-567b586d45
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   docker://9c18e809cc7179a55d66a1886b6addbd034841a6010fd07c4b4049449ab79814
    Image:          registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       docker://sha256:932b0bface75b80e713245d7c2ce8c44b7e127c075bd2d27281a16677c8efef3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 27 Apr 2024 19:38:41 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.204.8
      NFS_PATH:          /opt/sonarqube
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t248d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.204.8
    Path:      /opt/sonarqube
    ReadOnly:  false
  kube-api-access-t248d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Pulled     88s   kubelet            Container image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
  Normal  Created    88s   kubelet            Created container nfs-subdir-external-provisioner
  Normal  Started    87s   kubelet            Started container nfs-subdir-external-provisioner
  Normal  Scheduled  81s   default-scheduler  Successfully assigned nfs-provisioner/nfs-subdir-external-provisioner-567b586d45-xz8r6 to node1

(9) Check in Kuboard

Workloads

Pods

Details
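A quick way to prove that dynamic provisioning works end to end is to create a throwaway PVC against the nfs-client StorageClass and check that it binds (a sketch; test-claim is a hypothetical name and is deleted afterwards):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-claim    # should become Bound, and a subdirectory appears under /opt/sonarqube
kubectl delete pvc test-claim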

5. Deploy SonarQube on K8S 1.29 (method 1)

(1) Unpack the chart

[root@master ~]# cd sonarqube/
[root@master sonarqube]# ls
[root@master sonarqube]# tar -xvf sonarqube-10.5.0+2748.tgz 

(2) Edit the values.yaml file

[root@master sonarqube]# cd sonarqube/
[root@master sonarqube]# vim values.yaml 
……
# Search the file for the "service" key
service:
  type: NodePort  # change the type to NodePort
  externalPort: 9000
  internalPort: 9000
  nodePort: 30090 # the port exposed externally via NodePort
……
persistence:
  enabled: true  # set to true
……
  storageClass: nfs-client  # set to the cluster's default StorageClass


Before:

After:
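If you prefer not to edit values.yaml, the same settings can be passed on the command line when installing in step (4) below (a sketch built only from the keys shown above):

helm install sonarqube ./sonarqube -n sonarqube \
  --set service.type=NodePort \
  --set service.nodePort=30090 \
  --set persistence.enabled=true \
  --set persistence.storageClass=nfs-client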

(3) Create a namespace for the SonarQube installation

[root@master sonarqube]# kubectl create ns sonarqube

(4) Install SonarQube from the chart

[root@master sonarqube]# helm install sonarqube ./sonarqube -n sonarqube
NAME: sonarqube
LAST DEPLOYED: Sat Apr 27 20:12:09 2024
NAMESPACE: sonarqube
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace sonarqube -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
  export NODE_IP=$(kubectl get nodes --namespace sonarqube -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
WARNING: 
         Please note that the SonarQube image runs with a non-root user (uid=1000) belonging to the root group (guid=0). In this way, the chart can support arbitrary user ids as recommended in OpenShift.
         Please visit https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html#use-uid_create-images for more information.

WARNING: The embedded PostgreSQL is intended for evaluation only, it is DEPRECATED, and it will be REMOVED in a future release.
         Please visit https://artifacthub.io/packages/helm/sonarqube/sonarqube#production-use-case for more information.

(5) Run the commands from the NOTES

[root@master sonarqube]# export NODE_PORT=$(kubectl get --namespace sonarqube -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
[root@master sonarqube]# export NODE_IP=$(kubectl get nodes --namespace sonarqube -o jsonpath="{.items[0].status.addresses[0].address}")
[root@master sonarqube]# echo http://$NODE_IP:$NODE_PORT
http://192.168.204.8:30090

(6) Pull the postgresql image on the worker node

[root@node2 ~]# docker pull docker.io/bitnami/postgresql:11.14.0-debian-10-r22

# An alternative mirror can also be used: m.daocloud.io/docker.io/bitnami/postgresql:11.14.0-debian-10-r22

Check the pods in Kuboard

(7) Pull the sonarqube image on the worker nodes

Pull the image on node2:

[root@node2 ~]# docker pull sonarqube:10.5.0-community

Export the image on node2:

[root@node2 ~]# docker save -o sonarqube.tar sonarqube:10.5.0-community

Copy the Docker image to node1:

[root@node2 ~]# scp sonarqube.tar root@node1:~

Load the Docker image on node1:

[root@node1 ~]# docker load -i sonarqube.tar
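The save/scp/load steps can also be collapsed into a single pipeline (a sketch; it assumes SSH access from node2 to node1):

[root@node2 ~]# docker save sonarqube:10.5.0-community | ssh root@node1 'docker load'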

(8) Check the service

(9) Check the volume

[root@master sonarqube]# cd /opt/sonarqube/
[root@master sonarqube]# ls

Check in Kuboard

(10) Apply configuration changes with helm upgrade

[root@master sonarqube]# helm upgrade -f sonarqube/values.yaml sonarqube ./sonarqube -n sonarqube
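Each upgrade creates a new revision, which can be inspected and rolled back if needed (a sketch using standard helm commands):

[root@master sonarqube]# helm history sonarqube -n sonarqube
[root@master sonarqube]# helm rollback sonarqube 1 -n sonarqube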

(11) Remove the release

[root@master sonarqube]# helm uninstall sonarqube -n sonarqube

6. Deploy SonarQube on K8S 1.29 (method 2)

(1) Create the NFS shares

postgresql

[root@master opt]# cd ~
[root@master ~]#  mkdir -p /opt/postgre
[root@master ~]# cd /opt
[root@master opt]# chmod 777 postgre/
[root@master opt]# vim /etc/exports
[root@master opt]# exportfs -r
[root@master opt]# showmount -e
Export list for master:
/opt/postgre   *
/opt/sonar     *
/opt/sonarqube *
/opt/nexus     *
/opt/k8s       *

sonarqube

[root@master sonarqube]# cd ~
[root@master ~]# mkdir -p /opt/sonar
[root@master ~]# 
[root@master ~]# cd /opt
[root@master opt]# chmod 777 sonar/
[root@master opt]# vim /etc/exports
[root@master opt]# exportfs -r
[root@master opt]# showmount -e
Export list for master:
/opt/sonar     *
/opt/sonarqube *
/opt/nexus     *
/opt/k8s       *
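The lines added to /etc/exports for the two new directories follow the same pattern as the earlier sonarqube export (a sketch):

/opt/postgre *(rw,sync,no_root_squash)
/opt/sonar   *(rw,sync,no_root_squash)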

(2) Create the PV for postgresql

[root@master ~]# vim pv-postgre.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-postgre
spec:
  capacity:
    storage: 5Gi    # capacity
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce     # access mode: read/write by a single node
  persistentVolumeReclaimPolicy: Retain  # reclaim policy; Retain means manual reclamation
  storageClassName: "pv-postgre"       # storage class name (matched by the PVC)
  nfs:
    path: /opt/postgre   # shared path on the NFS server
    server: 192.168.204.8    # NFS server address

(3) Create the resource

[root@master ~]# kubectl apply -f pv-postgre.yaml 

(4) Check the PV

[root@master ~]# kubectl get pv

(5) Pull the image

On node2:

[root@node2 ~]# docker pull postgres:11.4

(6) Export the image

[root@node2 ~]# docker save -o postgres.tar postgres:11.4

(7) Copy the Docker image to node1

[root@node2 ~]# scp postgres.tar root@node1:~ 

(8) Load the Docker image on node1

[root@node1 ~]# docker load -i postgres.tar 

(9) Deploy postgresql

[root@master ~]# vim postgre.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgre-pvc
  namespace: sonarqube
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "pv-postgre"
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-sonar
  labels:
    app: postgres-sonar
  namespace: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-sonar
  template:
    metadata:
      labels:
        app: postgres-sonar
    spec:
      containers:
        - name: postgres-sonar
          image: postgres:11.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "sonarDB"
            - name: POSTGRES_USER
              value: "sonarUser"
            - name: POSTGRES_PASSWORD
              value: "123456"
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 500m
              memory: 1024Mi
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgre-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-sonar
  namespace: sonarqube
  labels:
    app: postgres-sonar
spec:
  ports:
    - port: 5432
      protocol: TCP
      targetPort: 5432
  selector:
    app: postgres-sonar

(10) Create the resources

[root@master ~]# kubectl apply -f postgre.yaml 

(11) Check the PV and PVC

[root@master ~]# kubectl get pv

[root@master ~]# kubectl get pvc -n sonarqube

(12) Check the pods and services

[root@master ~]# kubectl get pod,svc -n sonarqube
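As a quick sanity check that the database is up and accepting connections, psql can be run inside the pod (a sketch; it relies on the official postgres image allowing local socket connections without a password):

[root@master ~]# kubectl -n sonarqube exec deploy/postgres-sonar -- psql -U sonarUser -d sonarDB -c '\conninfo'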

(13) Check in Kuboard

Workloads

Pods

Services

(14) Create the PV for sonarqube

[root@master ~]# vim pv-sonar.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sonar
spec:
  capacity:
    storage: 10Gi    # capacity
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce     # access mode: read/write by a single node
  persistentVolumeReclaimPolicy: Retain  # reclaim policy; Retain means manual reclamation
  storageClassName: "pv-sonar"       # storage class name (matched by the PVC)
  nfs:
    path: /opt/sonar   # shared path on the NFS server
    server: 192.168.204.8    # NFS server address

(15) Create the resource

[root@master ~]# kubectl apply -f pv-sonar.yaml 

(16) Check the PV

[root@master ~]# kubectl get pv

(17) Pull the image

On node1:

[root@node1 ~]# docker pull sonarqube:lts

(18) Export the image

[root@node1 ~]# docker save -o sonar.tar sonarqube:lts

(19) Copy the Docker image to node2

[root@node1 ~]# scp sonar.tar root@node2:~ 

(20) Load the Docker image on node2

[root@node2 ~]# docker load -i sonar.tar 

(21) Deploy sonarqube

[root@master ~]# vim sonar.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-pvc
  namespace: sonarqube
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "pv-sonar"
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
  labels:
    app: sonarqube
  namespace: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
      containers:
        - name: sonarqube
          image: sonarqube:lts
          ports:
            - containerPort: 9000
          env:
            - name: SONARQUBE_JDBC_USERNAME
              value: "sonarUser"
            - name: SONARQUBE_JDBC_PASSWORD
              value: "123456"
            - name: SONARQUBE_JDBC_URL
              value: "jdbc:postgresql://postgres-sonar:5432/sonarDB"  #postgres-sonar改成集群IP
          livenessProbe:
            httpGet:
              path: /sessions/new
              port: 9000
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /sessions/new
              port: 9000
            initialDelaySeconds: 60
            periodSeconds: 30
            failureThreshold: 6
          resources:
            limits:
              cpu: 2000m
              memory: 2048Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          volumeMounts:
            - mountPath: /opt/sonarqube/conf
              name: data
              subPath: conf
            - mountPath: /opt/sonarqube/data
              name: data
              subPath: data
            - mountPath: /opt/sonarqube/extensions
              name: data
              subPath: extensions
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: sonarqube-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    app: sonarqube
spec:
  type: NodePort
  ports:
    - name: sonarqube
      port: 9000
      targetPort: 9000
      nodePort: 30090
      protocol: TCP
  selector:
    app: sonarqube
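The cluster IP mentioned in the SONARQUBE_JDBC_URL comment can be read from the postgres-sonar Service and substituted into the JDBC URL (a sketch):

[root@master ~]# kubectl get svc postgres-sonar -n sonarqube -o jsonpath='{.spec.clusterIP}'
# e.g. SONARQUBE_JDBC_URL becomes jdbc:postgresql://<cluster IP>:5432/sonarDB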

(22) Create the resources

[root@master ~]# kubectl apply -f sonar.yaml 

(23) Check the PV and PVC

[root@master ~]# kubectl get pv

[root@master ~]#  kubectl get pvc -n sonarqube


 

(24) Check the pods and services

[root@master ~]# kubectl get pod,svc -n sonarqube

(25) Deploy the Ingress

[root@master ~]# vim ingress-sonar.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-sonar
  namespace: sonarqube
spec:
  ingressClassName: "nginx"
  rules:
  - host: sonarqube.site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sonarqube
            port:
              number: 9000

(26) Create the resource

[root@master ~]# kubectl apply -f ingress-sonar.yaml 

(27) Check the Ingress

[root@master ~]# kubectl get ingress -n sonarqube

(28) Detailed view

[root@master ~]# kubectl describe ingress ingress-sonar -n sonarqube
Name:             ingress-sonar
Labels:           <none>
Namespace:        sonarqube
Address:          10.101.23.182
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host            Path  Backends
  ----            ----  --------
  sonarqube.site  
                  /   sonarqube:9000 (10.244.166.129:9000)
Annotations:      <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    72s (x2 over 86s)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    72s (x2 over 86s)  nginx-ingress-controller  Scheduled for sync

(29) Check in Kuboard

Ingress routes

Details

(30) Edit the hosts file on the master node

[root@master ~]# vim /etc/hosts
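The entry maps the Ingress host name to a node IP on which the ingress-nginx NodePort is reachable, for example (a sketch assuming the master node's IP is used):

192.168.204.8  sonarqube.site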

(31) Check the exposed port

The ingress-nginx-controller NodePort is 31820.
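The NodePort can be confirmed from the controller Service (a sketch; it assumes the controller is installed in the ingress-nginx namespace with the default Service name):

[root@master ~]# kubectl get svc ingress-nginx-controller -n ingress-nginx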

(32) Test with curl

[root@master ~]# curl sonarqube.site:31820

(33) Edit the hosts file on the physical host

(34) Access the system

http://sonarqube.site:31820

(35) Enter the username and password

Username: admin

Password: admin

(36) Set a new password

A prompt to change the password pops up; enter and confirm the new password.

(37) Log in to the system

(38) Other ways to deploy SonarQube

See my earlier post:

Continuous Integration and Delivery (CI/CD): Installing SonarQube 9.6 on CentOS 7 (blog post)

二、Problems

1. Chart installation of SonarQube fails

(1) Error

Error: INSTALLATION FAILED: cannot load values.yaml: error converting YAML to JSON: yaml: line 67: mapping values are not allowed in this context

(2) Cause

A YAML formatting error in values.yaml.

(3) Solution

Fix the configuration file.

Before:

After:

2. SonarQube deployment on K8S fails

(1) Error

Check the pods and services:

Check the deployment:

(2) Analysis

The JDBC connection to postgresql failed: in value: "jdbc:postgresql://postgres-sonar:5432/sonarDB", postgres-sonar must be replaced with the Service's cluster IP.

(3) Solution

Edit the deployment:

[root@master ~]# kubectl edit deploy sonarqube  -n sonarqube
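Instead of editing the deployment interactively, the environment variable can also be patched directly (a sketch; <cluster IP> is the value read from the postgres-sonar Service):

[root@master ~]# kubectl -n sonarqube set env deploy/sonarqube SONARQUBE_JDBC_URL='jdbc:postgresql://<cluster IP>:5432/sonarDB'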

After the change:

Success:

[root@master ~]# kubectl get pod,svc -n sonarqube