Deploying the Kubernetes Dashboard with Helm

Date: 2022-06-01 17:02:22

Kubernetes Dashboard is a web UI for managing a Kubernetes cluster. The code is hosted on GitHub: https://github.com/kubernetes/dashboard

Installation

The values file, kubernetes-dashboard.yaml:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.hongda.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: hongda-com-tls-secret
      hosts:
        - k8s.hongda.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

Compared with the chart's default values, the following settings were changed:

  • ingress.enabled - set to true to expose the Kubernetes Dashboard service through an Ingress, so it can be reached from a browser
  • ingress.annotations - the annotations target the NGINX Ingress Controller, which must be installed to reverse-proxy the Dashboard; because the Dashboard backend listens over HTTPS while the controller forwards requests over plain HTTP by default, the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation (the successor to the older secure-backends annotation) tells the controller to forward to the backend over HTTPS
  • ingress.hosts - replaced with the domain the certificate was issued for
  • ingress.tls - secretName set to the name of the Secret holding the free certificate generated by cert-manager, and hosts set to the certificate's domain
  • rbac.clusterAdminRole - set to true to grant the dashboard broad (cluster-admin) permissions, which makes working across multiple namespaces convenient
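
The ingress.tls section assumes a Secret named hongda-com-tls-secret already exists (issued by cert-manager in this setup). If cert-manager is not available, a self-signed certificate can stand in for testing; a minimal sketch, using the hostname and Secret name from the values above (browsers will warn about the untrusted certificate):

```shell
# Generate a self-signed certificate for the ingress host (testing only):
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=k8s.hongda.com"

# Create the Secret that ingress.tls.secretName points at
# (guarded so this is a no-op on a machine without kubectl / a live cluster):
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n kube-system create secret tls hongda-com-tls-secret \
    --cert=tls.crt --key=tls.key || true
fi
```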

Install with Helm (Helm 2 syntax: -n sets the release name):

helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system  \
-f kubernetes-dashboard.yaml

Output:

[root@master /]# helm install stable/kubernetes-dashboard \
> -n kubernetes-dashboard \
> --namespace kube-system  \
> -f kubernetes-dashboard.yaml
NAME:   kubernetes-dashboard
LAST DEPLOYED: Mon Jul 29 16:14:20 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                   READY  STATUS             RESTARTS  AGE
kubernetes-dashboard-64f97ccb4f-nbpkx  0/1    ContainerCreating  0         <invalid>

==> v1/Secret
NAME                  TYPE    DATA  AGE
kubernetes-dashboard  Opaque  0     <invalid>

==> v1/Service
NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
kubernetes-dashboard  ClusterIP  10.101.156.153  <none>       443/TCP  <invalid>

==> v1/ServiceAccount
NAME                  SECRETS  AGE
kubernetes-dashboard  1        <invalid>

==> v1beta1/ClusterRoleBinding
NAME                  AGE
kubernetes-dashboard  <invalid>

==> v1beta1/Deployment
NAME                  READY  UP-TO-DATE  AVAILABLE  AGE
kubernetes-dashboard  0/1    1           0          <invalid>

==> v1beta1/Ingress
NAME                  HOSTS            ADDRESS  PORTS  AGE
kubernetes-dashboard  k8s.frognew.com  80, 443  <invalid>


NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
From outside the cluster, the server URL(s) are:
     https://k8s.frognew.com

Look up the dashboard's service-account token:

[root@master /]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-mmr4w                 kubernetes.io/service-account-token   3      18s
[root@master /]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-mmr4w
Name:         kubernetes-dashboard-token-mmr4w
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 03b7dd9a-6f40-4f20-9a0d-7808158c7225

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1tbXI0dyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjAzYjdkZDlhLTZmNDAtNGYyMC05YTBkLTc4MDgxNThjNzIyNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.baCCzlyMQiJ-cXsFrR8wR7iIN4eYKfNhoMTOJFK4Qc-jcE89zc2LC8Jg5TtuSzU89VwsOGd2bPzwhNm3w0rOJCdDuMUdhrYQwk4n25K4uMs0BTnRKVM6JZCplJxYd4E7MBftKFLuvOl0efLm3xFeBB_DUS-iHJJNAnFGVAg0Lr5Ea55fstzKumRL9Xl0eckVS6L9QI7mSniiMid1lMElq2xKgjdlk4UwV6ODI9hDS1eo3lZ80pRRcCogAuhCiqjSzj1FXjXaRl9fzm0udK0hPdBVNBAoyVKaM-IULlGudeQYe6Brk1lMf-f3d1J0fTjYwgUsv-1RhehIdUwRKp20MA
[root@master /]# 
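
Instead of copying the token out of `kubectl describe` by hand, it can be extracted in one step with jsonpath. Note that the .data fields of a Secret are base64-encoded and must be decoded before use. A sketch, assuming the secret-naming pattern shown above (the kubectl calls are guarded so the snippet is a no-op without a cluster):

```shell
if command -v kubectl >/dev/null 2>&1; then
  # Secret names carry a random suffix, so resolve the name first:
  SECRET=$(kubectl -n kube-system get secret -o name 2>/dev/null \
    | grep kubernetes-dashboard-token || true)
  if [ -n "$SECRET" ]; then
    # jsonpath + base64 -d yields the raw token in one step:
    kubectl -n kube-system get "$SECRET" \
      -o jsonpath='{.data.token}' | base64 -d
  fi
fi

# The same decoding applies to any field under .data, e.g.:
printf 'a3ViZS1zeXN0ZW0=' | base64 -d   # prints "kube-system"
echo
```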

Check the pods:

[root@master /]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS             RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gts57                1/1     Running            1          3d6h   10.244.2.2      slaver2   <none>           <none>
coredns-5c98db65d4-qhwrw                1/1     Running            1          3d6h   10.244.1.2      slaver1   <none>           <none>
etcd-master                             1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-apiserver-master                   1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-controller-manager-master          1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kube-flannel-ds-amd64-2lwl8             1/1     Running            0          3d1h   18.16.202.227   slaver1   <none>           <none>
kube-flannel-ds-amd64-9bjck             1/1     Running            0          3d1h   18.16.202.95    slaver2   <none>           <none>
kube-flannel-ds-amd64-gxxqg             1/1     Running            0          3d1h   18.16.202.163   master    <none>           <none>
kube-proxy-8cwj4                        1/1     Running            0          107m   18.16.202.163   master    <none>           <none>
kube-proxy-j9zpz                        1/1     Running            0          107m   18.16.202.227   slaver1   <none>           <none>
kube-proxy-vfgjv                        1/1     Running            0          107m   18.16.202.95    slaver2   <none>           <none>
kube-scheduler-master                   1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kubernetes-dashboard-64f97ccb4f-nbpkx   0/1     ImagePullBackOff   0          33m    10.244.0.4      master    <none>           <none>
tiller-deploy-6787c946f8-6b5tv          1/1     Running            0          44m    10.244.1.4      slaver1   <none>           <none>

Troubleshooting

Check the chart version available in the repo:

[root@master /]# helm search kubernetes-dashboard
NAME                        CHART VERSION   APP VERSION DESCRIPTION                                   
stable/kubernetes-dashboard 0.6.0           1.8.3       General-purpose web UI for Kubernetes clusters

This looks like a version mismatch: the newest app version in the Aliyun repo is 1.8.3, while the values file pins image tag v1.10.1, so the image could not be pulled.

Add a new repository source:

[root@master /]# helm repo add stable http://mirror.azure.cn/kubernetes/charts/
"stable" has been added to your repositories
[root@master /]# helm search kubernetes-dashboard
NAME                        CHART VERSION   APP VERSION DESCRIPTION                                   
stable/kubernetes-dashboard 1.8.0           1.10.1      General-purpose web UI for Kubernetes clusters

After switching repositories and reinstalling, the same problem persists. Inspect further:

[root@master /]# kubectl get namespace
NAME              STATUS   AGE
default           Active   3d8h
ingress-nginx     Active   152m
kube-node-lease   Active   3d8h
kube-public       Active   3d8h
kube-system       Active   3d8h

[root@master /]# kubectl describe pod kubernetes-dashboard-7ffdf885d6-t4htt -n kube-system
Name:           kubernetes-dashboard-7ffdf885d6-t4htt
Namespace:      kube-system
Priority:       0
Node:           master/18.16.202.163
Start Time:     Wed, 31 Jul 2019 16:46:40 +0800
Labels:         app=kubernetes-dashboard
                kubernetes.io/cluster-service=true
                pod-template-hash=7ffdf885d6
                release=kubernetes-dashboard
Annotations:    <none>
Status:         Pending
IP:             10.244.0.20
Controlled By:  ReplicaSet/kubernetes-dashboard-7ffdf885d6
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-pph4g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-pph4g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-pph4g
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  node-role.kubernetes.io/edge=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node-role.kubernetes.io/master:PreferNoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m47s                default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-7ffdf885d6-t4htt to master
  Normal   Pulling    89s (x4 over 3m45s)  kubelet, master    Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Error: ErrImagePull
  Normal   BackOff    61s (x6 over 3m30s)  kubelet, master    Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     46s (x7 over 3m30s)  kubelet, master    Error: ImagePullBackOff

Clearly the image is being pulled from k8s.gcr.io, which is unreachable from this network, so the pull can never succeed.

Solution

Pull the same version from Docker Hub and re-tag it as the k8s.gcr.io image, so the kubelet finds it in the node's local image cache.

Pull:

docker pull sacred02/kubernetes-dashboard-amd64:v1.10.1

Re-tag:

docker tag sacred02/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Remove the now-redundant mirror tag:

docker rmi sacred02/kubernetes-dashboard-amd64:v1.10.1
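
One caveat: pull/tag/rmi only populates the local image cache of the node it runs on. With the nodeSelector above, the pod is pinned to one node, but mirroring the image on every node is safer. A sketch that prints the per-node command (node names are from this cluster's pod listing; this is a dry run — replace `echo` with `ssh "$node"` to actually execute on each node):

```shell
MIRROR=sacred02/kubernetes-dashboard-amd64   # the Docker Hub mirror used above
TARGET=k8s.gcr.io/kubernetes-dashboard-amd64
TAG=v1.10.1

for node in master slaver1 slaver2; do
  cmd="docker pull $MIRROR:$TAG && docker tag $MIRROR:$TAG $TARGET:$TAG && docker rmi $MIRROR:$TAG"
  # Dry run: print what would be executed on each node.
  echo "[$node] $cmd"
done
```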

Install with Helm again:

helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system  -f kubernetes-dashboard.yaml

Check:

[root@master /]# helm ls
NAME                    REVISION    UPDATED                     STATUS      CHART                       APP VERSION NAMESPACE    
kubernetes-dashboard    1           Wed Jul 31 17:11:35 2019    DEPLOYED    kubernetes-dashboard-1.8.0  1.10.1      kube-system  
nginx-ingress           1           Wed Jul 31 13:59:14 2019    DEPLOYED    nginx-ingress-1.11.5        0.25.0      ingress-nginx
 
[root@master /]# kubectl get pods -n kube-system |grep dashboard
kubernetes-dashboard-848b8dd798-p44qt   1/1     Running   0          5m2s

Look up the token:

[root@master /]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-4v624                 kubernetes.io/service-account-token   3      5m42s
[root@master /]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-4v624
Name:         kubernetes-dashboard-token-4v624
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6688cc3b-5f28-4e38-a37a-67c0927752ab

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi00djYyNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2ODhjYzNiLTVmMjgtNGUzOC1hMzdhLTY3YzA5Mjc3NTJhYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Wq6xvzLSJNnt9Zg9u5J-85RB0-Slf6HMFfHzNwDGJDn3Yc2lfxL88YXi0ForX4Q9F0v96nt_GNKOm6DB8FGoKR3cALeWpeuoXSSY_ryY8tj6KFN1mrOlvVnRRgsk_lReOxLZexvR58OQ7N04pDrZ6Okr3PDB22i-31xPaVPBt6BhZU5ee6VZyXr7y3pj8VAJSki7tnr7ZRlG6WJizrMf25sZ9xdznwcGJ7yGz2gD3moYhNKQa5KPwcLOGTfg3GuLUNoQjdz5wUmvx4X2YMhfj6Fx7I3mZzr9whrfhO2PWuNtFheaKscSg2UyIPH5Zav9WTSzXxDedORh8BjX3cUJcQ

Check that k8s.hongda.com resolves:

[root@master /]# ping k8s.hongda.com
PING k8s.hongda.com (13.209.58.121) 56(84) bytes of data.
From 18.16.202.169 (18.16.202.169): icmp_seq=2 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
From 18.16.202.169 (18.16.202.169): icmp_seq=3 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
^C
--- k8s.hongda.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2002ms
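
The ping shows k8s.hongda.com resolving to a public IP (13.209.58.121) rather than this cluster, so a local override is needed for the ingress hostname. A sketch that builds a hosts entry pointing at a cluster node reachable through the NGINX Ingress Controller (using the master's IP from the node list above is an assumption; appending to /etc/hosts needs root, so that step is left commented):

```shell
# Point the ingress hostname at a node fronted by nginx-ingress:
INGRESS_IP=18.16.202.163   # master's IP, taken from the pod listing above
HOST=k8s.hongda.com
ENTRY="$INGRESS_IP $HOST"
echo "$ENTRY"

# As root, append the entry if it is not already present:
# grep -qF "$ENTRY" /etc/hosts || echo "$ENTRY" >> /etc/hosts
# Afterwards the dashboard should answer at https://k8s.hongda.com
# (the certificate in hongda-com-tls-secret must match this hostname).
```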

References:

Installing Kubernetes 1.15 with kubeadm

One-click deployment of the Kubernetes Dashboard with Helm, with free HTTPS enabled