Deploying a Kubernetes Cluster with kubeadm (1.9.2)

Date: 2022-12-22 14:16:19

kubeadm is currently in beta, and the official site says a GA release will ship in 2018. From my own installation experience, it is dramatically less work than a fully manual install!

1. Environment List

1.1. System List

IP               Hostname     Role     OS
192.168.119.160  k8s-master   Master   CentOS 7
192.168.119.161  k8s-node-1   Node     CentOS 7

1.2. Software List

Per the official documentation these should be installed via YUM, but for well-known reasons I had to obtain the rpm packages through other channels. Download links for the rpms used here are provided at the end of the article.

  • kubelet-1.9.2-0.x86_64.rpm
  • kubectl-1.9.2-0.x86_64.rpm
  • kubeadm-1.9.2-0.x86_64.rpm
  • kubernetes-cni-0.6.0-0.x86_64.rpm
  • socat-1.7.3.2-2.el7.x86_64.rpm

1.3. Image List

As with the software list, the image files were obtained via *; the images used in this article are also attached at the end.

1.3.1. Required Images

  • gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
  • gcr.io/google_containers/kube-proxy-amd64:v1.9.2
  • gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
  • gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
  • gcr.io/google_containers/etcd-amd64:3.1.11
  • gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
  • gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
  • gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
  • gcr.io/google_containers/pause-amd64:3.0

1.3.2. CNI Images

Choose either Calico or flannel; according to benchmark data published online, Calico is more efficient than flannel.

1.3.2.1. Calico
  • quay.io/calico/node:v2.6.7
  • quay.io/calico/kube-controllers:v1.0.3
  • quay.io/coreos/etcd:v3.1.10
1.3.2.2. flannel
  • quay.io/coreos/flannel:v0.9.1-amd64

1.3.3. Dashboard (optional)

  • k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.2

2. Deployment Steps

2.1. Environment Preparation

1. Disable and stop the firewall

[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# systemctl stop firewalld

[root@k8s-node-1 ~]# systemctl disable firewalld
[root@k8s-node-1 ~]# systemctl stop firewalld

If you do not want to disable the firewall, you can instead open the ports listed under "Check required ports" in the official docs. It is disabled here outright to avoid any pitfalls.
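
If you go that route, opening the master-side ports looks roughly like this (a sketch based on the 1.9-era docs: 6443 for the API server, 2379-2380 for etcd, 10250-10252 for kubelet, controller manager, and scheduler):

### Open the master's required ports instead of disabling firewalld ###
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload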

2. Disable SELinux

[root@k8s-master ~]# setenforce 0

[root@k8s-node-1 ~]# setenforce 0

This disables SELinux only for the current boot; it is re-enabled after a reboot.
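
To make the change survive reboots as well, also edit /etc/selinux/config (a sketch; it assumes the stock SELINUX=enforcing entry):

### Persist across reboots: switch SELinux to permissive in its config file ###
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config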

3. Turn off swap

[root@k8s-master ~]# swapoff -a

[root@k8s-node-1 ~]# swapoff -a

This turns swap off only for the current boot; it comes back after a reboot.
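
To keep swap off after a reboot as well, comment out the swap entry in /etc/fstab (a sketch; it assumes the default layout with the swap device on its own line):

### Persist across reboots: comment out every swap mount in fstab ###
sed -i '/ swap / s/^/#/' /etc/fstab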

4. Set kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
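
If sysctl --system complains that the net.bridge keys do not exist, the br_netfilter module is probably not loaded yet; the sketch below loads it and confirms both values:

### Load the bridge netfilter module and verify the two settings ###
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables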

2.2. Master Node Installation

1. Install Docker

### Check which docker version yum offers ###
[root@k8s-master ~]# yum list all docker
### Install docker ###
[root@k8s-master ~]# yum install -y docker
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker

[root@k8s-node-1 ~]# yum install -y docker
[root@k8s-node-1 ~]# systemctl enable docker
[root@k8s-node-1 ~]# systemctl start docker

Docker's latest release is already 17.12, but the Kubernetes site still recommends the 1.12 line.
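
If yum offers several versions and you want to stay on the recommended 1.12 line, you can pin the version at install time (a sketch; the exact version string depends on what your repo carries):

### List every docker version in the repo, then install a specific one ###
yum list docker --showduplicates
yum install -y docker-1.12.6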

2. Import the images

[root@k8s-master images]# ll
total 1413592
-rw-------. 1 root root  71051264 Feb  6 16:09 cni.tar
-rw-------. 1 root root 194103296 Feb  5 10:20 etcd-amd64.tar
-rw-------. 1 root root  34851328 Feb  6 16:10 etcd.tar
-rw-------. 1 root root  52185600 Feb  5 17:19 flannel.tar
-rw-------. 1 root root  41241088 Feb  5 16:10 k8s-dns-dnsmasq-nanny-amd64.tar
-rw-------. 1 root root  50545152 Feb  5 16:09 k8s-dns-kube-dns-amd64.tar
-rw-------. 1 root root  42302976 Feb  5 16:09 k8s-dns-sidecar-amd64.tar
-rw-------. 1 root root 210658816 Feb  5 10:19 kube-apiserver-amd64.tar
-rw-------. 1 root root 138048000 Feb  5 10:23 kube-controller-manager-amd64.tar
-rw-------. 1 root root  52507136 Feb  6 16:09 kube-controllers.tar
-rw-------. 1 root root 110955008 Feb  5 15:27 kube-proxy-amd64.tar
-rw-------. 1 root root 102796288 Feb  6 09:14 kubernetes-dashboard-amd64.tar
-rw-------. 1 root root  62937600 Feb  5 10:24 kube-scheduler-amd64.tar
-rw-------. 1 root root 282539008 Feb  6 16:08 node.tar
-rw-------. 1 root root    765440 Feb  5 14:00 pause-amd64.tar

### Import the images ###
[root@k8s-master images]# docker load -i cni.tar
                  .
                  .
                  .
[root@k8s-master images]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                                              v2.6.7              7c694b9cac81        8 days ago          281.6 MB
gcr.io/google_containers/kube-controller-manager-amd64           v1.9.2              769d889083b6        3 weeks ago         137.8 MB
gcr.io/google_containers/kube-proxy-amd64                        v1.9.2              e6754bb0a529        3 weeks ago         109.1 MB
gcr.io/google_containers/kube-apiserver-amd64                    v1.9.2              7109112be2c7        3 weeks ago         210.4 MB
gcr.io/google_containers/kube-scheduler-amd64                    v1.9.2              2bf081517538        3 weeks ago         62.71 MB
quay.io/calico/kube-controllers                                  v1.0.3              34aebe64326d        3 weeks ago         52.25 MB
k8s.gcr.io/kubernetes-dashboard-amd64                            v1.8.2              c87ea0497294        3 weeks ago         102.3 MB
quay.io/calico/cni                                               v1.11.2             6f0a76fc7dd2        7 weeks ago         70.78 MB
gcr.io/google_containers/etcd-amd64                              3.1.11              59d36f27cceb        9 weeks ago         193.9 MB
quay.io/coreos/flannel                                           v0.9.1-amd64        2b736d06ca4c        12 weeks ago        51.31 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64                   1.14.7              db76ee297b85        3 months ago        42.03 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64                  1.14.7              5d049a8c4eec        3 months ago        50.27 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64             1.14.7              5feec37454f4        3 months ago        40.95 MB
quay.io/coreos/etcd                                              v3.1.10             47bb9dd99916        6 months ago        34.56 MB
gcr.io/google_containers/pause-amd64                             3.0                 99e59f495ffa        21 months ago       746.9 kB

If you have a private registry you can push the images there; otherwise the import must be run on every Docker node.
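
Since there are quite a few tarballs, a small shell loop saves repeating docker load by hand (a sketch; run it from the directory holding the .tar files):

### Load every image tarball in the current directory ###
for f in *.tar; do docker load -i "$f"; done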

3. Install kubeadm, kubelet, and kubectl

[root@k8s-master rpm]# ll
-rw-r--r--. 1 root root 17599234 Feb  5 08:54 kubelet-1.9.2-0.x86_64.rpm
-rw-r--r--. 1 root root  9310530 Feb  5 08:54 kubectl-1.9.2-0.x86_64.rpm
-rw-r--r--. 1 root root 17248822 Feb  5 08:54 kubeadm-1.9.2-0.x86_64.rpm
-rw-r--r--. 1 root root  9008838 Feb  5 08:54 kubernetes-cni-0.6.0-0.x86_64.rpm
-rw-r--r--. 1 root root   296632 Feb  5 15:01 socat-1.7.3.2-2.el7.x86_64.rpm

[root@k8s-master rpm]# yum install -y *.rpm

[root@k8s-master ~]# systemctl enable kubelet
[root@k8s-master ~]# systemctl start kubelet

Install these packages on all of your nodes.

4. Initialize the master node with kubeadm

[root@k8s-master rpm]# kubeadm init --kubernetes-version=v1.9.2 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.119.160
[init] Using Kubernetes version: v1.9.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.119.160]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.002003 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 601c94.33b837c25e090b06
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 601c94.33b837c25e090b06 192.168.119.160:6443 --discovery-token-ca-cert-hash sha256:8f0493912d5b74f1334382a29ad4c934ae28457c33e230f376f9fe2c8eac035b

[root@k8s-master images]# HOME=~
[root@k8s-master images]# mkdir -p $HOME/.kube
[root@k8s-master images]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master images]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note that --pod-network-cidr should be 10.244.0.0/16 when using flannel and 192.168.0.0/16 when using Calico. If your physical nodes sit inside that range, pick a different one; my hosts live in 192.168.119.0/24, which would collide with Calico's default, so I used 172.16.0.0/16 instead.
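
For comparison, a flannel-based initialization of the same master would look like this (a sketch; only the CIDR differs, and the manifest URL is the one flannel's repo published at the time):

### Initialize for flannel and apply its manifest ###
kubeadm init --kubernetes-version=v1.9.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.119.160
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml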

5. Check the status of the k8s components

[root@k8s-master images]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"} 

6. Install Calico

[root@k8s-master yml]# wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

[root@k8s-master yml]# kubectl apply -f calico.yaml 
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
deployment "calico-policy-controller" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

[root@k8s-master yml]# kubectl get pods -n kube-system
NAME                                      READY     STATUS    RESTARTS   AGE
calico-etcd-f25h4                         1/1       Running   0          2m
calico-kube-controllers-d554689d5-c74c5   1/1       Running   0          2m
calico-node-766z4                         2/2       Running   0          2m
etcd-k8s-master                           1/1       Running   0          1m
kube-apiserver-k8s-master                 1/1       Running   0          1m
kube-controller-manager-k8s-master        1/1       Running   0          1m
kube-dns-6f4fd4bdf-m7vpm                  3/3       Running   0          51m
kube-proxy-qfrh7                          1/1       Running   0          51m
kube-scheduler-k8s-master                 1/1       Running   0          1m

kube-dns will turn Running within a few dozen seconds of the Calico deployment finishing.
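
Rather than re-running kubectl get pods, you can watch the rollout until everything reports Running:

### Watch kube-system pods until kube-dns settles ###
kubectl get pods -n kube-system -w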

The WARNING in the output is probably because cri-tools is not installed; for now it has no practical effect. See https://github.com/kubernetes-incubator/cri-tools for details.

2.3. Node Installation

[root@k8s-node-1 ~]# kubeadm join --token 9d4c07.eb8bcbcc6710232d 192.168.119.160:6443 --discovery-token-ca-cert-hash sha256:8f0493912d5b74f1334382a29ad4c934ae28457c33e230f376f9fe2c8eac035b
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.119.160:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.119.160:6443"
[discovery] Requesting info from "https://192.168.119.160:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.119.160:6443"
[discovery] Successfully established connection with API Server "192.168.119.160:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@k8s-master yml]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    57m       v1.9.2
k8s-node-1   Ready     <none>    31s       v1.9.2

[root@k8s-master yml]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   calico-etcd-f25h4                         1/1       Running   0          7m        192.168.119.160   k8s-master
kube-system   calico-kube-controllers-d554689d5-c74c5   1/1       Running   0          7m        192.168.119.160   k8s-master
kube-system   calico-node-766z4                         2/2       Running   0          7m        192.168.119.160   k8s-master
kube-system   calico-node-rdtr2                         2/2       Running   1          54s       192.168.119.161   k8s-node-1
kube-system   etcd-k8s-master                           1/1       Running   0          7m        192.168.119.160   k8s-master
kube-system   kube-apiserver-k8s-master                 1/1       Running   0          7m        192.168.119.160   k8s-master
kube-system   kube-controller-manager-k8s-master        1/1       Running   0          7m        192.168.119.160   k8s-master
kube-system   kube-dns-6f4fd4bdf-m7vpm                  3/3       Running   0          57m       192.168.235.198   k8s-master
kube-system   kube-proxy-qfrh7                          1/1       Running   0          57m       192.168.119.160   k8s-master
kube-system   kube-proxy-rbsss                          1/1       Running   0          54s       192.168.119.161   k8s-node-1
kube-system   kube-scheduler-k8s-master                 1/1       Running   0          7m        192.168.119.160   k8s-master

kubeadm init prints a ready-made kubeadm join command at the end; run it on each node as-is. If you cleared the screen and no longer have it recorded, regenerate it with: kubeadm token create --print-join-command
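
If you only lost the CA hash, it can be recomputed from the master's CA certificate (a sketch following the standard openssl pipeline):

### Recompute the --discovery-token-ca-cert-hash value ###
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'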

2.4. Deploy the Dashboard

[root@k8s-master yml]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master yml]# kubectl apply -f kubernetes-dashboard.yaml

By default, the deployed kubernetes-dashboard service account is bound to the kubernetes-dashboard-minimal role, so we need to create an account with broader permissions.

In the yml attached to this article, the Dashboard Service type has already been changed to NodePort.
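
If you are working from the unmodified upstream yaml instead, one quick way to switch the Service type is kubectl patch (a sketch; it assumes the Service keeps its upstream name kubernetes-dashboard in kube-system):

### Flip the dashboard Service from ClusterIP to NodePort ###
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'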

[root@k8s-master yml]# vi kubernetes-dashboard-rbac-admin.yml 

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

[root@k8s-master yml]# kubectl apply -f kubernetes-dashboard-rbac-admin.yml 

We log in to the dashboard with the kubernetes-dashboard-admin account we just created, using token-based login.

### Find the secret for kubernetes-dashboard-admin ###
[root@k8s-master yml]# kubectl get secret --all-namespaces | grep kubernetes-dashboard-admin
kube-system   kubernetes-dashboard-admin-token-b5np5           kubernetes.io/service-account-token   3         22h

### Read the token from the secret ###
[root@k8s-master yml]# kubectl describe secret kubernetes-dashboard-admin-token-b5np5 -n kube-system
Name:         kubernetes-dashboard-admin-token-b5np5
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=b27052cb-0ef6-11e8-9845-005056322159

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1iNW5wNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImIyNzA1MmNiLTBlZjYtMTFlOC05ODQ1LTAwNTA1NjMyMjE1OSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.32kMrJe4DocRmDX66_gYaFS28RPICV-VHKJHuVKFwNfnhy-mq1mQceMdf23Xvd7LbdnkvQgOeIcZPGyf06Y1nAMMna7zVPYmZDHpXINb4sL-m45gaRoU3OELdPD1iwMIq4bpE1wo_sbctitUDvrCarQ3ZVeVCmv92qi0TMceAZ4Q7_RXNYL4U6LolZc6IXVij0aWBJtuPoMjUyamsET8gDNluSARfldt37BxKc80sE5R5CMQwEykvrq9Db7eYKFjEY5Tee4_10eobynzgaFAeJkAUNs0wuT42M2x1JpzYNs6u8X34aIRoiZaFGoOGBnWyaicCRfIxcOxL37m8-_p7g
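
If you only need the token itself, a jsonpath query avoids picking it out of the describe output (a sketch; substitute your own secret name, since the random suffix differs per cluster):

### Print just the decoded token ###
kubectl -n kube-system get secret kubernetes-dashboard-admin-token-b5np5 -o jsonpath='{.data.token}' | base64 -d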

Find the Dashboard's NodePort and log in with the token above.

[root@k8s-master yml]# kubectl get svc --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP         2d
kube-system   calico-etcd            ClusterIP   10.96.232.136   <none>        6666/TCP        2d
kube-system   kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   2d
kube-system   kubernetes-dashboard   NodePort    10.107.2.229    <none>        443:31879/TCP   22h

Login URL: https://192.168.119.160:31879/

2.5. Resetting / Removing a Node

### Drain the pods off k8s-node-1 ###
[root@k8s-master ~]# kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets

### Delete the node ###
[root@k8s-master ~]# kubectl delete node k8s-node-1

### Reset the node ###
[root@k8s-node-1 ~]# kubeadm reset

Run kubeadm reset on the node being removed. After a reset, some virtual network devices are left behind, e.g. Calico's tunl0@NONE, which disappears after a system reboot; the flannel- and cni-related interfaces can be removed with ip link delete, as sketched below.
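
For example, on a node that ran flannel the leftover interfaces would be cleaned up like this (a sketch; device names can differ on your hosts):

### Remove leftover flannel/cni network devices ###
ip link delete cni0
ip link delete flannel.1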

3. Attachments

网盘地址:https://pan.baidu.com/s/1i6Qbo4d

The attachments contain the rpms, images, and yml files needed for the deployment. Unpack each image archive with tar -zxvf; this yields the .tar file to feed to docker load.

The network parameter in the Calico yml has been changed to the 172.16.0.0/16 range.

The Dashboard yml changes the Service exposure type to NodePort.

4. References

https://kubernetes.io/docs/setup/independent/install-kubeadm/

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

5. Addendum

By default, the master node does not take workload, i.e. ordinary application Pods are not scheduled onto it. To lift this restriction:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    13d       v1.9.2
k8s-node-1   Ready     <none>    13d       v1.9.2
k8s-node-2   Ready     <none>    13d       v1.9.2
k8s-node-3   Ready     <none>    8d        v1.9.2
[root@k8s-master ~]# kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-master
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"1e:9d:38:aa:3e:38"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=192.168.119.160
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:  Wed, 28 Feb 2018 12:46:57 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Tue, 13 Mar 2018 13:07:20 +0800   Wed, 28 Feb 2018 12:46:53 +0800   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Tue, 13 Mar 2018 13:07:20 +0800   Wed, 28 Feb 2018 12:46:53 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 13 Mar 2018 13:07:20 +0800   Wed, 28 Feb 2018 12:46:53 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Tue, 13 Mar 2018 13:07:20 +0800   Wed, 28 Feb 2018 12:47:27 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.119.160
  Hostname:    k8s-master
Capacity:
 cpu:     1
 memory:  1867264Ki
 pods:    110
Allocatable:
 cpu:     1
 memory:  1764864Ki
 pods:    110
System Info:
 Machine ID:                 22ff211a91a04587817a950a91509124
 System UUID:                A6AF4D56-94B2-32F7-A02F-5DB983CABEC3
 Boot ID:                    fdc7e851-6417-435c-94a1-ec7db677337f
 Kernel Version:             3.10.0-514.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.9.2
 Kube-Proxy Version:         v1.9.2
PodCIDR:                     10.244.0.0/24
ExternalID:                  k8s-master
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                     ------------  ----------  ---------------  -------------
  kube-system                etcd-k8s-master                          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-k8s-master                250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-k8s-master       200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-6f4fd4bdf-mwddx                 260m (26%)    0 (0%)      110Mi (6%)       170Mi (9%)
  kube-system                kube-flannel-ds-qrkwd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-g47nx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-k8s-master                100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kubernetes-dashboard-845747bdd4-gcj47    0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  810m (81%)    0 (0%)      110Mi (6%)       170Mi (9%)
Events:         <none>

Find the "Taints:" line, then remove the taint by appending a trailing - to it:

[root@k8s-master ~]# kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
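
Should you want the default behavior back later, the taint can be re-applied (a sketch; note that the trailing - above removes, while this form adds):

### Restore the master's NoSchedule taint ###
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule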