Installing a Kubernetes Cluster with kubeadm on CentOS 7.5

Date: 2022-06-01 19:30:02

This article follows the steps in https://blog.csdn.net/Excairun/article/details/88962769 and records the results.

Versions: Kubernetes v1.14.0, docker-ce 18.09.3

1. Environment Preparation

Kernel and OS version

[root@k8s-node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k8s-node1 ~]# uname -r
3.10.0-957.el7.x86_64
[root@k8s-node1 ~]# uname -a
Linux k8s-node1 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-node1 ~]#

Set the hostname and configure /etc/hosts

[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname k8s-master ## temporary, lost after reboot
[root@localhost ~]# hostnamectl set-hostname k8s-master ## permanent, effective after reboot
[root@localhost ~]# reboot
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.111.130 k8s-master
192.168.111.131 k8s-node1
[root@localhost ~]#

Synchronize time

yum install -y ntpdate
ntpdate -u ntp.api.bz
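
A one-off ntpdate sync drifts again over time. One way to keep the clocks aligned is a periodic re-sync via cron (a minimal sketch; the 30-minute interval and the ntp.api.bz server are just the values used above, not requirements):

# Hypothetical example: re-sync the clock every 30 minutes
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate -u ntp.api.bz > /dev/null 2>&1") | crontab -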

Disable the firewall, SELinux, and swap; configure the bridge network settings

# All hosts: basic system configuration

# Disable SELinux and firewalld
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable the swap partition
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

# Make bridged packets traverse iptables
modprobe br_netfilter
cat > /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p
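
Before moving on, it is worth verifying that these settings took effect, and persisting the br_netfilter module (modprobe only loads it for the current boot; the modules-load.d file is my addition, not in the original article):

# Verify: swap should be 0, both bridge sysctls should print 1
free -m | grep -i swap
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
# Persist br_netfilter across reboots via systemd's modules-load.d
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf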

2. Install Docker

# Install basic utilities for yum; skip if already installed
yum install -y net-tools epel-release
yum install -y vim yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# List the Docker versions available for install
yum list docker-ce.x86_64 --showduplicates | sort -r
# Install Docker; without an explicit version, the latest available version is installed
yum install -y docker-ce-18.09.3-3.el7
# Enable on boot
systemctl enable docker
# Start the service
systemctl start docker
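
A quick check that Docker installed at the pinned version and is actually running:

docker version            # client and server should both report 18.09.3
systemctl status docker   # should show active (running)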

3. Kubernetes Configuration

Configure the Kubernetes packages to come from the Aliyun mirror

[root@localhost ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
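
After writing the repo file, refreshing the yum metadata confirms the new repo is reachable (an optional sanity check, not in the original article):

yum makecache fast
yum repolist | grep kubernetes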

Install kubectl, kubelet, kubernetes-cni, and kubeadm

yum install -y kubectl-1.14.0 kubelet-1.14.0 kubernetes-cni kubeadm-1.14.0
systemctl enable kubelet # enable on boot
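
Confirm the installed versions before initializing; all of the following should report v1.14.0:

kubeadm version
kubelet --version
kubectl version --client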

Set Docker's cgroup driver to systemd, matching what kubelet expects

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

Restart Docker

[root@localhost ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@localhost ~]#
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
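
Verify that the cgroup driver change took effect:

docker info | grep -i cgroup   # should print: Cgroup Driver: systemd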

Check the image versions required by kubeadm v1.14.0

[root@localhost ~]# kubeadm config images list
I0402 ::29.358043 version.go:] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0402 ::29.358209 version.go:] falling back to the local client version: v1.14.0
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Pull the images via a domestic (Aliyun) mirror

[root@localhost ~]# cat pullK8sImages.sh
#!/bin/bash
KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
DNS_VERSION=1.3.1
DASHBOARD_VERSION=v1.10.1
username=registry.cn-hangzhou.aliyuncs.com/google_containers
# The extra dashboard image is pulled in preparation for a later install
images=(
    kube-proxy-amd64:${KUBE_VERSION}
    kube-scheduler-amd64:${KUBE_VERSION}
    kube-controller-manager-amd64:${KUBE_VERSION}
    kube-apiserver-amd64:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    coredns:${DNS_VERSION}
    kubernetes-dashboard-amd64:${DASHBOARD_VERSION}
)
for image in "${images[@]}"
do
    # Strip the -amd64 suffix so the final tag matches what kubeadm expects
    NEW_IMAGE=$(echo ${image} | awk '{gsub(/-amd64/,"",$0);print}')
    echo ${NEW_IMAGE}
    docker pull ${username}/${image}
    docker tag ${username}/${image} k8s.gcr.io/${NEW_IMAGE}
    docker rmi ${username}/${image}
done
[root@localhost ~]# sh pullK8sImages.sh

Pull results

[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.14.0 5cd54e388aba days ago 82.1MB
k8s.gcr.io/kube-scheduler v1.14.0 00638a24688b days ago 81.6MB
k8s.gcr.io/kube-controller-manager v1.14.0 b95b1efa0436 days ago 158MB
k8s.gcr.io/kube-apiserver v1.14.0 ecf910f40d6e days ago 210MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 months ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 months ago 742kB
[root@localhost ~]#

4. Initialize the Cluster with kubeadm (master only)

kubeadm init --kubernetes-version=v1.14.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.111.130 --token-ttl 0 --ignore-preflight-errors=Swap

Init output

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.111.130:6443 --token ymtl8s.933t59qfezi9gjcq \
    --discovery-token-ca-cert-hash sha256:7816d0b2572e6c569ed8e63ece15a7a08d06ed3fc89698245bf2aaa6acc345d7

If the final join command is printed, the installation succeeded. If it fails, you can run:

kubeadm reset

rm -rf $HOME/.kube/config

After that cleanup, adjust the configuration and run kubeadm init again.

To use kubectl on the master node, set up the kubeconfig as follows:

[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@localhost ~]# hostname
k8s-master
[root@localhost ~]# mkdir -p $HOME/.kube
[root@localhost ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost ~]#

[root@localhost ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.111.130:6443
KubeDNS is running at https://192.168.111.130:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Configure the Flannel Network (master only)

If the download fails, retry a few times.

[root@localhost ~]# yum -y install wget
[root@localhost ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@localhost ~]# kubectl  apply -f kube-flannel.yml
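
The flannel pods take a short while to start, and the node only turns Ready once the CNI plugin is up. A couple of commands to watch the progress:

kubectl get pods -n kube-system -o wide | grep flannel   # wait for Running
kubectl get nodes                                        # master should become Ready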

6. Add Worker Nodes (run on each node)

Rename the node machine (permanent hostname change on CentOS 7.5):

[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname k8s-node1 ## temporary, lost after reboot
[root@localhost ~]# hostnamectl set-hostname k8s-node1 ## permanent, effective after reboot
[root@localhost ~]# reboot

Then run the join command printed by kubeadm init:

kubeadm join 192.168.111.130:6443 --token ymtl8s.933t59qfezi9gjcq \
    --discovery-token-ca-cert-hash sha256:7816d0b2572e6c569ed8e63ece15a7a08d06ed3fc89698245bf2aaa6acc345d7
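
If the token has expired or the original join command was lost, a fresh one can be printed on the master:

kubeadm token create --print-join-command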

Join output

[root@localhost ~]# kubeadm join 192.168.111.130:6443 --token ymtl8s.933t59qfezi9gjcq     --discovery-token-ca-cert-hash sha256:7816d0b2572e6c569ed8e63ece15a7a08d06ed3fc89698245bf2aaa6acc345d7
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@localhost ~]#

If it reports an error, fix the cause and then run kubeadm join again:

[root@localhost ~]# kubeadm join 192.168.111.130:6443 --token ymtl8s.933t59qfezi9gjcq \
> --discovery-token-ca-cert-hash sha256:7816d0b2572e6c569ed8e63ece15a7a08d06ed3fc89698245bf2aaa6acc345d7
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@localhost ~]# cat /proc/sys/net/ipv4/ip_forward
1
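
The echo above only changes the running kernel. To make ip_forward survive a reboot, persist it the same way as the earlier sysctl settings:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p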

(master) Check node status

[root@localhost ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 40m v1.14.0
k8s-node1 Ready <none> 4m31s v1.14.0

View node details

[root@localhost ~]# kubectl describe nodes k8s-node1
Name: k8s-node1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-node1
kubernetes.io/os=linux
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ca:bc:84:14:09:94"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.111.131
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, Apr :: +
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, Apr :: + Tue, Apr :: + KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, Apr :: + Tue, Apr :: + KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, Apr :: + Tue, Apr :: + KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, Apr :: + Tue, Apr :: + KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.111.131
Hostname: k8s-node1
Capacity:
cpu:
ephemeral-storage: 17394Mi
hugepages-1Gi:
hugepages-2Mi:
memory: 995896Ki
pods:
Allocatable:
cpu:
ephemeral-storage:
hugepages-1Gi:
hugepages-2Mi:
memory: 893496Ki
pods:
System Info:
Machine ID: a5a43f5916c643bf83d6f99425a4b9d2
System UUID: FCCE4D56-202D-568C--7A69D9ADF401
Boot ID: 68e3c38d-d1d6--9af5-01a9699ce00c
Kernel Version: 3.10.0-957.el7.x86_64
OS Image: CentOS Linux (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.3
Kubelet Version: v1.14.0
Kube-Proxy Version: v1.14.0
PodCIDR: 10.244.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kube-flannel-ds-amd64-bsrkx 100m (%) 100m (%) 50Mi (%) 50Mi (%) 7m19s
kube-system kube-proxy-2mj4q (%) (%) (%) (%) 7m19s
Allocated resources:
(Total limits may be over percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (%) 100m (%)
memory 50Mi (%) 50Mi (%)
ephemeral-storage (%) (%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m20s kubelet, k8s-node1 Starting kubelet.
Normal NodeHasSufficientMemory 7m20s (x2 over 7m20s) kubelet, k8s-node1 Node k8s-node1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m20s (x2 over 7m20s) kubelet, k8s-node1 Node k8s-node1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m20s (x2 over 7m20s) kubelet, k8s-node1 Node k8s-node1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m19s kubelet, k8s-node1 Updated Node Allocatable limit across pods
Normal Starting 7m16s kube-proxy, k8s-node1 Starting kube-proxy.
Normal NodeReady 6m29s kubelet, k8s-node1 Node k8s-node1 status is now: NodeReady
[root@localhost ~]#

View system pod info with kubectl

[root@k8s-master ~]# kubectl get namespaces ## get also works for services, deployments, pods, replicasets
NAME STATUS AGE
default Active 3h17m
kube-node-lease Active 3h17m
kube-public Active 3h17m
kube-system Active 3h17m
[root@k8s-master ~]# kubectl get po --all-namespaces -o wide ## -o wide shows which node each pod runs on; --namespace=kube-system filters to system pods
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-fb8b8dccf-ltlx4 1/1 Running 3h12m 10.244.0.4 k8s-master <none> <none>
kube-system coredns-fb8b8dccf-q949f 1/1 Running 3h12m 10.244.0.5 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 3h11m 192.168.111.130 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 3h12m 192.168.111.130 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 3h11m 192.168.111.130 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-2gr2v 1/1 Running 177m 192.168.111.130 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-bsrkx 1/1 Running 157m 192.168.111.131 k8s-node1 <none> <none>
kube-system kube-flannel-ds-amd64-xdg5p 1/1 Running 15m 192.168.111.132 k8s-node2 <none> <none>
kube-system kube-proxy-2mj4q 1/1 Running 157m 192.168.111.131 k8s-node1 <none> <none>
kube-system kube-proxy-ffd8s 1/1 Running 15m 192.168.111.132 k8s-node2 <none> <none>
kube-system kube-proxy-qp5k7 1/1 Running 3h12m 192.168.111.130 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 3h12m 192.168.111.130 k8s-master <none> <none>
[root@k8s-master ~]#

View containers with docker ps -a

[root@k8s-master docker]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8a00a7aab85 tomcat: "catalina.sh run" seconds ago Up seconds 0.0.0.0:->/tcp mytomcat_1
75bde1b990e7 b95b1efa0436 "kube-controller-man…" minutes ago Up minutes k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_0ff88c9b6e64cded3762e51ff18bce90_4
0073cac306f0 ff281650a721 "/opt/bin/flanneld -…" minutes ago Up minutes k8s_kube-flannel_kube-flannel-ds-amd64-2gr2v_kube-system_4cf85962--11e9-a96d-000c291ae345_1
f769c27066dc ff281650a721 "cp -f /etc/kube-fla…" minutes ago Exited () minutes ago k8s_install-cni_kube-flannel-ds-amd64-2gr2v_kube-system_4cf85962--11e9-a96d-000c291ae345_1
b54ff4716e39 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-flannel-ds-amd64-2gr2v_kube-system_4cf85962--11e9-a96d-000c291ae345_1
408f7e4b42e8 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-controller-manager-k8s-master_kube-system_0ff88c9b6e64cded3762e51ff18bce90_4
fecb42b177df eb516548c180 "/coredns -conf /etc…" minutes ago Up minutes k8s_coredns_coredns-fb8b8dccf-ltlx4_kube-system_2f8b4ffb-550e-11e9-a96d-000c291ae345_1
e5dbdf2d4af3 eb516548c180 "/coredns -conf /etc…" minutes ago Up minutes k8s_coredns_coredns-fb8b8dccf-q949f_kube-system_2f8c9997-550e-11e9-a96d-000c291ae345_1
de96a90cd8c8 ecf910f40d6e "kube-apiserver --ad…" minutes ago Up minutes k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_078b26a5af4c34641521cf85bb8b5ee7_3
2a5b2f1eff20 5cd54e388aba "/usr/local/bin/kube…" minutes ago Up minutes k8s_kube-proxy_kube-proxy-qp5k7_kube-system_2f1a1559-550e-11e9-a96d-000c291ae345_1
c9de0db0ce55 00638a24688b "kube-scheduler --bi…" minutes ago Up minutes k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_58272442e226c838b193bbba4c44091e_3
97364e091073 2c4adeb21b4f "etcd --advertise-cl…" minutes ago Up minutes k8s_etcd_etcd-k8s-master_kube-system_804ba6a1bef952d18f2040a1ff90dbc3_3
bd0c87c3d533 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_coredns-fb8b8dccf-q949f_kube-system_2f8c9997-550e-11e9-a96d-000c291ae345_4
08f894c8c252 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_coredns-fb8b8dccf-ltlx4_kube-system_2f8b4ffb-550e-11e9-a96d-000c291ae345_4
e0e7875598de k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-scheduler-k8s-master_kube-system_58272442e226c838b193bbba4c44091e_3
461eaa7b2899 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-proxy-qp5k7_kube-system_2f1a1559-550e-11e9-a96d-000c291ae345_1
5233aca7eca5 k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_etcd-k8s-master_kube-system_804ba6a1bef952d18f2040a1ff90dbc3_3
841be3a3cc7d k8s.gcr.io/pause:3.1 "/pause" minutes ago Up minutes k8s_POD_kube-apiserver-k8s-master_kube-system_078b26a5af4c34641521cf85bb8b5ee7_4
4698599e362c b95b1efa0436 "kube-controller-man…" minutes ago Exited () minutes ago k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_0ff88c9b6e64cded3762e51ff18bce90_3
f0f87dd886f9 k8s.gcr.io/pause:3.1 "/pause" minutes ago Exited () minutes ago k8s_POD_kube-controller-manager-k8s-master_kube-system_0ff88c9b6e64cded3762e51ff18bce90_3
a0c24081b996 eb516548c180 "/coredns -conf /etc…" About an hour ago Exited () minutes ago k8s_coredns_coredns-fb8b8dccf-q949f_kube-system_2f8c9997-550e-11e9-a96d-000c291ae345_0
b8b4efead3b4 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_coredns-fb8b8dccf-q949f_kube-system_2f8c9997-550e-11e9-a96d-000c291ae345_3
96a92d82c0dc eb516548c180 "/coredns -conf /etc…" About an hour ago Exited () minutes ago k8s_coredns_coredns-fb8b8dccf-ltlx4_kube-system_2f8b4ffb-550e-11e9-a96d-000c291ae345_0
c2f167c6e9c8 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_coredns-fb8b8dccf-ltlx4_kube-system_2f8b4ffb-550e-11e9-a96d-000c291ae345_3
cb1e6e3458ee ff281650a721 "/opt/bin/flanneld -…" About an hour ago Exited () minutes ago k8s_kube-flannel_kube-flannel-ds-amd64-2gr2v_kube-system_4cf85962--11e9-a96d-000c291ae345_0
99acbfbb1e68 5cd54e388aba "/usr/local/bin/kube…" About an hour ago Exited () minutes ago k8s_kube-proxy_kube-proxy-qp5k7_kube-system_2f1a1559-550e-11e9-a96d-000c291ae345_0
09195555e12f k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_kube-proxy-qp5k7_kube-system_2f1a1559-550e-11e9-a96d-000c291ae345_0
ed47844f108f k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_kube-flannel-ds-amd64-2gr2v_kube-system_4cf85962--11e9-a96d-000c291ae345_0
f09401726136 00638a24688b "kube-scheduler --bi…" About an hour ago Exited () minutes ago k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_58272442e226c838b193bbba4c44091e_2
a03a28a6ef98 ecf910f40d6e "kube-apiserver --ad…" About an hour ago Exited () minutes ago k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_078b26a5af4c34641521cf85bb8b5ee7_2
4769961090db 2c4adeb21b4f "etcd --advertise-cl…" About an hour ago Exited () minutes ago k8s_etcd_etcd-k8s-master_kube-system_804ba6a1bef952d18f2040a1ff90dbc3_2
191090440ed3 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_etcd-k8s-master_kube-system_804ba6a1bef952d18f2040a1ff90dbc3_2
89001ab3e457 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_kube-apiserver-k8s-master_kube-system_078b26a5af4c34641521cf85bb8b5ee7_2
4e93b1057da0 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Exited () minutes ago k8s_POD_kube-scheduler-k8s-master_kube-system_58272442e226c838b193bbba4c44091e_2

At this point, the Kubernetes installation is complete.
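
As a final smoke test (my addition, not part of the original article), a throwaway nginx deployment confirms that scheduling and networking work end to end:

kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide          # should land on a worker node and reach Running
kubectl delete deployment nginx-test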