Installing Kubernetes 1.15.1 with kubeadm

Date: 2023-03-09 14:47:02

Introduction:

Kubernetes, the container orchestration platform open-sourced by Google, has attracted a great deal of attention. Standing up a complete Kubernetes cluster is the first hurdle anyone has to clear before trying the platform out. Up to and including Kubernetes 1.5, installation was relatively straightforward; there was even an official yum-based install for CentOS 7. Since Kubernetes 1.6, however, installation has become considerably more involved, with certificates and various kinds of authentication to set up, which is unfriendly to newcomers.

docker : the container runtime that Kubernetes depends on
kubelet: the core Kubernetes agent; one runs on every node and manages the lifecycle of pods and of the node itself
kubectl: the Kubernetes command-line tool, typically used on the master (it works from any machine with a valid kubeconfig)
kubeadm: the tool used to bootstrap, i.e. initialize, a Kubernetes cluster

Architecture:

Configure hosts

[root@master /]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
18.16.202.163 master
18.16.202.227 slaver1
18.16.202.95 slaver2

Configure a proxy for internet access:

Add the following to /etc/profile:

export http_proxy="http://18.16.202.169:8118"
export https_proxy="https://18.16.202.169:8118" printf -v no_ip_proxy '%s,' 18.16.202.{1..255}; export no_proxy=.baidu.com,.aliyun.com,.aliyuncs.com,.360doc.com,.163.com,.163yun.com,.tencent.com,qq.com,.daocloud.io,.cn,local,localhost,localdomain,127.0.0.1,"${no_ip_proxy%,}"

Note that wildcard (*) patterns are not supported in no_proxy on Linux. A second example, with a different proxy host and subnet:

ip_host="192.168.3.7:8118"
export http_proxy="http://${ip_host}"
export https_proxy="https://${ip_host}" printf -v no_ip_proxy '%s,' 192.168.236.{1..255}; export no_proxy=.baidu.com,.aliyun.com,.aliyuncs.com,.360doc.com,.163.com,.163yun.com,.tencent.com,qq.com,.daocloud.io,.cn,local,localhost,localdomain,127.0.0.1,"${no_ip_proxy%,}"

To stop using the proxy, unset the variables directly:

unset https_proxy
unset http_proxy

To switch the proxy back on, re-source the profile:

source /etc/profile
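
A quick, hedged way to confirm that the proxy variables are active in the current shell (the URL is just an example target):

env | grep -i _proxy
curl -I https://packages.cloud.google.com/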

Prerequisites on all nodes:

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Let iptables see bridged traffic:

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
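
To verify the values took effect, and optionally to make sure br_netfilter is loaded again after a reboot, the following sketch uses systemd's modules-load.d mechanism (an addition to the original steps):

# Check the values now in effect
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Load br_netfilter automatically on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf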

Disable SELinux:

setenforce 0

Edit /etc/selinux/config and set SELINUX to disabled, for example with sed:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# The file should now contain: SELINUX=disabled

Disable swap

Since Kubernetes 1.8, swap must be disabled; with the default configuration the kubelet will refuse to start otherwise. Either pass the kubelet startup flag --fail-swap-on=false to lift this restriction, or disable swap on the system:

swapoff -a

Edit /etc/fstab to comment out the swap mount, then confirm with free -m that swap is off.

# Comment out the swap partition
[root@localhost /]# sed -i 's/.*swap.*/#&/' /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
[root@localhost /]# free -m
              total        used        free      shared  buff/cache   available
Mem:            962         154         446           6         361         612
Swap:             0           0           0

In addition, set vm.swappiness to 0 so the kernel avoids swapping:

echo "vm.swappiness = 0">> /etc/sysctl.conf

Prerequisites for enabling IPVS mode in kube-proxy

IPVS has been merged into the mainline kernel, so enabling IPVS mode for kube-proxy only requires loading the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on every Kubernetes node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which guarantees the required modules are loaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the kernel modules loaded correctly.

Next, make sure the ipset package is installed on every node.

yum install -y ipset

To make it easier to inspect IPVS proxy rules, it is also worth installing the management tool ipvsadm.

yum install -y ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if IPVS mode is configured.

Install Docker

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast

List the available Docker versions:

[root@localhost /]# yum list docker-ce.x86_64  --showduplicates |sort -r
Loaded plugins: fastestmirror
Available Packages
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
* extras: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable

Install Docker:

# To install the latest version instead: sudo yum -y install docker-ce
sudo yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl enable docker.service
systemctl restart docker

Here docker-ce 18.09 is installed.

Enable Docker at boot:

[root@master /]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Change the Docker cgroup driver to systemd

For distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps the node more stable when resources are tight, so change the Docker cgroup driver to systemd on every node.

Create or edit /etc/docker/daemon.json:

{
  "registry-mirrors": ["https://tqvgn53t.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker and confirm the cgroup driver:

systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd

Install kubeadm and kubelet

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable:

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

Install the packages:

yum makecache fast
yum install -y kubelet kubeadm kubectl
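
This installs whatever version is newest in the repository. Since the rest of this article targets 1.15.1, you may prefer to pin the packages; a hedged example (the exact release available in the repo may differ):

yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1
systemctl enable kubelet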

Pull the Docker images needed by kubeadm init

Before initializing the cluster you can run kubeadm config images pull to pre-pull the images Kubernetes needs on every node:

[root@localhost /]# kubeadm config images list
W0725 10:52:57.395062 8776 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0725 10:52:57.395395 8776 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@localhost /]# kubeadm config images pull
W0725 10:55:12.586377 8781 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: proxyconnect tcp: net/http: TLS handshake timeout
W0725 10:55:12.586550 8781 version.go:99] falling back to the local client version: v1.15.1

This is clearly a network problem: the images on k8s.gcr.io cannot be fetched.

Using a mirror found online instead, create a shell script with the following contents and run it:

#!/bin/bash
MY_REGISTRY=gcr.azk8s.cn/google-containers

## Pull the images from the mirror
docker pull ${MY_REGISTRY}/kube-apiserver:v1.15.1
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.15.1
docker pull ${MY_REGISTRY}/kube-scheduler:v1.15.1
docker pull ${MY_REGISTRY}/kube-proxy:v1.15.1
docker pull ${MY_REGISTRY}/pause:3.1
docker pull ${MY_REGISTRY}/etcd:3.3.10
docker pull ${MY_REGISTRY}/coredns:1.3.1

## Re-tag them as k8s.gcr.io
docker tag ${MY_REGISTRY}/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag ${MY_REGISTRY}/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag ${MY_REGISTRY}/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

## Remove the now-redundant mirror tags
docker images | grep ${MY_REGISTRY} | awk '{print "docker rmi " $1":"$2}' | sh -x
echo "end"

All of the steps so far can be done on one node and then simply repeated on (or copied to) the other nodes.
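
A quick, hedged way to confirm that every image kubeadm expects is now present locally before running kubeadm init:

kubeadm config images list 2>/dev/null | while read -r img; do
  if docker image inspect "$img" >/dev/null 2>&1; then
    echo "OK       $img"
  else
    echo "MISSING  $img"
  fi
done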

Cluster operations

kubeadm init configuration

Use kubeadm config print init-defaults to print the default configuration kubeadm uses when initializing a cluster:

[root@localhost /]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: localhost.localdomain
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

The default configuration shows that imageRepository can be used to customize the registry from which kubeadm pulls the images it needs during initialization.

Based on these defaults, create the configuration file used to initialize this cluster:

kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 18.16.202.163
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  podSubnet: 10.244.0.0/16

A cluster initialized with the kubeadm defaults taints the master node with node-role.kubernetes.io/master:NoSchedule, which prevents the master from accepting workloads. Since this is a small test environment, the taint is relaxed here to node-role.kubernetes.io/master:PreferNoSchedule.
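
The taint can be inspected or adjusted later with kubectl if you change your mind; a hedged example (the node name master matches this cluster):

# Show the taints currently set on the master node
kubectl describe node master | grep -i taint
# Remove the PreferNoSchedule taint entirely so the master schedules workloads freely
kubectl taint nodes master node-role.kubernetes.io/master:PreferNoSchedule-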

Initialize the cluster with kubeadm

To initialize the cluster, run the following command on the master:

Because this is a virtual machine with only one CPU, the flag --ignore-preflight-errors=NumCPU is added; if you have enough CPUs, leave it out.

[root@master /]# kubeadm init --config /home/kubeadm.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "https://18.16.202.169:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "https://18.16.202.169:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [18.16.202.163 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [18.16.202.163 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 18.16.202.163]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 46.528199 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: jrts59.18pe12atfafgcxca
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
    --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426

The output above records the entire initialization; from it you can see the key steps needed to stand up a Kubernetes cluster manually. The key points are:

  • [kubelet-start] writes the kubelet configuration file "/var/lib/kubelet/config.yaml"

  • [certs] generates all of the required certificates

  • [kubeconfig] generates the kubeconfig files

  • [control-plane] creates the static pods for the apiserver, controller-manager and scheduler from the YAML files in /etc/kubernetes/manifests

  • [bootstraptoken] generates the bootstrap token; record it, since it is needed later when adding nodes to the cluster with kubeadm join

  • The following commands configure kubectl access to the cluster for a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Finally, the join command for adding nodes to the cluster is printed: kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca --discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426

If anything goes wrong during initialization, reset and start over with:

kubeadm reset

Check the cluster status and confirm that every component is Healthy:

[root@master /]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

Join the slaver nodes to the cluster

kubeadm join 18.16.202.163:6443 --token jrts59.18pe12atfafgcxca \
--discovery-token-ca-cert-hash sha256:56d6c7d7b63a9109444ece68a1b155d8a9ac049ba57febab2c72d40d8ab7d426
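
The bootstrap token is only valid for 24 hours by default. If it has expired by the time a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command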

View cluster information

On the master:

[root@master /]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready master 5h7m v1.15.1 18.16.202.163 <none> CentOS Linux 7 (Core) 4.17.6-1.el7.elrepo.x86_64 docker://18.9.8
slaver1 Ready <none> 4h38m v1.15.1 18.16.202.227 <none> CentOS Linux 7 (Core) 4.17.6-1.el7.elrepo.x86_64 docker://18.9.8
slaver2 Ready <none> 4h35m v1.15.1 18.16.202.95 <none> CentOS Linux 7 (Core) 4.17.6-1.el7.elrepo.x86_64 docker://18.9.8

Restart the kubelet:

# Reload any changed unit files
systemctl daemon-reload
# Restart the kubelet
systemctl restart kubelet.service
# Enable the kubelet at boot
systemctl enable kubelet.service

Check the cluster info:

[root@master /]# kubectl cluster-info
Kubernetes master is running at https://18.16.202.163:6443
KubeDNS is running at https://18.16.202.163:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

List the running pods:

[root@master /]#  kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5c98db65d4-gts57 1/1 Running 0 5h9m 10.244.2.2 slaver2 <none> <none>
kube-system coredns-5c98db65d4-qhwrw 1/1 Running 0 5h9m 10.244.1.2 slaver1 <none> <none>
kube-system etcd-master 1/1 Running 2 5h9m 18.16.202.163 master <none> <none>
kube-system kube-apiserver-master 1/1 Running 2 5h8m 18.16.202.163 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 5 5h9m 18.16.202.163 master <none> <none>
kube-system kube-flannel-ds-amd64-2lwl8 1/1 Running 0 41m 18.16.202.227 slaver1 <none> <none>
kube-system kube-flannel-ds-amd64-9bjck 1/1 Running 0 41m 18.16.202.95 slaver2 <none> <none>
kube-system kube-flannel-ds-amd64-gxxqg 1/1 Running 0 41m 18.16.202.163 master <none> <none>
kube-system kube-proxy-6gxw9 1/1 Running 0 4h39m 18.16.202.227 slaver1 <none> <none>
kube-system kube-proxy-rx8vv 1/1 Running 0 4h37m 18.16.202.95 slaver2 <none> <none>
kube-system kube-proxy-skw5b 1/1 Running 3 5h9m 18.16.202.163 master <none> <none>
kube-system kube-scheduler-master 1/1 Running 6 5h8m 18.16.202.163 master <none> <none>

Install the pod network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

If a node has multiple network interfaces, see flannel issue 39701: you currently need to add the --iface flag in kube-flannel.yml to name the node's internal interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments.

# If a node has multiple network interfaces, see flannel issue 39701:
# https://github.com/kubernetes/kubernetes/issues/39701
# You currently need the --iface flag in kube-flannel.yml to name the node's
# internal interface; otherwise DNS resolution or pod-to-pod traffic may fail.
# Download kube-flannel.yml locally and add --iface=<iface-name> to the
# flanneld startup arguments.
containers:
- name: kube-flannel
  image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=ens33
  - --iface=eth0

⚠️ The value of --iface=ens33 must be the node's actual interface name; multiple --iface flags can be given.
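
To find the right interface name on a node, one simple hedged check is to list the IPv4 addresses per interface, or to ask the routing table which interface reaches the master's address:

ip -o -4 addr show             # interface name and IPv4 address per line
ip route get 18.16.202.163     # shows the interface used to reach the master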

Test that cluster DNS works

[root@master /]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.

Inside the container, run nslookup kubernetes.default to confirm that name resolution works:

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

The container's current network configuration:

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ ifconfig
eth0 Link encap:Ethernet HWaddr D6:20:96:C7:DA:5A
inet addr:10.244.2.3 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2285 (2.2 KiB) TX bytes:889 (889.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

[ root@curl-6bf6db5c4f-vhsqc:/ ]$ exit
Session ended, resume using 'kubectl attach curl-6bf6db5c4f-vhsqc -c curl -i -t' command when the pod is running

Check the nodes:

# Nodes only show a Ready status once the network plugin has been installed and configured

[root@master /]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 6h45m v1.15.1
slaver1 Ready <none> 6h15m v1.15.1
slaver2 Ready <none> 6h12m v1.15.1

Remove a node from the cluster

To remove a node, for example node2, from the cluster, run the following commands.

On the master:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
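
kubeadm reset does not flush iptables or IPVS rules; if the node is going to be reused, they can be cleared manually as well (a hedged extra step, not part of the original procedure):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear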

Enable IPVS mode in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

[root@master /]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited

cm is the abbreviation for configmaps.
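
If you prefer a non-interactive change, the same edit can be scripted; a minimal sketch, assuming the field currently reads mode: "" in the ConfigMap:

kubectl -n kube-system get cm kube-proxy -o yaml \
  | sed 's/mode: ""/mode: "ipvs"/' \
  | kubectl apply -f -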

After the change, the kube-proxy configuration looks like this:

apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy

The mode: "ipvs" line is the part that was changed.

Delete the kube-proxy pods on every node so they restart with the new configuration:

[root@master /]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-6gxw9" deleted
pod "kube-proxy-rx8vv" deleted
pod "kube-proxy-skw5b" deleted

Check:

[root@master /]# kubectl get pod -n kube-system -o wide | grep kube-proxy
kube-proxy-8cwj4 1/1 Running 0 2m35s 18.16.202.163 master <none> <none>
kube-proxy-j9zpz 1/1 Running 0 2m48s 18.16.202.227 slaver1 <none> <none>
kube-proxy-vfgjv 1/1 Running 0 2m38s 18.16.202.95 slaver2 <none> <none>

[root@master /]# kubectl logs kube-proxy-8cwj4 -n kube-system
I0729 07:05:35.580934 1 server_others.go:170] Using ipvs Proxier.
W0729 07:05:35.585891 1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0729 07:05:35.588572 1 server.go:534] Version: v1.15.1
I0729 07:05:35.642475 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0729 07:05:35.653344 1 config.go:96] Starting endpoints config controller
I0729 07:05:35.654584 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0729 07:05:35.654629 1 config.go:187] Starting service config controller
I0729 07:05:35.654649 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0729 07:05:35.755738 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0729 07:05:35.755806 1 controller_utils.go:1036] Caches are synced for service config controller

The log prints "Using ipvs Proxier", confirming that IPVS mode is enabled.
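
With IPVS active, the virtual servers and their backends can also be inspected directly with the ipvsadm tool installed earlier:

ipvsadm -Ln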

References:

Installing Kubernetes 1.15 with kubeadm

Installing a Kubernetes 1.13 cluster with kubeadm

Kubernetes Install

kubectl command reference