2. Quickly Deploying a Kubernetes (v1.15.0) Cluster with kubeadm (190623)

Date: 2022-01-30 12:58:16

I. Network Planning

  • Node network: 192.168.100.0/24
  • Service network: 10.96.0.0/12
  • Pod network (default): 10.244.0.0/16

II. Component Layout and Node Planning

  • master (192.168.100.51): API Server / etcd / controller-manager / scheduler
  • node (192.168.100.61, 62, ...): kube-proxy / kubelet / docker / flannel

III. Base Environment

  1. Kernel 3.10+ or 4+
  2. Docker version <= 17.03
  3. Disable swap
  4. Configure hosts resolution
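Step 3 can be done as follows (a minimal sketch; it assumes swap is declared in /etc/fstab in the usual format):

```shell
# Turn swap off for the running system.
swapoff -a
# Comment out swap entries so the change survives a reboot (keeps a .bak copy).
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```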
192.168.100.51  master
192.168.100.61 node01
192.168.100.62 node02
  5. Configure time synchronization
# echo "0 */1 * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root
# date; ssh node01 'date'; ssh node02 'date'
  6. Stop iptables/firewalld and disable SELinux
# systemctl stop firewalld
# systemctl disable firewalld
# iptables -vnL
# vim /etc/selinux/config
SELINUX=disabled
# reboot
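On CentOS 7, kubeadm's preflight checks also expect bridged traffic to be visible to iptables. This step is not in the original notes, so treat it as an assumed addition, but it is commonly required:

```shell
# Load the br_netfilter module, then enable the sysctls kubeadm checks for
# (bridged packets must traverse iptables).
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
</imports>
```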
  7. Install Docker
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum list docker-ce --showduplicates | sort -r  # list the available docker-ce versions
# yum install -y --setopt=obsoletes=0 docker-ce-17.03.0.ce-1.el7.centos
# systemctl start docker
# systemctl enable docker
# docker version
Version: 17.03.0-ce
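Worth checking at this point: the cgroup driver Docker reports, since kubelet must be configured to match it (cgroupfs is the Docker default):

```shell
# Show the cgroup driver Docker is using; kubelet's --cgroup-driver
# setting must agree with this value or the kubelet will fail to start.
docker info 2>/dev/null | grep -i 'cgroup driver'
```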
  8. Import the pre-staged Kubernetes images (all nodes)
# vim pull-k8s-images.sh
#!/bin/bash
img_list='k8s.gcr.io/kube-apiserver:v1.15.0
k8s.gcr.io/kube-controller-manager:v1.15.0
k8s.gcr.io/kube-scheduler:v1.15.0
k8s.gcr.io/kube-proxy:v1.15.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
quay.io/coreos/flannel:v0.11.0-amd64'
for img in $img_list; do  # pull each image
    docker pull $img
done
docker save -o k8s-img-v1.15.0.tar.gz $img_list  # bundle all images into one archive
# bash pull-k8s-images.sh
# docker load -i k8s-img-v1.15.0.tar.gz  # load the archive on each node

All of the steps above must be executed on every node.

IV. Deploy the Master Node

  1. Install kubelet / kubeadm / kubectl
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# yum install kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0 -y  # pin to the target version
[root@k8s-master ~]# systemctl enable kubelet
  2. Initialize the master
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
  3. Configure cluster admin access
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Record the cluster join command
kubeadm join 192.168.100.51:6443 --token nag8y9.9vllybijsnn7xrzd \
--discovery-token-ca-cert-hash sha256:0f8e9cec4c19ca004fd7c9a906691e5295dd5e38e5265e0edcba0b06cc2a7e14
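The token above expires after 24 hours by default. If it has lapsed by the time a node joins, a fresh join command can be printed on the master:

```shell
# Re-issue a bootstrap token together with the full join command.
kubeadm token create --print-join-command
```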
  5. Deploy flannel
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
  6. Verify the deployment
[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   54m   v1.15.0
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4vwgt         1/1     Running   0          54m
coredns-5c98db65d4-72l8v         1/1     Running   0          54m
etcd-master                      1/1     Running   0          53m
kube-apiserver-master            1/1     Running   0          53m
kube-controller-manager-master   1/1     Running   0          53m
kube-flannel-ds-amd64-8wznx      1/1     Running   0          9m22s
kube-proxy-wb86v                 1/1     Running   0          54m
kube-scheduler-master            1/1     Running   0          53m

V. Deploy the Worker Nodes

  1. Install kubelet / kubeadm
[root@k8s-node01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-node01 ~]# yum install kubeadm-1.15.0 kubelet-1.15.0 -y  # pin to the target version
[root@k8s-node01 ~]# systemctl enable kubelet
  2. Join the node to the cluster
[root@k8s-node01 ~]# kubeadm join 192.168.100.51:6443 --token nag8y9.9vllybijsnn7xrzd \
--discovery-token-ca-cert-hash sha256:0f8e9cec4c19ca004fd7c9a906691e5295dd5e38e5265e0edcba0b06cc2a7e14
  3. On the master, verify that the nodes have joined
[root@k8s-master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   Ready      master   72m     v1.15.0
node01   Ready      <none>   5m33s   v1.15.0
node02   NotReady   <none>   14s     v1.15.0
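node02 shows NotReady simply because it just joined; once its flannel and kube-proxy pods are running, the status flips to Ready. The progress can be watched from the master:

```shell
# Watch the new node's system pods come up (Ctrl-C to stop).
kubectl get pods -n kube-system -o wide -w
```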

VI. Additional Notes

  • Configure kubelet to ignore swap
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
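If swap is left enabled, the kubelet flag above is not enough on its own; kubeadm's own preflight check must also be skipped (same init flags as in section IV):

```shell
kubeadm init --kubernetes-version=v1.15.0 \
  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 \
  --ignore-preflight-errors=Swap
```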
  • Configure a proxy for Docker
# vim /usr/lib/systemd/system/docker.service
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8"
# systemctl daemon-reload
# systemctl restart docker
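To confirm the proxy settings took effect after the restart:

```shell
# The Environment= lines added to the unit should appear in the output.
systemctl show --property=Environment docker
```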