[Containerd Edition] Installing a Highly Available K8s Cluster (1.23+) with Kubeadm

Date: 2022-10-27 07:48:24


Basic Environment Configuration


Node Planning

Hostname            IP Address          Description
k8s-master01 ~ 03   10.0.0.201 ~ 203    master nodes * 3
k8s-master-lb       10.0.0.236          keepalived virtual IP
k8s-node01 ~ 02     10.0.0.204 ~ 205    worker nodes * 2

Network Segment Planning and Software Versions

Configuration       Value
OS version          CentOS 7.9
Docker version      20.10.x
Pod CIDR            172.16.0.0/12
Service CIDR        192.168.0.0/16

Basic Configuration

Configure hosts on all nodes by editing /etc/hosts as follows:

10.0.0.201 k8s-master01
10.0.0.202 k8s-master02
10.0.0.203 k8s-master03
10.0.0.236 k8s-master-lb # For a non-HA cluster, use Master01's IP here
10.0.0.204 k8s-node01
10.0.0.205 k8s-node02

Configure the yum repositories:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
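
After swapping the repo files, it can help to rebuild the yum metadata cache (optional):

yum clean all && yum makecache fast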

Install required tools:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y


Disable the firewall, SELinux, dnsmasq, and swap on all nodes:

systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
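
A quick sanity check that swap is fully off and stays off after reboot:

free -h    # the Swap line should show 0
grep swap /etc/fstab    # swap entries should now be commented out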

Time synchronization:

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
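
To install this cron entry non-interactively instead of editing with crontab -e, the standard idiom is:

(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -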

Configure limits on all nodes:

ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Configure passwordless SSH keys:

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
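
A quick loop to confirm key-based login works to every node:

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done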

Download all the installation source files:

cd /root/ ; git clone https://gitee.com/dukuan/k8s-ha-install.git

Upgrade the system on all nodes and reboot:

yum update -y --exclude=kernel* && reboot

Kernel Upgrade

Download the kernel packages:

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

Copy them from master01 to the other nodes:

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

Install the kernel on all nodes:

cd /root && yum localinstall -y kernel-ml*
# Change the default boot kernel on all nodes
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# Verify the default kernel is 4.19
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# Reboot all nodes, then verify the running kernel is 4.19
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Install ipvsadm on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the IPVS modules on all nodes:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# Enable loading at boot
systemctl enable --now systemd-modules-load.service

Configure the kernel parameters required by K8s on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

After rebooting, verify that the modules are still loaded:

reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack


Installing the K8s Components and Runtime

Installing Containerd

Install docker-ce 20.10 on all nodes (containerd is pulled in as a dependency):

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Configure the kernel modules required by Containerd (all nodes):

# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules on all nodes:

# modprobe -- overlay
# modprobe -- br_netfilter

Configure the kernel parameters required by Containerd on all nodes:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the kernel parameters on all nodes:

# sysctl --system

Generate Containerd's default configuration file on all nodes:

# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml

Switch Containerd's cgroup driver to systemd on all nodes:

# vim /etc/containerd/config.toml
Find the containerd.runtimes.runc.options section and add SystemdCgroup = true.

On all nodes, also change sandbox_image to a Pause image address appropriate for your setup, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
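
If you prefer to script both edits, the following sed commands are a minimal sketch, assuming the stock config generated above (which contains SystemdCgroup = false and a sandbox_image line); otherwise edit by hand as described:

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml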

Start Containerd on all nodes and enable it at boot:

# systemctl daemon-reload
# systemctl enable --now containerd

Configure the runtime endpoint for the crictl client on all nodes:

# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
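
To confirm that crictl can reach containerd through this endpoint:

# crictl info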

Installing the K8s Components

Install the latest 1.23 versions of kubeadm, kubelet, and kubectl on all nodes:

# yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y

Configure the kubelet to use Containerd as its runtime:

# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Enable the kubelet at boot:

# systemctl daemon-reload
# systemctl enable --now kubelet
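
Note that the kubelet will restart in a loop until the cluster is initialized, because its configuration file does not exist yet; this is expected at this stage. Its state can be inspected with:

# systemctl status kubelet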

High Availability Components

Install HAProxy and Keepalived via yum on all master nodes:

yum install keepalived haproxy -y

Configure HAProxy on all master nodes:

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 10.0.0.201:6443 check
  server k8s-master02 10.0.0.202:6443 check
  server k8s-master03 10.0.0.203:6443 check
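
The configuration file can be syntax-checked before the service is started:

haproxy -c -f /etc/haproxy/haproxy.cfg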

Configure Keepalived on all master nodes. Note that the configuration differs per node (state, mcast_src_ip, and priority).

Master01 configuration:

[root@k8s-master01 etc]# mkdir /etc/keepalived
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 10.0.0.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.236
    }
    track_script {
        chk_apiserver
    }
}

Master02 configuration:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 10.0.0.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.236
    }
    track_script {
        chk_apiserver
    }
}

Master03 configuration:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 10.0.0.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.0.0.236
    }
    track_script {
        chk_apiserver
    }
}

Create the Keepalived health-check script on all master nodes:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
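
Make the script executable so Keepalived can run it:

chmod +x /etc/keepalived/check_apiserver.sh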

Start HAProxy and Keepalived:

[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
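
Before initializing the cluster, verify the VIP is reachable (if ping fails, check the Keepalived service and logs; if the port test fails, check HAProxy):

ping -c 4 10.0.0.236
telnet 10.0.0.236 16443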


Cluster Initialization

Master01 Initialization

vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.201
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.0.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.0.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.1 # Change this to match the kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 192.168.0.0/16
scheduler: {}

Migrate the config file to the current kubeadm API version:

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

Copy new.yaml to the other master nodes:

for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done

Pre-pull the images on all master nodes:

kubeadm config images pull --config /root/new.yaml

Initialize the Master01 node:

kubeadm init --config /root/new.yaml  --upload-certs
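
If initialization fails, the node can be cleaned up and the init retried (standard kubeadm recovery steps, not specific to this guide):

kubeadm reset -f
rm -rf /etc/cni/net.d /root/.kube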

On success, output like the following is printed:

  kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
        --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
        --control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
        --discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94

Configure the environment variable on Master01 for accessing the Kubernetes cluster:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
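
Verify access to the control plane (the node will report NotReady until the Calico CNI plugin is installed later):

kubectl get nodes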

Adding Master Nodes

Join them using the join command produced by the init output above:

kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
--control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c
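
If the token or certificate key has expired (the token's default TTL is 24h, the uploaded certs last two hours), fresh join credentials can be generated on Master01:

kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs    # prints a new certificate key for control-plane joins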

Adding Worker Nodes

kubeadm join 10.0.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94

After all nodes are added, the result looks like this:

[Screenshot: node list after all nodes have joined]

Installing the Calico CNI Plugin

Run on master01 only:

cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x && cd calico/

Set the Pod CIDR:

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
kubectl apply -f calico.yaml

After creation, wait a few minutes and check the status:

[Screenshot: Calico pods in Running state]
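
The same status can be checked from the command line:

kubectl get po -n kube-system -o wide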

Deploying Metrics Server

Copy front-proxy-ca.crt from Master01 to all worker nodes:

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
# repeat for any additional worker nodes

Install metrics server:

cd /root/k8s-ha-install/kubeadm-metrics-server

# kubectl  create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the Metrics Server status:

[Screenshot: metrics-server pod in Running state]
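
For example (assuming the deployment carries the conventional k8s-app=metrics-server label):

kubectl get po -n kube-system -l k8s-app=metrics-server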

Once it is healthy, query the metrics:

# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   153m         3%     1701Mi          44%
k8s-master02   125m         3%     1693Mi          44%
k8s-master03   129m         3%     1590Mi          41%
k8s-node01     73m          1%     989Mi           25%
k8s-node02     64m          1%     950Mi           24%
# kubectl top po -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
kube-system   calico-kube-controllers-66686fdb54-74xkg   2m           17Mi
kube-system   calico-node-6gqpb                          21m          85Mi
kube-system   calico-node-bmvjt                          29m          76Mi
kube-system   calico-node-hdp9c                          15m          82Mi
kube-system   calico-node-wwrfv                          23m          86Mi
kube-system   calico-node-zzv88                          22m          84Mi
kube-system   calico-typha-67c6dc57d6-hj6l4              2m           23Mi
kube-system   calico-typha-67c6dc57d6-jm855              2m           22Mi
kube-system   coredns-7d89d9b6b8-sr6mf                   1m           16Mi
kube-system   coredns-7d89d9b6b8-xqwjk                   1m           16Mi
kube-system   etcd-k8s-master01                          24m          96Mi
kube-system   etcd-k8s-master02                          20m          91Mi
kube-system   etcd-k8s-master03                          21m          92Mi
kube-system   kube-apiserver-k8s-master01                41m          502Mi
kube-system   kube-apiserver-k8s-master02                35m          476Mi
kube-system   kube-apiserver-k8s-master03                71m          480Mi
kube-system   kube-controller-manager-k8s-master01       15m          65Mi
kube-system   kube-controller-manager-k8s-master02       1m           26Mi
kube-system   kube-controller-manager-k8s-master03       2m           27Mi
kube-system   kube-proxy-8lt45                           1m           18Mi
kube-system   kube-proxy-d6jfh                           1m           18Mi
kube-system   kube-proxy-hfnvz                           1m           19Mi
kube-system   kube-proxy-nsms8                           1m           18Mi
kube-system   kube-proxy-xmlhq                           3m           21Mi
kube-system   kube-scheduler-k8s-master01                2m           26Mi
kube-system   kube-scheduler-k8s-master02                2m           24Mi
kube-system   kube-scheduler-k8s-master03                2m           24Mi
kube-system   metrics-server-d54b585c4-4dqpf             46m          16Mi


Deploying the Dashboard

Installation

cd /root/k8s-ha-install/dashboard/

[root@k8s-master01 dashboard]# kubectl  create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Logging in to the Dashboard

Check the Dashboard's port:

# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
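
If the Service is of type ClusterIP rather than NodePort in your manifest, one way to expose it outside the cluster is:

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'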

Retrieve the admin token:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-r4vcp
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

The Dashboard can then be accessed via any host IP plus the port number:

[Screenshots: Dashboard login page and cluster overview]
