Kubernetes container cluster management: deploying the node components

Date: 2023-01-09 05:09:09

Send the configuration files to each node

[root@master ~]# scp /opt/kubernetes/cfg/*kubeconfig root@192.168.238.128:/opt/kubernetes/cfg/
bootstrap.kubeconfig 100% 1881 1.8KB/s 00:00
kube-proxy.kubeconfig 100% 5315 5.2KB/s 00:00
[root@master ~]# scp /opt/kubernetes/cfg/*kubeconfig root@192.168.238.129:/opt/kubernetes/cfg/
bootstrap.kubeconfig 100% 1881 1.8KB/s 00:00
kube-proxy.kubeconfig 100% 5315 5.2KB/s 00:00
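With more nodes the two scp commands above start to repeat; a small loop keeps the copy step in one place. A minimal sketch, assuming the IPs and paths used in this setup; `push_kubeconfigs` is a helper name of my own:

```shell
#!/bin/bash
# Copy the kubeconfig files to every node in one pass.
# The source/target paths match this deployment; adjust for your own.
push_kubeconfigs() {
    local node
    for node in "$@"; do
        scp /opt/kubernetes/cfg/*kubeconfig "root@${node}:/opt/kubernetes/cfg/"
    done
}

# Usage: push_kubeconfigs 192.168.238.128 192.168.238.129
```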

Deploy the node binaries

[root@node01 ~]# wget https://dl.k8s.io/v1.15.0/kubernetes-node-linux-amd64.tar.gz
[root@node01 ~]# tar -xf kubernetes-node-linux-amd64.tar.gz
[root@node01 bin]# pwd
/root/kubernetes/node/bin
[root@node01 bin]# ls
kubeadm kubectl kubelet kube-proxy
[root@node01 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@node01 bin]# chmod +x /opt/kubernetes/bin/*
[root@node01 bin]# cat kubelet.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.238.129"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@node01 bin]# sh kubelet.sh 192.168.238.129 10.10.10.2
The startup fails:
[root@node01 bin]# systemctl status kubelet
● kubelet.service - Kubernetes kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2019-07-09 08:42:39 CST; 5s ago
Process: 16005 ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS (code=exited, status=1/FAILURE)
Main PID: 16005 (code=exited, status=1/FAILURE)
Jul 09 08:42:39 node01 systemd[1]: kubelet.service: main process exited, code=exited, sta...URE
Jul 09 08:42:39 node01 systemd[1]: Unit kubelet.service entered failed state.
Jul 09 08:42:39 node01 systemd[1]: kubelet.service failed.
Jul 09 08:42:39 node01 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jul 09 08:42:39 node01 systemd[1]: Stopped Kubernetes kubelet.
Jul 09 08:42:39 node01 systemd[1]: start request repeated too quickly for kubelet.service
Jul 09 08:42:39 node01 systemd[1]: Failed to start Kubernetes kubelet.
Jul 09 08:42:39 node01 systemd[1]: Unit kubelet.service entered failed state.
Jul 09 08:42:39 node01 systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
The error log shows the cause: the kubelet-bootstrap user has no permission to create certificate signing requests, so the user must be bound to the system:node-bootstrapper cluster role. The fix is to run, on the master: kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
[root@node01 bin]# tail -n 50 /var/log/messages
Jul 9 08:42:39 localhost kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
The fix:
[root@master ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@master ssl]# scp bootstrap.kubeconfig root@192.168.238.129:/opt/kubernetes/cfg/
bootstrap.kubeconfig 100% 1881 1.8KB/s 00:00
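Before restarting the kubelet, it can be worth reading the binding back to confirm it points at the right role and user. A minimal sketch; `verify_bootstrap_binding` is a helper name of my own, while `kubectl get clusterrolebinding` and `-o jsonpath` are standard kubectl features:

```shell
#!/bin/bash
# Print the role and first subject bound by the kubelet-bootstrap
# clusterrolebinding; on the cluster configured above this should
# show system:node-bootstrapper and kubelet-bootstrap.
verify_bootstrap_binding() {
    kubectl get clusterrolebinding kubelet-bootstrap \
        -o jsonpath='{.roleRef.name} {.subjects[0].name}{"\n"}'
}
```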
Restart the kubelet:
[root@node01 bin]# systemctl start kubelet
[root@node01 bin]# systemctl status kubelet
● kubelet.service - Kubernetes kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 08:49:02 CST; 7s ago
Main PID: 16515 (kubelet)
Memory: 15.1M
CGroup: /system.slice/kubelet.service
└─16515 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.23...
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.587730 16515 controller.go:114] k...ler
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.587734 16515 controller.go:118] k...ags
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.720859 16515 mount_linux.go:210] ...emd
Jul 09 08:49:02 node01 kubelet[16515]: W0709 08:49:02.720943 16515 cni.go:171] Unable t...t.d
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.723626 16515 iptables.go:589] cou...ait
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.724943 16515 server.go:182] Versi....11
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.724968 16515 feature_gate.go:226]...[]}
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.725028 16515 plugins.go:101] No c...ed.
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.725035 16515 server.go:303] No cl... ""
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.725047 16515 bootstrap.go:58] Usi...ile
Hint: Some lines were ellipsized, use -l to show in full.
[root@node01 bin]# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=192.168.238.129 \
--hostname-override=192.168.238.129 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=10.10.10.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node01 bin]# cat proxy.sh
#!/bin/bash

NODE_ADDRESS=${1:-"192.168.238.129"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-proxy.service
systemctl status kube-proxy.service
systemctl enable kube-proxy.service
[root@node01 bin]# sh proxy.sh 192.168.238.129
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-07-15 05:50:58 CST; 19ms ago
Main PID: 10759 (kube-proxy)
Memory: 1.8M
CGroup: /system.slice/kube-proxy.service
Jul 15 05:50:58 node01 systemd[1]: kube-proxy.service holdoff time over, scheduling restart.
Jul 15 05:50:58 node01 systemd[1]: Stopped Kubernetes Proxy.
Jul 15 05:50:58 node01 systemd[1]: Started Kubernetes Proxy.
[root@node01 bin]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-07-15 05:50:58 CST; 16s ago
Main PID: 10759 (kube-proxy)
Memory: 25.2M
CGroup: /system.slice/kube-proxy.service
‣ 10759 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.238.129 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
Jul 15 05:51:05 node01 kube-proxy[10759]: I0715 05:51:05.178631 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:06 node01 kube-proxy[10759]: I0715 05:51:06.006411 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:07 node01 kube-proxy[10759]: I0715 05:51:07.186278 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:08 node01 kube-proxy[10759]: I0715 05:51:08.013543 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:09 node01 kube-proxy[10759]: I0715 05:51:09.192931 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:10 node01 kube-proxy[10759]: I0715 05:51:10.021327 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:11 node01 kube-proxy[10759]: I0715 05:51:11.199918 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:12 node01 kube-proxy[10759]: I0715 05:51:12.028701 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:13 node01 kube-proxy[10759]: I0715 05:51:13.207165 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:14 node01 kube-proxy[10759]: I0715 05:51:14.035887 10759 config.go:141] Calling handler.OnEndpointsUpdate
[root@node01 bin]# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.238.129 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

On the master, check the node CSR requests
[root@master bin]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k 8s kubelet-bootstrap Pending
node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8 47s kubelet-bootstrap Pending
Approve the node requests
[root@master bin]# kubectl certificate approve node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k
certificatesigningrequest.certificates.k8s.io/node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k approved
[root@master bin]# kubectl certificate approve node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8
certificatesigningrequest.certificates.k8s.io/node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8 approved
[root@master bin]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k 2m7s kubelet-bootstrap Approved,Issued
node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8 2m46s kubelet-bootstrap Approved,Issued
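Approving each CSR by name gets tedious as nodes are added; the pending ones can be picked out of `kubectl get csr` and approved in a loop. A minimal sketch; `approve_pending_csrs` is a helper name of my own, while `--no-headers` and `certificate approve` are the standard kubectl forms used above:

```shell
#!/bin/bash
# Approve every CSR whose CONDITION column is still Pending.
approve_pending_csrs() {
    kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' |
    while read -r csr; do
        kubectl certificate approve "$csr"
    done
}
```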
Check node status
[root@master bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.238.128 Ready <none> 100s v1.9.11
192.168.238.129 Ready <none> 110s v1.9.11
Check cluster component status
[root@master bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
controller-manager Healthy ok
View the automatically issued node certificates
[root@node01 ~]# ls /opt/kubernetes/ssl/kubelet-client.*
/opt/kubernetes/ssl/kubelet-client.crt /opt/kubernetes/ssl/kubelet-client.key
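The issued certificate can be inspected with openssl to confirm who it identifies and when it expires. A minimal sketch; the path is the one shown above, `check_cert` is a helper name of my own, and `-noout -subject -dates` are standard `openssl x509` options:

```shell
#!/bin/bash
# Show a client certificate's subject and validity window.
# Usage: check_cert /opt/kubernetes/ssl/kubelet-client.crt
check_cert() {
    openssl x509 -in "$1" -noout -subject -dates
}
```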

Repeat the same steps on node 2.

Delete a single node's CSR

kubectl delete csr <csr-name>

Delete all CSRs

kubectl delete csr --all

Delete a joined node

kubectl delete nodes <node-name>

Delete all nodes

kubectl delete nodes --all
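On a live cluster, deleting a node outright evicts none of its pods; it is safer to drain it first. A minimal sketch; `remove_node` is a helper name of my own, while `kubectl drain` with `--ignore-daemonsets` and `--delete-local-data` are the standard flags for this version range:

```shell
#!/bin/bash
# Evict a node's pods (cordoning it in the process), then remove it.
# Usage: remove_node 192.168.238.129
remove_node() {
    kubectl drain "$1" --ignore-daemonsets --delete-local-data &&
    kubectl delete node "$1"
}
```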