calico for kubernetes

Posted: 2022-09-10 13:18:04
(This post contains many errors; do not use it as a reference!)
Reference URL:
https://github.com/projectcalico/calico-docker/blob/master/docs/kubernetes/KubernetesIntegration.md
 
I have 3 hosts: 10.11.151.97, 10.11.151.100, and 10.11.151.101. Unfortunately, none of the three hosts has internet access. Following the guide, I built the Kubernetes cluster in 'bash command' mode rather than the 'service mode' described in the reference.
10.11.151.97 is the Kubernetes master; the other two are its nodes.
 

1, Run Etcd Cluster

etcd_token=kb3-etcd-cluster
local_name=kbetcd0
local_ip=10.11.151.97
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.11.151.100
node1_port=4010
node2_name=kbetcd2
node2_ip=10.11.151.101
node2_port=4010

./etcd -name $local_name \
-initial-advertise-peer-urls http://$local_ip:$local_peer_port \
-listen-peer-urls http://0.0.0.0:$local_peer_port \
-listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
-advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
-initial-cluster-token $etcd_token \
-initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
-initial-cluster-state new &

  

Run etcd with this command on each host (adjusting the local_* and node*_ variables for that host), since etcd must run in cluster mode. If it succeeds, you should see output like 'published {Name: ...} to cluster ...'.
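As a quick sanity check (my own addition, assuming these etcd builds expose the v2 HTTP API on the client ports, which etcd 2.x does), list the cluster members from any host; all three members should appear with their peer and client URLs:

# Query the v2 membership API on a local client port
curl http://127.0.0.1:4011/v2/members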
 

2, Setup Master

2.1 Start Kubernetes

Run kube-apiserver:
./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 > apiserver.out 2>&1 &

Run kube-controller-manager:
./kube-controller-manager --logtostderr=true --v=0 --master=http://tc-151-97:8080 --cloud-provider="" > controller.out 2>&1 &

Run kube-scheduler:
./kube-scheduler --logtostderr=true --v=0 --master=http://tc-151-97:8080 > scheduler.out 2>&1 &

Note the redirection order: with --logtostderr=true all log output goes to stderr, so each command must end with '> file 2>&1' rather than '2>&1 > file', otherwise the logs stay on the terminal instead of landing in the file.
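To confirm the master components came up, a check I would add (my own sketch; '-s' just points kubectl at the insecure API port):

# The health endpoint should return "ok"
curl http://127.0.0.1:8080/healthz

# Scheduler and controller-manager should report Healthy
./kubectl -s http://tc-151-97:8080 get componentstatuses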

2.2 Install Calico on the Master

sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
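This starts the calico/node container. To verify it is running (same ETCD_AUTHORITY as above):

docker ps | grep calico-node
sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status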

  

3, Setup Nodes

3.1 Install Calico

Since the nodes have no internet access, I downloaded the Calico plugin manually from:
https://github.com/projectcalico/calico-kubernetes/releases/tag/v0.6.0

Move the plugin to the Kubernetes plugin directory:

sudo mv calico_kubernetes /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico
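The kubelet runs this as an exec plugin, so the file must be executable; it is worth making sure of that (my own precaution, using the path from the mv above):

sudo chmod +x /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico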

Start Calico:

sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node

3.2 Start kubelet with Calico networking

Start kube-proxy, then start the kubelet with the --network-plugin parameter:
./kube-proxy --logtostderr=true --v=0 --master=http://tc-151-97:8080 --proxy-mode=iptables &
./kubelet --logtostderr=true --v=0 --api_servers=http://tc-151-97:8080 --address=0.0.0.0 --network-plugin=calico --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &
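One detail worth double-checking (an assumption of mine, based on the exec plugin inheriting its environment from the kubelet): ETCD_AUTHORITY may need to be set when launching the kubelet so the Calico plugin can reach etcd, for example:

# Same kubelet command, with etcd's client endpoint in the plugin's environment
ETCD_AUTHORITY=127.0.0.1:4011 ./kubelet --logtostderr=true --v=0 --api_servers=http://tc-151-97:8080 --address=0.0.0.0 --network-plugin=calico --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &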

Here is the kubelet command output:

I1124 15:11:52.226324 28368 server.go:808] Watching apiserver
I1124 15:11:52.393448 28368 plugins.go:56] Registering credential provider: .dockercfg
I1124 15:11:52.398087 28368 server.go:770] Started kubelet
E1124 15:11:52.398190 28368 kubelet.go:756] Image garbage collection failed: unable to find data for container /
I1124 15:11:52.398165 28368 server.go:72] Starting to listen on 0.0.0.0:10250
W1124 15:11:52.401695 28368 kubelet.go:775] Failed to move Kubelet to container "/kubelet": write /sys/fs/cgroup/memory/kubelet/memory.swappiness: invalid argument
I1124 15:11:52.401748 28368 kubelet.go:777] Running in container "/kubelet"
I1124 15:11:52.497377 28368 factory.go:194] System is using systemd
I1124 15:11:52.610946 28368 kubelet.go:885] Node tc-151-100 was previously registered
I1124 15:11:52.734788 28368 factory.go:236] Registering Docker factory
I1124 15:11:52.735851 28368 factory.go:93] Registering Raw factory
I1124 15:11:52.969060 28368 manager.go:1006] Started watching for new ooms in manager
I1124 15:11:52.969114 28368 oomparser.go:199] OOM parser using kernel log file: "/var/log/messages"
I1124 15:11:52.970296 28368 manager.go:250] Starting recovery of all containers
I1124 15:11:53.148967 28368 manager.go:255] Recovery completed
I1124 15:11:53.240408 28368 manager.go:104] Starting to sync pod status with apiserver
I1124 15:11:53.240439 28368 kubelet.go:1953] Starting kubelet main sync loop.

  

I do not know whether the kubelet is running correctly. Can someone tell me how to verify it?
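One way I would check (my own suggestion, not from the guide): ask the API server whether the node registered and is Ready. The 'Node tc-151-100 was previously registered' line above suggests registration itself worked.

# On the master: the node should show status Ready
./kubectl -s http://tc-151-97:8080 get nodes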
 
I repeated the same process on the other node.
 

4, Create Some Pods and Test

The test.yaml file defines four single-replica ReplicationControllers: test-1 and test-2 are pinned to tc-151-100 via nodeSelector, test-3 and test-4 to tc-151-101.

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101

Create the pods:

./kubectl create -f test.yaml

This command creates 4 pods: 2 on 10.11.151.100 and 2 on 10.11.151.101.
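To see which node and pod IP each pod actually received (my own check; '-o wide' adds those columns, if this kubectl supports it):

./kubectl get pods -o wide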

[@tc_151_97 /home/domeos/openxxs/bin]# ./kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
test-1-1ztr2   1/1       Running   0          5m
test-2-8p2sr   1/1       Running   0          5m
test-3-1hkwa   1/1       Running   0          5m
test-4-jbdbq   1/1       Running   0          5m

  

[@tc-151-100 /home/domeos/openxxs/bin]# docker ps
CONTAINER ID   IMAGE                                       COMMAND               CREATED          STATUS           PORTS   NAMES
6dfc83ec1d12   10.11.150.76:5000/openxxs/iperf:1.2         "/block"              6 minutes ago    Up 6 minutes             k8s_iperf.a4ede594_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_ca4496d0
78087a93da00   10.11.150.76:5000/openxxs/iperf:1.2         "/block"              6 minutes ago    Up 6 minutes             k8s_iperf.a4ede594_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_330d815c
f80a1474f4c4   10.11.150.76:5000/kubernetes/pause:latest   "/pause"              6 minutes ago    Up 6 minutes             k8s_POD.34f4dfd2_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_af7199c0
eb14879757e6   10.11.150.76:5000/kubernetes/pause:latest   "/pause"              6 minutes ago    Up 6 minutes             k8s_POD.34f4dfd2_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_af2cc1c3
8accff535ff9   calico/node:latest                          "/sbin/start_runit"   27 minutes ago   Up 27 minutes            calico-node

On node 10.11.151.100, the Calico status:
[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 24 minutes
Running felix version 1.2.0

IPv4 BGP status
+---------------+-------------------+-------+----------+------------------------------------+
| Peer address  | Peer type         | State | Since    | Info                               |
+---------------+-------------------+-------+----------+------------------------------------+
| 10.11.151.101 | node-to-node mesh | start | 07:18:44 | Connect Socket: Connection refused |
| 10.11.151.97  | node-to-node mesh | start | 07:07:40 | Active Socket: Connection refused  |
+---------------+-------------------+-------+----------+------------------------------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
However, on the other node, 10.11.151.101:
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 2 minutes
Running felix version 1.2.0

IPv4 BGP status
Unable to connect to server control socket (/etc/service/bird/bird.ctl): Connection refused

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+

What happened?
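A first debugging step I would try (my own guess, not from the guide): 'Connection refused' on both peerings suggests BIRD is not listening on the BGP port, so check whether TCP 179 is reachable between the nodes and whether BIRD is actually running inside the calico-node container:

# From 10.11.151.100: can we reach the peer's BGP port? (0 means yes)
nc -z -w 2 10.11.151.101 179; echo $?

# Is BIRD running inside the calico-node container? (assumes ps exists in the image)
docker exec calico-node sh -c 'ps | grep bird'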

 
Moreover, there are no Calico IP routes on either node:
[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
There is no log output in /var/log/calico/kubernetes/calico.log.
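Since the plugin log is empty, another check worth making (my own suggestion, using the etcd v2 keys API that these builds expose) is whether Calico has written anything under its /calico prefix in etcd at all:

curl -s 'http://127.0.0.1:4011/v2/keys/calico?recursive=true'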
 
 
 
 
 
 
