Study Notes (2): LVS Configuration for the Lab Cluster

Date: 2022-12-20 20:36:15

Check the NIC information on the management node mgt, then set up the VIP on mgt:

[root@mgt ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 5C:F3:FC:E9:...
          inet addr:192.168.253.100  Bcast:192.168.253.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST
          RX bytes: ... (1.9 MiB)  TX bytes: ... (4.0 MiB)

eth1      Link encap:Ethernet  HWaddr 5C:F3:FC:E9:...:7A
          inet addr:172.20.0.1  Bcast:172.20.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST
          RX bytes: ... (1.5 MiB)  TX bytes: ... (12.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING
          RX bytes: ... (765.9 MiB)  TX bytes: ... (765.9 MiB)

The mgt node has two NICs: eth0 carries the external IP, used to communicate with the other machines on the LAN, while eth1 carries the internal IP, used to communicate with the compute nodes. To use mgt as the LVS Director Server, a virtual IP (VIP) must be configured on it. Note: to change the IP address of eth0 (other NIC parameters are edited similarly), run:

[root@mgt ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
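For reference, a minimal static ifcfg-eth0 for this kind of setup might look like the sketch below. The DEVICE, IPADDR, and NETMASK values mirror the addresses shown above; BOOTPROTO, ONBOOT, and the gateway are assumptions (the gateway matches the default route that appears in the routing table later).

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative sketch, not the actual file
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.253.100
NETMASK=255.255.255.0
GATEWAY=192.168.253.254   # matches the default route on mgt
```

Changes to this file take effect after a `service network restart`.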

(1) Create a script directorserver.sh with the following contents:

#!/bin/bash
setenforce 0                     # put SELinux in permissive mode
VIP=192.168.253.110
# bind the VIP to the alias interface eth0:0 with a /32 mask
/sbin/ifconfig eth0:0 ${VIP} broadcast ${VIP} netmask 255.255.255.255 up
/sbin/route add -host ${VIP} dev eth0:0
sysctl -p

Checking the NIC information again now shows, in addition to the existing eth0, eth1, and lo, a new alias eth0:0 holding the virtual IP:

[root@mgt zmq]# ifconfig
eth0:0    Link encap:Ethernet  HWaddr 5C:F3:FC:E9:...
          inet addr:192.168.253.110  Bcast:192.168.253.110  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST

The routing table now contains a new host route for 192.168.253.110 on device eth0:

[root@mgt zmq]# route
Kernel IP routing table
Destination      Gateway          Genmask          Flags Iface
192.168.253.110  *                255.255.255.255  UH    eth0    // newly added
172.20.0.0       *                255.255.255.0    U     eth1
192.168.253.0    *                255.255.255.0    U     eth0
link-local       *                255.255.0.0      U     eth0
link-local       *                255.255.0.0      U     eth1
default          192.168.253.254  0.0.0.0          UG    eth0

Enable packet forwarding on the Director Server node:

[root@mgt zmq]# echo "1" > /proc/sys/net/ipv4/ip_forward
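Writing to /proc only lasts until the next reboot. To make forwarding persistent, the usual approach on a system of this era (an addition here, not part of the original notes) is to set the key in /etc/sysctl.conf as well:

```shell
# one-shot, equivalent to the echo above:
sysctl -w net.ipv4.ip_forward=1
# persistent: ensure the line "net.ipv4.ip_forward = 1" exists in
# /etc/sysctl.conf, then reload it:
sysctl -p
```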

(2) Configure the VIP on the compute nodes. The compute nodes are the cluster's real servers, and the VIP is bound to the loopback interface of each node. Taking node01 as an example:

[root@node01 ~]# ifconfig
// eth0 carries the external IP
eth0      Link encap:Ethernet  HWaddr 5C:F3:FC:E9:...
          inet addr:192.168.253.101  Bcast:192.168.253.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST
          RX bytes: ... (42.8 MiB)  TX bytes: ... (9.6 MiB)

// the virtual interface eth0:0 carries the internal IP
eth0:0    Link encap:Ethernet  HWaddr 5C:F3:FC:E9:...
          inet addr:172.20.0.11  Bcast:172.20.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING
          RX bytes: ... (340.7 KiB)  TX bytes: ... (340.7 KiB)

Run the script realserver.sh, whose contents are:

#!/bin/bash
setenforce 0                     # put SELinux in permissive mode
VIP=192.168.253.110
# bind the VIP to the loopback alias lo:0 with a /32 mask
/sbin/ifconfig lo:0 ${VIP} broadcast ${VIP} netmask 255.255.255.255 up
/sbin/route add -host ${VIP} dev lo:0
# suppress ARP replies and announcements for the VIP so that only the
# Director answers ARP requests for it (standard LVS-DR settings)
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p
[root@node01 zmq]# ./realserver.sh
setenforce: SELinux is disabled
net.ipv4.ip_forward = ...
net.ipv4.conf.default.rp_filter = ...
net.ipv4.conf.default.accept_source_route = ...
kernel.sysrq = ...
kernel.core_uses_pid = ...
net.ipv4.tcp_syncookies = ...
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = ...
kernel.msgmax = ...
kernel.shmmax = ...
kernel.shmall = ...

Check the NIC information once more:

[root@node01 zmq]# ifconfig
// a new entry has appeared
lo:0      Link encap:Local Loopback
          inet addr:192.168.253.110  Mask:255.255.255.255
          UP LOOPBACK RUNNING

Repeat the steps above on every compute node. Note: this configuration is lost whenever the network service is restarted (command: service network restart).
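Since the VIP and ARP settings vanish on a network restart, one common workaround (an assumption here, not part of the original procedure) is to re-run the script at boot, for example from rc.local:

```shell
# appended to /etc/rc.d/rc.local (CentOS 6 style) to re-apply the
# real-server VIP at boot; the path to realserver.sh is hypothetical
/root/realserver.sh
```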

(3) Configure the Director Server by running the script ipvsadm.sh, whose contents are:

#!/bin/bash
op=$1                                  # pass "test" to also configure the test service
# service ports and per-host weights (values not shown in the original notes)
MAP_PORT=
WEB_PORT=
AL_PORT=
TEST_PORT=
LVS_SERVER_VIP=192.168.253.110
MODE=wrr                               # weighted round-robin scheduling
hosts=(192.168.253.101 192.168.253.102 192.168.253.103 192.168.253.104 192.168.253.105)
WMap=( )
WWeb=( )
WAl=( )
WTest=( )
ipvsadm -C                             # clear any existing virtual server table
# create one virtual service per port on the VIP
ipvsadm -A -t ${LVS_SERVER_VIP}:${MAP_PORT} -s ${MODE}
ipvsadm -A -t ${LVS_SERVER_VIP}:${WEB_PORT} -s ${MODE}
ipvsadm -A -t ${LVS_SERVER_VIP}:${AL_PORT} -s ${MODE}
if [ "$op" == "test" ]; then
    ipvsadm -A -t ${LVS_SERVER_VIP}:${TEST_PORT} -s ${MODE}
fi
# register every real server for every service (-g = direct routing mode)
i=0
while [ $i -lt ${#hosts[@]} ]; do
    ipvsadm -a -t ${LVS_SERVER_VIP}:${MAP_PORT} -r ${hosts[$i]}:${MAP_PORT} -w ${WMap[$i]} -g
    ipvsadm -a -t ${LVS_SERVER_VIP}:${WEB_PORT} -r ${hosts[$i]}:${WEB_PORT} -w ${WWeb[$i]} -g
    ipvsadm -a -t ${LVS_SERVER_VIP}:${AL_PORT} -r ${hosts[$i]}:${AL_PORT} -w ${WAl[$i]} -g
    if [ "$op" == "test" ]; then
        ipvsadm -a -t ${LVS_SERVER_VIP}:${TEST_PORT} -r ${hosts[$i]}:${TEST_PORT} -w ${WTest[$i]} -g
    fi
    i=$(( $i + 1 ))
done
ipvsadm -Ln                            # list the resulting table

The result:

[root@mgt zmq]# ./ipvsadm.sh
IP Virtual Server version 1.2. (size=...)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.253.110:... wrr
  -> 192.168.253.101:...          Route
  -> 192.168.253.102:...          Route
  -> 192.168.253.103:...          Route
  -> 192.168.253.104:...          Route
  -> 192.168.253.105:...          Route
TCP  192.168.253.110:... wrr
  -> 192.168.253.101:...          Route
  -> 192.168.253.102:...          Route
  -> 192.168.253.103:...          Route
  -> 192.168.253.104:...          Route
  -> 192.168.253.105:...          Route
TCP  192.168.253.110:... wrr
  -> 192.168.253.101:...          Route
  -> 192.168.253.102:...          Route
  -> 192.168.253.103:...          Route
  -> 192.168.253.104:...          Route
  -> 192.168.253.105:...          Route
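The `-s wrr` scheduler used above distributes requests to the real servers in proportion to their `-w` weights. The toy sketch below illustrates the proportions only (the kernel's actual wrr scheduler interleaves servers more smoothly, and the hosts and weights here are hypothetical, since the weight arrays in the original script were left blank):

```shell
#!/bin/bash
# Toy illustration of weighted round-robin: each real server appears in the
# rotation weight-many times, so it receives a proportional share of requests.
wrr_pick() {    # wrr_pick REQUEST_INDEX -> echoes the chosen server
  local hosts=(192.168.253.101 192.168.253.102 192.168.253.103)
  local weights=(3 2 1)                # hypothetical weights 3:2:1
  local expanded=() i k
  for i in "${!hosts[@]}"; do
    for ((k = 0; k < weights[i]; k++)); do
      expanded+=("${hosts[$i]}")       # host repeated weight-many times
    done
  done
  echo "${expanded[$(( $1 % ${#expanded[@]} ))]}"
}

# Requests 0..5 cycle through the expanded list: .101 gets 3 of every 6.
for r in 0 1 2 3 4 5; do
  echo "request $r -> $(wrr_pick "$r")"
done
```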

(4) Run the script lvsstatus.sh to check the LVS forwarding state; the script is:

#!/bin/bash
# pipe the account password ("geohpc") to sudo -S so the command runs non-interactively
echo "geohpc" | /usr/bin/sudo -S ipvsadm -L

The result (original screenshot not reproduced here) showed 1 plotting request distributed on port 9527 and 5 algorithm-computation requests distributed on port 35569.
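For continuous monitoring, the same table can be polled periodically; the 2-second interval below is arbitrary:

```shell
# refresh the LVS connection table every 2 seconds; --stats adds packet/byte counters
watch -n 2 'ipvsadm -Ln --stats'
```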
