4. Highly available nginx load balancing with keepalived

Posted: 2023-03-09 18:08:35

keepalived health checkers:

HTTP_GET        //keepalived checks backend real server health with an HTTP GET
SSL_GET (https) //the same check when the backend speaks HTTPS
TCP_CHECK

The demonstration below uses TCP_CHECK for health checking.

# man keepalived    //see the TCP_CHECK configuration section

# TCP healthchecker
TCP_CHECK
{
# ======== generic connection options
# Optional IP address to connect to.
# The default is the realserver IP     //defaults to the real server's IP
connect_ip <IP ADDRESS>     //optional, may be omitted
# Optional port to connect to
# The default is the realserver port
connect_port <PORT>         //optional, may be omitted
# Optional interface to use to
# originate the connection
bindto <IP ADDRESS>
# Optional source port to
# originate the connection from
bind_port <PORT>
# Optional connection timeout in seconds.
# The default is 5 seconds
connect_timeout <INTEGER>
# Optional fwmark to mark all outgoing
# checker packets with
fwmark <INTEGER>

# Optional random delay to start the initial check
# for maximum N seconds.
# Useful to scatter multiple simultaneous
# checks to the same RS. Enabled by default, with
# the maximum at delay_loop. Specify 0 to disable
warmup <INT>
# Retry count to make additional checks if check
# of an alive server fails. Default: 1
retry <INT>
# Delay in seconds before retrying. Default: 1
delay_before_retry <INT>
} #TCP_CHECK

# cd /etc/keepalived

# vim keepalived.conf   //configure this on both keepalived nodes

 virtual_server 192.168.184.150 80 {    //the VIP and port are given together on this line
     delay_loop 6
     lb_algo wrr
     lb_kind DR
     net_mask 255.255.0.0
     protocol TCP
     sorry_server 127.0.0.1 80          //used when all real servers are down

     real_server 192.168.184.143 80 {
         weight 1
         TCP_CHECK {
             connect_timeout 3
         }
     }

     real_server 192.168.184.144 80 {
         weight 1
         TCP_CHECK {
             connect_timeout 3
         }
     }
 }

# systemctl restart keepalived

# systemctl status keepalived

 ● keepalived.service - LVS and VRRP High Availability Monitor
    Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
    Active: active (running) since Thu -- :: CST; 1min 32s ago
   Process: ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
  Main PID: (keepalived)
    CGroup: /system.slice/keepalived.service
            ├─ /usr/sbin/keepalived -D
            ├─ /usr/sbin/keepalived -D
            └─ /usr/sbin/keepalived -D

 Dec :: node1 Keepalived_healthcheckers[]: Check on service [192.168.184.144]: failed after retry.
 Dec :: node1 Keepalived_healthcheckers[]: Removing service [192.168.184.144]: from VS [192.168.184.150]:
 Dec :: node1 Keepalived_healthcheckers[]: Remote SMTP server [127.0.0.1]: connected.
 Dec :: node1 Keepalived_healthcheckers[]: SMTP alert successfully sent.
 Dec :: node1 Keepalived_vrrp[]: Sending gratuitous ARP on eth0 for 192.168.184.150
 Dec :: node1 Keepalived_vrrp[]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 192.168.184.150
 Dec :: node1 Keepalived_vrrp[]: Sending gratuitous ARP on eth0 for 192.168.184.150
 Dec :: node1 Keepalived_vrrp[]: Sending gratuitous ARP on eth0 for 192.168.184.150
 Dec :: node1 Keepalived_vrrp[]: Sending gratuitous ARP on eth0 for 192.168.184.150
 Dec :: node1 Keepalived_vrrp[]: Sending gratuitous ARP on eth0 for 192.168.184.150    //gratuitous ARP broadcasts that the VIP has been added

You have new mail in /var/spool/mail/root

If both backend real servers are online, the current state can be checked:
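
One way to inspect the state yourself (assuming the ipvsadm tool is installed):

 # ipvsadm -Ln    //list the LVS virtual service table; both real servers should appear under the VIP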

At this point requests are answered perfectly.

Stop the httpd service on backend host 143 to simulate a failure:
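
The exact command (assumed from context), run on 192.168.184.143:

 # systemctl stop httpd    //take one backend out of service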

If both backend hosts go offline, keepalived activates the sorry_server.

Note: both backend real servers must have the VIP configured, otherwise they cannot return responses. See post 3 of this keepalived series for details.
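
For reference, the VIP on each real server sits on a loopback alias (this mirrors the script shown further below):

 # ifconfig lo:0 192.168.184.150 netmask 255.255.255.255 broadcast 192.168.184.150 up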

Next, load balance a standalone service program, taking nginx as the example.

HA Services:

nginx

To build this example, start from the configuration in post 3 of this series and make the following changes.

While the VIP is still configured on the two real servers, the host route for it is present:
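
One way to see it (the original screenshot showed the routing table):

 # route -n    //a host route for 192.168.184.150 on lo should be listed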

First modify the two backend real servers. They currently carry the VIP, so remove it first.

# ifconfig lo:0 down   //note: bring it down on both real servers

The two real servers also have arp_announce and arp_ignore tuned; both must be restored to their original values. The script written earlier handles this; just run it:

 #!/bin/bash
 #
 vip=192.168.184.150

 case $1 in
 start)
     echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
     echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
     echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
     echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
     ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up
     route add -host $vip dev lo:0
     ;;
 stop)
     ifconfig lo:0 down
     echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
     echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
     echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
     echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_announce
     ;;
 esac

# bash -x set.sh stop

# cat /proc/sys/net/ipv4/conf/all/arp_ignore    //verify the change took effect

After the steps above, only the web service is kept on the two real servers.

Next, modify the two directors.

# systemctl stop keepalived   //first stop keepalived on both nodes

# ip addr del 192.168.184.150/32 dev eth0    //remove the VIP if it is still bound to eth0

Since nginx is to do the load balancing, install it next.

# yum install nginx   //if your repos don't carry it, configure the official yum repository from http://nginx.org/en/linux_packages.html#stable
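
The repo file from that page looks roughly like this (copied from the nginx.org instructions; check the page above for the current version):

 # vim /etc/yum.repos.d/nginx.repo
 [nginx-stable]
 name=nginx stable repo
 baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
 gpgcheck=1
 enabled=1
 gpgkey=https://nginx.org/keys/nginx_signing.key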

Next, configure nginx as a reverse proxy that passes user requests to the backend upstream group.

# vim /etc/nginx/nginx.conf   //in the main config file, add an upstream group inside the http block

 http {
     include       /etc/nginx/mime.types;
     default_type  application/octet-stream;

     log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for"';

     upstream webservers {                    # define the upstream group
         server 192.168.184.143:80 weight=1;  # the port can be omitted when no port mapping is used; weight sets the weight; nginx checks the health of upstream members itself, and the default balancing algorithm is rr
         server 192.168.184.144:80 weight=1;
     }

     server {                                 # the server block from the previous post, modified to act as a proxy
         location / {
             proxy_pass http://webservers/;
         }
     }

     access_log  /var/log/nginx/access.log  main;

     sendfile        on;
     #tcp_nopush     on;

     keepalive_timeout  65;

     #gzip  on;

     include /etc/nginx/conf.d/*.conf;
 }
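
Before starting, the configuration can be validated with nginx's built-in syntax check:

 # nginx -t    //test the configuration file and report any syntax errors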

# systemctl start nginx   //start nginx; 192.168.184.141 now acts as the proxy and performs load balancing

The other node, 192.168.184.142, needs the same file:

# scp nginx.conf node2:/etc/nginx    //copy the same config to node2; after starting the service there it works just as well

Next, configure keepalived to provide high availability for nginx. Simply run keepalived on both hosts for HA, let nginx act as the proxy service, and configure a VIP as the users' access point: whichever host holds the VIP is the master. One subtlety: when the two keepalived nodes probe each other's heartbeat locally, they may use private addresses, while the VIP is the public-facing address that receives outside traffic, so the host whose interface carries the VIP is the active load balancer.

ipvs works by installing kernel rules, but nginx is a userspace service, so socket contention is possible: nginx and httpd, for example, may compete for port 80. A script is therefore needed to monitor the nginx service; if nginx is not alive, the priority of its host is lowered.
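
For illustration (the priorities here are assumptions consistent with the configuration below): if the master runs at priority 100 and the backup at 99, then when chk_nginx fails on the master, weight -10 gives it an effective priority of 100 - 10 = 90 < 99, so the backup wins the next election and takes over the VIP; once nginx recovers, the master returns to 100 and preempts the VIP back.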

How does keepalived detect whether nginx is alive?

# yum -y install psmisc  //provides killall; see https://www.banaspati.net/centos/how-to-install-killall-command-on-centos-7.html

# killall -0 nginx    //when nginx is running, this command prints nothing

# echo $?   //prints 0: the previous command succeeded

# killall -0 nginx     //if nginx is not running, there is a message:
  nginx: no process found

# echo $?  //prints 1: the previous command failed

So this test can serve as the criterion for whether the nginx service is running.

# vim /etc/keepalived/keepalived.conf   //apply this on both keepalived HA nodes

 ! Configuration File for keepalived

 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from kaadmin@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     router_id LVS_DEVEL
     vrrp_mcast_group4 224.0.1.118
     # vrrp_skip_check_adv_addr
     # vrrp_strict
     # vrrp_garp_interval 0
     # vrrp_gna_interval 0
 }

 vrrp_script chk_mt {                          # this script is rarely used
     script "/etc/keepalived/down.sh"
     interval 1
     weight -10
 }

 vrrp_script chk_nginx {                       # add this block to judge whether nginx is alive
     script "killall -0 nginx &>/dev/null"     # the command exits with 0 or 1
     interval 1                                # run the check every 1 second
     weight -10                                # on failure, subtract 10 from this host's priority
 }

 vrrp_instance VI_1 {
     state MASTER
     interface eth0
     virtual_router_id 51
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         192.168.184.150
     }
     track_script {     # run the check from this instance and treat its result as the test of whether nginx is alive; if nginx is down, this host's priority drops by 10, so only a host with nginx running can stay master
         chk_nginx
     }
     notify_master "/etc/keepalived/notify.sh master"
     notify_backup "/etc/keepalived/notify.sh backup"
     notify_fault "/etc/keepalived/notify.sh fault"
 }

If the command script "killall -0 nginx &>/dev/null" is written directly into the configuration file, keepalived cannot execute it after startup, so the command has to live in a separate script whose path is referenced instead: script "/etc/keepalived/chk.sh"

# vim /etc/keepalived/chk.sh

 #!/bin/bash
 #
 killall -0 nginx &>/dev/null
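
If keepalived still refuses to run it, check that the script is executable (a step not shown in the original post, so treat it as an assumption):

 # chmod +x /etc/keepalived/chk.sh    //make the check script executable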

# systemctl status nginx       //also make sure nginx is running on both keepalived hosts

# systemctl start keepalived   //start keepalived on both proxy servers (141 and 142)

Now either HA host, 192.168.184.141 or 192.168.184.142, can use the nginx proxy to distribute requests across the backend server group.

# systemctl status keepalived   //after starting keepalived on the MASTER (141)

# systemctl status keepalived   //after starting keepalived on the BACKUP (142)

The status output revealed script-security warnings; the fix:

https://github.com/acassen/keepalived/issues/901

https://warlord0blog.wordpress.com/2018/05/15/nginx-and-keepalived/

# useradd -g users -M keepalived_script    //create the unprivileged keepalived_script user that keepalived expects for running check scripts

Also, when checking keepalived's status on host 2 (192.168.184.142), configured the same way as host 1 with killall -0 nginx &>/dev/null moved into its own script, executing /etc/keepalived/chk.sh kept returning exit status 1:

Since the cause could not be pinned down, this experiment swapped the command in chk.sh for a process-count test (the threshold below is assumed):

# vim /etc/keepalived/chk.sh

 #!/bin/bash
 #
 #killall -0 nginx &>/dev/null
 [ "`ps aux | grep nginx | wc -l`" -ge 2 ]    # true while nginx processes exist (the grep itself contributes one line)

With this, the keepalived check on the BACKUP returns status 0.

The proper fix for this situation: https://github.com/acassen/keepalived/issues/901

 ! Configuration File for keepalived

 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from kaadmin@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     router_id LVS_DEVEL
     vrrp_mcast_group4 224.0.1.118
     vrrp_skip_check_adv_addr
     enable_script_security    # adding this line is the fix
     vrrp_strict
     vrrp_garp_interval 0
     vrrp_gna_interval 0
 }

# systemctl status keepalived    //the BACKUP now behaves normally

Check the addresses on the master host (192.168.184.141): the VIP is bound there.
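
One way to verify (the same command is used again further below):

 # ip addr list    //192.168.184.150 should appear on eth0 of the master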

Now access the VIP in a browser:
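
Or from any client with curl (a quick client-side check; with round robin and equal weights, the two backend pages should alternate):

 # curl http://192.168.184.150/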

Next, take the nginx service on node1 offline and watch whether the VIP automatically moves to node2:

# systemctl stop nginx

# ip addr list    //the VIP is no longer on node1

# systemctl status keepalived    //keepalived on node1 logs the failed check and the state transition

Visiting from the browser again, node2 still distributes user requests to the backend upstream group.

In fact, as the HA backup node, a host would normally not run nginx at all; nginx is started only when a host becomes MASTER.

One question: with two keepalived nodes providing HA for the nginx proxy, should the backup node's nginx proxy be stopped while the master's nginx is working normally?

The best approach is to keep the backup node's nginx proxy online at all times.

Suppose only the nginx process on the master fails while everything else is intact; then nginx merely needs to be restarted. How can that restart happen automatically?

# vim notify.sh   //edit on both hosts

 #!/bin/bash
 #
 vip=192.168.184.150
 contact='root@localhost'

 notify() {
     mailsubject="`hostname` to be $1: $vip floating"
     mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
     echo $mailbody | mail -s "$mailsubject" $contact
 }

 case "$1" in
 master)
     notify master
     systemctl restart nginx.service    # start nginx when this host becomes the master
     exit 0
     ;;
 backup)
     notify backup
     systemctl restart nginx.service    # restart nginx when it becomes the backup as well
     exit 0
     ;;
 fault)
     notify fault
     exit 0
     ;;
 *)
     echo "Usage: `basename $0` {master|backup|fault}"
     exit 1
     ;;
 esac

# scp notify.sh node2:/etc/keepalived/

# systemctl stop nginx  //notice that nginx on the master cannot really be stopped this way: as soon as the master flips to backup, notify.sh restarts nginx there again; it stays down only if nginx cannot start because of a fault, the master host itself is down, or port 80 is occupied

# systemctl stop nginx;systemctl start httpd    //now nginx on the master cannot start, because httpd holds port 80

# systemctl stop httpd   //nginx starts again automatically (restarted by notify.sh)

That completes high availability for the nginx proxy service with keepalived.

The above is the single-master model. How is a dual-master setup configured? In the dual-master model, both masters schedule traffic to the backend at the same time.

Building on the configuration above, make the following changes.

# vim keepalived.conf   //edit on both hosts

 ! Configuration File for keepalived

 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from kaadmin@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     router_id LVS_DEVEL
     #vrrp_mcast_group4 224.0.1.118
     vrrp_skip_check_adv_addr
     vrrp_strict
     vrrp_garp_interval 0
     vrrp_gna_interval 0
 }

 vrrp_script chk_mt {
     script "/etc/keepalived/down.sh"
     interval 1
     weight -10
 }

 vrrp_script chk_nginx {
     script "/etc/keepalived/chk.sh"
     interval 1
     weight -10
 }

 vrrp_instance VI_1 {
     state MASTER
     interface eth0
     virtual_router_id 51
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         192.168.184.150/32
     }
     track_script {
         chk_nginx
     }
     notify_master "/etc/keepalived/notify.sh master"
     notify_backup "/etc/keepalived/notify.sh backup"
     notify_fault "/etc/keepalived/notify.sh fault"
 }

 vrrp_instance VI_2 {    # add a second instance; the fields that must differ from VI_1 are state, virtual_router_id, priority, auth_pass and the VIP
     state BACKUP
     interface eth0
     virtual_router_id 52
     priority 99
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 2222
     }
     virtual_ipaddress {
         192.168.184.151/32
     }
     track_script {
         chk_nginx
     }
     notify_master "/etc/keepalived/notify.sh master"
     notify_backup "/etc/keepalived/notify.sh backup"
     notify_fault "/etc/keepalived/notify.sh fault"
 }

# vim notify.sh    //modify as follows

 #!/bin/bash
 #
 vip=192.168.184.150
 contact='root@localhost'

 notify() {
     mailsubject="`hostname` to be $1: $vip floating"
     mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
     echo $mailbody | mail -s "$mailsubject" $contact
 }

 case "$1" in
 master)
     notify master
     # systemctl restart nginx.service    # must be disabled in the dual-master model: an instance that is BACKUP on this host is MASTER on the peer, so restarting nginx on every master/backup transition would also bounce the nginx that is still serving the other instance; the same applies to the backup) branch below
     exit 0
     ;;
 backup)
     notify backup
     # systemctl restart nginx.service
     exit 0
     ;;
 fault)
     notify fault
     exit 0
     ;;
 *)
     echo "Usage: `basename $0` {master|backup|fault}"
     exit 1
     ;;
 esac

# vim keepalived.conf   //on host 2 (192.168.184.142); the instance added here matches VI_2 on host 1 except for the fields noted

 vrrp_instance VI_2 {
     state MASTER               # master for this instance
     interface eth0
     virtual_router_id 52
     priority 100               # raised priority
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 2222
     }
     virtual_ipaddress {
         192.168.184.151/32
         #192.168.184.151/32 dev eno16777736 label eno16777736:2    # with long NIC names, write it like this
     }
     track_script {
         chk_nginx
     }
     notify_master "/etc/keepalived/notify.sh master"
     notify_backup "/etc/keepalived/notify.sh backup"
     notify_fault "/etc/keepalived/notify.sh fault"
 }

With this, keepalived on both hosts behaves exactly as intended.

The two VIPs now implement the dual-master model, and both hosts simultaneously use the nginx proxy to balance load across the backend upstream.

# systemctl stop nginx    //stop the nginx proxy on node1; this time it is not restarted, so node1's VIP 192.168.184.150 moves to node2

Checking the addresses on node2 shows that both .150 and .151 now live on node2.

Accessing the two VIPs, 192.168.184.150 and 192.168.184.151, still returns the same responses as before.

If the nginx proxy on node1 is then started again by hand, node1 takes its master VIP, 192.168.184.150, back.

The principle is this:

On node1 the master VIP is 192.168.184.150 and the backup VIP is 192.168.184.151.

On node2 the master VIP is 192.168.184.151 and the backup VIP is 192.168.184.150.

When node1's nginx proxy fails, node1 can no longer serve, and neither its master nor its backup instance counts any more. If node2's nginx proxy is healthy, node2's backup instance (which is node1's master instance) grabs node1's VIP and attaches it to node2's NIC; node2 then carries both VIPs and works as master and backup at the same time.

How does a dual-master pair pay off? DNS round-robin resolves the service name to both VIPs, and each director load-balances onward, via ipvs to the backend real servers or via the nginx proxy to the upstream group.
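
The DNS side is just two A records for the same name (a BIND zone fragment; the name www is made up for illustration), so resolvers rotate across the two VIPs:

 www    IN    A    192.168.184.150
 www    IN    A    192.168.184.151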

Session binding carries some risk in this design. A client first resolves the domain name and gets the address of one of the two HA directors, then sends its requests to that IP. If director 1 uses the IP hash algorithm, all requests from one client go to the same upstream server. But if the client flushes its DNS cache, the next resolution may return the other director, and there is then a 50% chance the client's requests are scheduled to a different upstream server.

Outside of cache servers, session persistence is rarely used; session data is normally kept in a dedicated session server.

Another scenario: if cache servers sit behind the nginx proxy, which algorithm should be used? DH (destination hashing) is the natural suggestion, but nginx does not support the DH algorithm; LVS does support DH, yet LVS cannot understand URLs, so that is pointless. What is needed is something that can both parse the URL and send the same URL to the same cache server.

On hashing the user's request: http://www.zsythink.net/archives/1182

As long as the data does not change, computing its hash repeatedly gives the same result. Encode the hash as a hexadecimal number and take it modulo the number of backend servers: with two servers, any hash mod 2 is either 0 or 1. So if the requested URL stays the same, the hash stays the same, and the backend it maps to never changes.
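
nginx can express this directly with the upstream module's hash directive (a minimal sketch reusing this article's backend addresses; consistent switches to ketama consistent hashing, so adding or removing a cache server remaps only a fraction of URLs):

 upstream cacheservers {
     hash $request_uri consistent;    # the same URI is always mapped to the same cache server
     server 192.168.184.143:80;
     server 192.168.184.144:80;
 }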

Blog homework:

keepalived high availability for ipvs

nginx