Setting Up Nginx + Tomcat on CentOS for Cluster Load Balancing and Session Replication


Chapter 1 Test Environment
1.1 System
All servers use a minimal installation of CentOS 5.7.
1.2 Software
nginx-0.8.55
pcre-8.13
apache-tomcat-6.0.35 
jdk-6u31-linux-x64
nginx-upstream-jvm-route-0.1
1.3 Layout
Clients hit the Nginx load-balancing layer, which proxies their requests to the back-end web tier (Tomcat), as shown in the diagram below:
[Diagram: clients -> Nginx load balancer -> back-end Tomcat web tier]
The principle behind session replication, in brief, is shown in the following diagram (a minimal Tomcat configuration sketch follows the host list below):
[Diagram: session replication between the Tomcat nodes]
Load-balancing layer: 192.168.254.200
Installs: pcre, nginx, nginx-upstream-jvm-route-0.1
Back-end Tomcat web tier: 192.168.254.221, 192.168.254.222
Installs: tomcat, jdk
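
In essence, each Tomcat broadcasts session changes to the other cluster members, so either node can serve an existing session if the other fails. As a rough sketch only (the Tomcat side is configured later in this article; the snippet below assumes Tomcat's standard SimpleTcpCluster with its default multicast membership), session replication is switched on in conf/server.xml with:

<!-- Minimal all-to-all session replication; place inside the <Engine> or <Host> element. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

The web application itself must also declare <distributable/> in its web.xml for its sessions to be replicated.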


Chapter 2 Installation and Deployment
2.1 Load-Balancing Layer Installation and Deployment
2.1.1 Install dependencies
yum install wget make gcc gcc-c++ -y
yum install pcre-devel openssl-devel patch -y
2.1.2 Create the nginx run-as account
useradd www -s /sbin/nologin -M
2.1.3 Install PCRE
Unpack the PCRE source: tar xvf pcre-8.13.tar.gz
cd pcre-8.13
Configure PCRE: ./configure --prefix=/usr/local/pcre
Build and install: make && make install
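
A quick sanity check (assuming the prefix used above) is to ask the installed pcre-config script for its version:

/usr/local/pcre/bin/pcre-config --version
# prints 8.13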
2.1.4 Install Nginx
Unpack nginx and nginx-upstream-jvm-route:
tar xvf nginx-upstream-jvm-route-0.1.tar.gz 
tar xvf nginx-0.8.55.tar.gz 
cd nginx-0.8.55
Apply the jvm_route patch to the Nginx source:
patch -p0 < ../nginx_upstream_jvm_route/jvm_route.patch 


Configure the nginx build:
./configure \
--user=www \
--group=www \
--prefix=/usr/local/nginx \
--with-http_stub_status_module \
--with-http_ssl_module \
--with-http_flv_module \
--with-http_gzip_static_module \
--pid-path=/var/run/nginx.pid \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--http-client-body-temp-path=/var/tmp/nginx/client_body_temp \
--http-proxy-temp-path=/var/tmp/nginx/proxy_temp \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp \
--http-scgi-temp-path=/var/tmp/nginx/scgi_temp \
--add-module=/root/scripts/src/nginx_upstream_jvm_route/
Build and install:
make && make install
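
Note that the temp paths passed to configure live under /var/tmp/nginx, which does not exist on a minimal CentOS install. A small sketch (paths taken from the configure flags above) to create it, test the configuration, and start Nginx:

mkdir -p /var/tmp/nginx
chown -R www:www /var/tmp/nginx    # ensure the worker user can write the temp dirs
/usr/local/nginx/sbin/nginx -t     # test the configuration file
/usr/local/nginx/sbin/nginx        # start nginx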
2.1.5 Modify the Nginx configuration file
The changes needed for Nginx to act as the load balancer are simple: set the run-as account and list the back-end web servers' IPs and ports in an upstream block. The values below are the ones used in this test environment:
user www www;
worker_processes 8;
#error_log logs/nginx_error.log crit;
#pid /usr/local/nginx/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;

events
{
    use epoll;
    worker_connections 2048;
}

http
{
    upstream backend {
        server 192.168.254.221:80 srun_id=real1;
        server 192.168.254.222:80 srun_id=real2;
        jvm_route $cookie_JSESSIONID|sessionid reverse;
    }

    include mime.types;
    default_type application/octet-stream;
    #charset gb2312;
    charset UTF-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 20m;
    limit_rate 1024k;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    gzip on;
    #gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    #limit_zone crawler $binary_remote_addr 10m;

    # log_format is only valid in the http context, so it is defined here rather than inside the server block.
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    server
    {
        listen 80;
        server_name 192.168.254.250;
        index index.jsp index.htm index.html;
        root /data/www/;

        location / {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
        }
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
        {
            expires 30d;
        }
        location ~ .*\.(js|css)?$
        {
            expires 1h;
        }
        location /Nginxstatus {
            stub_status on;
            access_log off;
        }
        # access_log off;
    }

}
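
For the sticky routing above to work, the srun_id values (real1, real2) must match the jvmRoute identifier each Tomcat appends to its JSESSIONID cookie. As a minimal sketch (assuming the default Catalina engine name; the full Tomcat setup is covered in the back-end chapters), the corresponding line in conf/server.xml looks like:

<!-- conf/server.xml on 192.168.254.221; use jvmRoute="real2" on 192.168.254.222 -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="real1">

Session IDs then take the form <id>.real1 or <id>.real2, which the jvm_route module uses to send repeat requests back to the node that created the session, falling over to the other node (which holds the replicated copy) if that node is unavailable.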