Frontend/Backend Separated Deployment and Operations in a Docker Environment



I. Common Docker VM Commands

  1. Update the package index first

    yum -y update
  2. Install Docker

    yum install -y docker
  3. Start, restart, and stop the Docker service

    service docker start
    service docker restart
    service docker stop
  4. Search for an image

    docker search <image>
  5. Pull an image

    docker pull <image>
  6. List images

    docker images
  7. Delete an image

    docker rmi <image>
  8. Run a container

    docker run <options> <image>
  9. List containers

    docker ps -a
  10. Stop, pause, and resume a container

    docker stop <container-id>
    docker pause <container-id>
    docker unpause <container-id>
  11. Inspect a container

    docker inspect <container-id>
  12. Delete a container

    docker rm <container-id>
  13. Manage data volumes

    docker volume create <volume>   # create a data volume
    docker volume rm <volume>       # delete a data volume
    docker volume inspect <volume>  # inspect a data volume
  14. Manage networks

    docker network ls                             # list networks
    docker network create --subnet=<CIDR> <name>  # create a network
    docker network rm <name>                      # delete a network
  15. Prevent the Docker VM from losing its network after the VMware VM is suspended and resumed

    vi /etc/sysctl.conf

    Add the setting net.ipv4.ip_forward=1 to the file.

    # Restart the network service
    systemctl restart network
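
    To confirm the forwarding setting took effect after the restart, query the kernel parameter directly:

    # Should print: net.ipv4.ip_forward = 1
    sysctl net.ipv4.ip_forward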

II. Installing the PXC Cluster, Load Balancing, and Dual-Machine Hot Standby

Permanently disable the firewall and SELinux:

    # Stop the firewall
    systemctl stop firewalld
    # Keep the firewall from starting at boot
    systemctl disable firewalld
    # Disable SELinux for the current session
    setenforce 0
    # Disable SELinux permanently
    vim /etc/selinux/config
    # change SELINUX=enforcing to SELINUX=disabled, then reboot:
    reboot
  1. Pull the PXC image

    docker pull percona/percona-xtradb-cluster:5.7.21

    Version 5.7.21 of the PXC image is strongly recommended. It has the best compatibility, and apt-get still works inside the container, so you can install whatever packages you need. In the latest PXC images apt-get no longer works, which means the hot-backup tool cannot be installed.

  2. Re-tag the PXC image with a shorter name

    docker tag percona/percona-xtradb-cluster:5.7.21 pxc
  3. Create the net1 subnet

    docker network create --subnet=172.18.0.0/16 net1
  4. Create 5 data volumes

    docker volume create --name v1
    docker volume create --name v2
    docker volume create --name v3
    docker volume create --name v4
    docker volume create --name v5
  5. Create a backup data volume (for hot-backup data)

    docker volume create --name backup
  6. Create the 5-node PXC cluster

    Note: after each MySQL container is created, it still has to run PXC initialization and join the cluster, so wait about a minute before connecting with a MySQL client. Also, the 1st MySQL node must be fully up and reachable with a MySQL client before you create the other nodes (the sketch below shows one way to wait for it).

    # Create the 1st MySQL node
    docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -v v1:/var/lib/mysql -v backup:/data --privileged --name=node1 --net=net1 --ip 172.18.0.2 pxc
    # Create the 2nd MySQL node
    docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql -v backup:/data --privileged --name=node2 --net=net1 --ip 172.18.0.3 pxc
    # Create the 3rd MySQL node
    docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v3:/var/lib/mysql --privileged --name=node3 --net=net1 --ip 172.18.0.4 pxc
    # Create the 4th MySQL node
    docker run -d -p 3309:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v4:/var/lib/mysql --privileged --name=node4 --net=net1 --ip 172.18.0.5 pxc
    # Create the 5th MySQL node
    docker run -d -p 3310:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v5:/var/lib/mysql -v backup:/data --privileged --name=node5 --net=net1 --ip 172.18.0.6 pxc
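
    A minimal sketch of that wait, assuming the mysql client is installed on the host and that host port 3306 is mapped to node1 as above:

    #!/bin/bash
    # Poll node1 until MySQL accepts connections; only then create node2..node5
    until mysql -h 127.0.0.1 -P 3306 -uroot -pabc123456 -e "SELECT 1" &> /dev/null; do
        echo "waiting for node1 ..."
        sleep 5
    done
    echo "node1 is ready"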
  7. Pull the Haproxy image

    docker pull haproxy:1.9.7
  8. Write the Haproxy configuration file on the host

    vi /home/soft/haproxy/haproxy.cfg

    Configuration file:

    global
        # working directory
        chroot /usr/local/etc/haproxy
        # log to the rsyslog local5 facility (/var/log/local5) at level info
        log 127.0.0.1 local5 info
        # run as a daemon
        daemon

    defaults
        log     global
        mode    http
        # log format
        option  httplog
        # do not log the load balancer's heartbeat checks
        option  dontlognull
        # connect timeout (ms)
        timeout connect 5000
        # client timeout (ms)
        timeout client  50000
        # server timeout (ms)
        timeout server  50000

    # monitoring UI
    listen admin_stats
        # IP and port the monitoring UI listens on
        bind 0.0.0.0:8888
        # protocol
        mode http
        # relative URI of the stats page
        stats uri /dbs
        # title of the statistics report
        stats realm Global\ statistics
        # login credentials
        stats auth admin:abc123456

    # database load balancing
    listen proxy-mysql
        # IP and port to listen on
        bind 0.0.0.0:3306
        # protocol
        mode tcp
        # load-balancing algorithm (round robin here)
        #   round robin: roundrobin
        #   weighted: static-rr
        #   least connections: leastconn
        #   source IP: source
        balance roundrobin
        # log format
        option tcplog
        # create a privilege-less 'haproxy' user with an empty password in MySQL;
        # Haproxy uses this account for heartbeat checks against the databases
        option mysql-check user haproxy
        server MySQL_1 172.18.0.2:3306 check weight 1 maxconn 2000
        server MySQL_2 172.18.0.3:3306 check weight 1 maxconn 2000
        server MySQL_3 172.18.0.4:3306 check weight 1 maxconn 2000
        server MySQL_4 172.18.0.5:3306 check weight 1 maxconn 2000
        server MySQL_5 172.18.0.6:3306 check weight 1 maxconn 2000
        # use TCP keepalive to detect dead connections
        option tcpka
  9. Create the heartbeat-check user on the database (running this on one node is enough: the PXC cluster is strongly consistent and replicates transactional operations to every node)

    CREATE USER 'haproxy'@'%' IDENTIFIED BY '';
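
    Because the statement replicates synchronously, you can verify it on any other node, e.g. through node2's mapped port:

    # Should list the haproxy user created above
    mysql -h 127.0.0.1 -P 3307 -uroot -pabc123456 -e "SELECT user, host FROM mysql.user WHERE user='haproxy';"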
  10. Take a node down (to test failover)

    docker stop node1
  11. Create the two Haproxy containers

    # Create the 1st Haproxy load balancer
    docker run -it -d -p 4001:8888 -p 4002:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h1 --privileged --net=net1 --ip 172.18.0.7 haproxy:1.9.7
    # Enter the h1 container and start Haproxy
    docker exec -it h1 bash
    haproxy -f /usr/local/etc/haproxy/haproxy.cfg
    # Create the 2nd Haproxy load balancer
    docker run -it -d -p 4003:8888 -p 4004:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h2 --privileged --net=net1 --ip 172.18.0.8 haproxy:1.9.7
    # Enter the h2 container and start Haproxy
    docker exec -it h2 bash
    haproxy -f /usr/local/etc/haproxy/haproxy.cfg
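
    Once Haproxy is running, the monitoring page defined above can be checked from the host (port 4001 for h1 and 4003 for h2 map to the containers' 8888):

    # Credentials come from the stats auth line in haproxy.cfg
    curl -u admin:abc123456 http://127.0.0.1:4001/dbs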
  12. Install Keepalived inside the Haproxy containers and set up a virtual IP

    Notes: cloud hosts do not support virtual IPs, and many corporate networks forbid creating them (do this at home instead). Also, **the host's firewall and SELinux must be disabled**; plenty of attempts fail for exactly this reason, so double-check it.

    # Enter the h1 container
    docker exec -it h1 bash
    # Update the package index
    apt-get update
    # Install vim
    apt-get install vim
    # Install Keepalived
    apt-get install keepalived
    # Edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start
    # From the host, ping the virtual IP
    ping 172.18.0.201

    Configuration file contents:

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            172.18.0.201
        }
    }

    # Enter the h2 container
    docker exec -it h2 bash
    # Update the package index
    apt-get update
    # Install vim
    apt-get install vim
    # Install Keepalived
    apt-get install keepalived
    # Edit the Keepalived configuration file
    vim /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start
    # From the host, ping the virtual IP
    ping 172.18.0.201

    Configuration file contents (same as h1):

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            172.18.0.201
        }
    }
  13. Install Keepalived on the host for dual-machine hot standby

    # Install Keepalived on the host
    yum -y install keepalived
    # Edit the Keepalived configuration file
    vi /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start

    Keepalived configuration file:

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.180.245
        }
    }
    virtual_server 192.168.180.245 8888 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 172.18.0.201 8888 {
            weight 1
        }
    }
    virtual_server 192.168.180.245 3306 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 172.18.0.201 3306 {
            weight 1
        }
    }
  14. Hot-back up the data

    # Enter the node1 container
    docker exec -it node1 bash
    # Update the package index
    apt-get update
    # Install the hot-backup tool
    apt-get install percona-xtrabackup-24
    # Full hot backup
    innobackupex --user=root --password=abc123456 /data/backup/full
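
    If you want the backup to run unattended, one option is a crontab entry on the host; the schedule below is a hypothetical example, adjust it to taste:

    # Take a full hot backup inside node1 every night at 02:00
    0 2 * * * docker exec node1 innobackupex --user=root --password=abc123456 /data/backup/full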
  15. Cold-restore the data

    Stop and remove the other 4 nodes:

    docker stop node2
    docker stop node3
    docker stop node4
    docker stop node5
    docker rm node2
    docker rm node3
    docker rm node4
    docker rm node5

    Delete MySQL's data inside the node1 container:

    # Delete the data
    rm -rf /var/lib/mysql/*
    # Prepare the backup (apply the transaction logs)
    innobackupex --user=root --password=abc123456 --apply-log /data/backup/full/2018-04-15_05-09-07/
    # Restore the data
    innobackupex --user=root --password=abc123456 --copy-back /data/backup/full/2018-04-15_05-09-07/

    Recreate the other 4 nodes and re-form the PXC cluster, as sketched below.

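    The recreate commands mirror step 6. A sketch for node2 (node3 through node5 follow the same pattern, each with its own port, volume, name, and IP):

    # Recreate node2 and have it join the restored node1
    docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql -v backup:/data --privileged --name=node2 --net=net1 --ip 172.18.0.3 pxc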

III. Special Notes on PXC

What do the master node and slave nodes in PXC actually mean?

Master and slave nodes in PXC are very different from the master and slave nodes of Replication.

In a Replication cluster, data can only be synchronized from the master to the slaves, and node roles are fixed: the master is always Master, the slaves are always Slave, and they cannot swap.

In PXC, the master node simply means the first node to start. Besides starting the MySQL service, it also uses Galera to create the PXC cluster. Once that work is done, the master automatically demotes itself to an ordinary node. All the other nodes only need to start MySQL and then join the PXC cluster, so from startup to shutdown their role is always that of an ordinary node.

Why does node1 start fine while the other PXC nodes crash as soon as they are started?

Because node1 has more work to do on startup, as described above. If you start the other PXC nodes in quick succession, before node1 has finished creating the PXC cluster, they cannot find the cluster node1 was supposed to create, so they exit immediately.

The correct approach is to start node1, wait about 10 seconds, try connecting with Navicat, and only once it is reachable start the other PXC nodes.

If the host is shut down, or the Docker service is stopped, while the PXC cluster is running, why does every PXC node crash on the next start?

This comes down to how PXC manages its nodes. A PXC node's data directory is /var/lib/mysql, and fortunately we mapped that directory onto a data volume: look inside the v1 volume, for example, and you will see node1's data directory. It contains a file called grastate.dat, whose safe_to_bootstrap parameter is what PXC uses to record which node was the last to leave the cluster. If node1 was the last node shut down, PXC sets its safe_to_bootstrap to 1, meaning node1 exited last and its data is the newest. On the next startup node1 must be started first, and the other nodes then synchronize from it.

If you cut the host's Docker service or power while all PXC nodes are still running, PXC has no time to work out which node exited last: every node stops in the same instant, and every node's safe_to_bootstrap parameter remains 0. The fix is simple: pick node1, change that parameter to 1, start node1 normally, and then start the other nodes (see the sketch below).
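
A sketch of that repair, assuming Docker's default volume location on the host:

    # Inspect which node believes it exited last (0 means an unclean shutdown)
    cat /var/lib/docker/volumes/v1/_data/grastate.dat
    # Mark node1 as safe to bootstrap the cluster, then start it first
    sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/docker/volumes/v1/_data/grastate.dat
    docker start node1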

If the PXC cluster has only one node and you stop that node's container, can it be started again later?

Of course it can. With only one node in the cluster, that node was necessarily started as the master, so starting it launches the MySQL service and creates the PXC cluster. Even after the container is stopped, the next start goes through the same steps, so there is no startup failure. If, however, the PXC cluster consists of several nodes and node1 goes down while the others keep running, then starting node1 again will make it crash a few seconds after launch. That is because node2 and the other nodes are still running in the existing PXC cluster; starting node1 so that it creates a second cluster with the same name inevitably causes a conflict, and node1 exits.

In that situation the correct fix is to delete the node1 container. Don't worry: nobody said to delete the v1 data volume, so the data is safe. Then create a new node1 with the slave-node style command, pointing its join setting at any PXC node that is currently running, and node1 will start, as sketched below.
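
A sketch, reusing the v1 data volume and joining the still-running node2:

    docker rm node1
    docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node2 -v v1:/var/lib/mysql -v backup:/data --privileged --name=node1 --net=net1 --ip 172.18.0.2 pxc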

IV. Installing Redis and Configuring a Redis Cluster

  1. Pull the Redis image

    docker pull yyyyttttwwww/redis
    # Re-tag it so the run commands below can refer to it simply as "redis"
    docker tag yyyyttttwwww/redis redis
  2. Create the net2 subnet

    docker network create --subnet=172.19.0.0/16 net2
  3. Create the 6 Redis containers

    docker run -it -d --name r1 -p 5001:6379 --net=net2 --ip 172.19.0.2 redis bash
    docker run -it -d --name r2 -p 5002:6379 --net=net2 --ip 172.19.0.3 redis bash
    docker run -it -d --name r3 -p 5003:6379 --net=net2 --ip 172.19.0.4 redis bash
    docker run -it -d --name r4 -p 5004:6379 --net=net2 --ip 172.19.0.5 redis bash
    docker run -it -d --name r5 -p 5005:6379 --net=net2 --ip 172.19.0.6 redis bash
    docker run -it -d --name r6 -p 5006:6379 --net=net2 --ip 172.19.0.7 redis bash

    Note: redis.conf must contain bind 0.0.0.0, which allows other IPs to reach this Redis instance. Without that setting the Redis Cluster cannot be formed. (See the sketch of the relevant settings below.)
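
    The relevant part of redis.conf would look roughly like this; the cluster directives are the standard ones from the Redis Cluster tutorial and are assumptions about what the image's /home/redis/redis.conf contains:

    # accept connections from any address (required to form the cluster)
    bind 0.0.0.0
    # enable cluster mode
    cluster-enabled yes
    # file where this node records the cluster state
    cluster-config-file nodes.conf
    # node timeout in milliseconds
    cluster-node-timeout 15000
    # enable AOF persistence
    appendonly yes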

  4. Start the Redis server on all 6 nodes

    # Enter the r1 node
    docker exec -it r1 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    # Enter the r2 node
    docker exec -it r2 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    # Enter the r3 node
    docker exec -it r3 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    # Enter the r4 node
    docker exec -it r4 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    # Enter the r5 node
    docker exec -it r5 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    # Enter the r6 node
    docker exec -it r6 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
  5. Create the Cluster

    # Run the following on the r1 node
    cd /usr/redis/src
    mkdir -p ../cluster
    cp redis-trib.rb ../cluster/
    cd ../cluster
    # Create the Cluster
    ./redis-trib.rb create --replicas 1 172.19.0.2:6379 172.19.0.3:6379 172.19.0.4:6379 172.19.0.5:6379 172.19.0.6:6379 172.19.0.7:6379
    # Answer yes when prompted
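
    To verify that the cluster formed correctly, query any node:

    # Run inside any Redis container; expect "cluster_state:ok" and 6 known nodes
    /usr/redis/src/redis-cli -c -h 172.19.0.2 -p 6379 cluster info
    /usr/redis/src/redis-cli -c -h 172.19.0.2 -p 6379 cluster nodes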

V. Packaging and Deploying the Backend Project

  1. Go into the renren open-source backend project and package it (edit the configuration file to change the port each time, and run the build three times to produce three JAR files)

    mvn clean install -Dmaven.test.skip=true
  2. Pull the Java image

    docker pull java
  3. Create the 3 Java containers

    # Create the data volume and upload the JAR file to it
    docker volume create j1
    # Start the container
    docker run -it -d --name j1 -v j1:/home/soft --net=host java
    # Enter the j1 container
    docker exec -it j1 bash
    # Launch the Java project (the trailing & keeps it running in the background)
    nohup java -jar /home/soft/renren-fast.jar &
    # Create the data volume and upload the JAR file to it
    docker volume create j2
    # Start the container
    docker run -it -d --name j2 -v j2:/home/soft --net=host java
    # Enter the j2 container
    docker exec -it j2 bash
    # Launch the Java project
    nohup java -jar /home/soft/renren-fast.jar &
    # Create the data volume and upload the JAR file to it
    docker volume create j3
    # Start the container
    docker run -it -d --name j3 -v j3:/home/soft --net=host java
    # Enter the j3 container
    docker exec -it j3 bash
    # Launch the Java project
    nohup java -jar /home/soft/renren-fast.jar &
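
    A quick check from the host that each instance is listening; ports 6001-6003 and the /renren-fast context path are assumptions based on the Nginx upstream below and renren-fast's default configuration:

    # Expect an HTTP response from each backend instance
    curl -I http://127.0.0.1:6001/renren-fast/
    curl -I http://127.0.0.1:6002/renren-fast/
    curl -I http://127.0.0.1:6003/renren-fast/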
  4. Pull the Nginx image

    docker pull nginx
  5. Create the Nginx containers and configure load balancing

    Contents of /home/n1/nginx.conf on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        proxy_redirect          off;
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size    10m;
        client_body_buffer_size   128k;
        proxy_connect_timeout   5s;
        proxy_send_timeout      5s;
        proxy_read_timeout      5s;
        proxy_buffer_size        4k;
        proxy_buffers           4 32k;
        proxy_busy_buffers_size  64k;
        proxy_temp_file_write_size 64k;
        upstream tomcat {
            server 192.168.99.104:6001;
            server 192.168.99.104:6002;
            server 192.168.99.104:6003;
        }
        server {
            listen       6101;
            server_name  192.168.99.104;
            location / {
                proxy_pass   http://tomcat;
                index  index.html index.htm;
            }
        }
    }

    Create the 1st Nginx node:

    docker run -it -d --name n1 -v /home/n1/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx

    The host file /home/n2/nginx.conf is identical except that its server block listens on port 6102.

    Create the 2nd Nginx node:

    docker run -it -d --name n2 -v /home/n2/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
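
    With both Nginx nodes up, repeated requests to either entry port should be spread across the three backend instances in turn (URL path assumed as above):

    # Fire a few requests and watch them round-robin across 6001/6002/6003
    for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://192.168.99.104:6101/renren-fast/; done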
  6. Install Keepalived in the Nginx containers

    # Enter the n1 node
    docker exec -it n1 bash
    # Update the package index
    apt-get update
    # Install vim
    apt-get install vim
    # Install Keepalived
    apt-get install keepalived
    # Edit the Keepalived configuration file (below)
    vim /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.151
        }
    }
    virtual_server 192.168.99.151 6201 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6101 {
            weight 1
        }
    }

    # Enter the n2 node
    docker exec -it n2 bash
    # Update the package index
    apt-get update
    # Install vim
    apt-get install vim
    # Install Keepalived
    apt-get install keepalived
    # Edit the Keepalived configuration file (below)
    vim /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.151
        }
    }
    virtual_server 192.168.99.151 6201 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6102 {
            weight 1
        }
    }

VI. Packaging and Deploying the Frontend Project

  1. Run the packaging command in the frontend project directory

    npm run build
  2. Copy the files in the build directory to /home/fn1/renren-vue, /home/fn2/renren-vue, and /home/fn3/renren-vue on the host

  3. Create the 3 Nginx nodes and deploy the frontend project on them

    Configuration file /home/fn1/nginx.conf on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        proxy_redirect          off;
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size    10m;
        client_body_buffer_size   128k;
        proxy_connect_timeout   5s;
        proxy_send_timeout      5s;
        proxy_read_timeout      5s;
        proxy_buffer_size        4k;
        proxy_buffers           4 32k;
        proxy_busy_buffers_size  64k;
        proxy_temp_file_write_size 64k;
        server {
            listen 6501;
            server_name  192.168.99.104;
            location / {
                root  /home/fn1/renren-vue;
                index  index.html;
            }
        }
    }

    # Start the fn1 node
    docker run -it -d --name fn1 -v /home/fn1/nginx.conf:/etc/nginx/nginx.conf -v /home/fn1/renren-vue:/home/fn1/renren-vue --privileged --net=host nginx

    The host file /home/fn2/nginx.conf is identical except that it listens on port 6502 and its root is /home/fn2/renren-vue.

    # Start the fn2 node
    docker run -it -d --name fn2 -v /home/fn2/nginx.conf:/etc/nginx/nginx.conf -v /home/fn2/renren-vue:/home/fn2/renren-vue --privileged --net=host nginx

    The host file /home/fn3/nginx.conf is identical except that it listens on port 6503 and its root is /home/fn3/renren-vue.

    # Start the fn3 node
    docker run -it -d --name fn3 -v /home/fn3/nginx.conf:/etc/nginx/nginx.conf -v /home/fn3/renren-vue:/home/fn3/renren-vue --privileged --net=host nginx
  4. Configure load balancing

    Configuration file /home/ff1/nginx.conf on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        proxy_redirect          off;
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size    10m;
        client_body_buffer_size   128k;
        proxy_connect_timeout   5s;
        proxy_send_timeout      5s;
        proxy_read_timeout      5s;
        proxy_buffer_size        4k;
        proxy_buffers           4 32k;
        proxy_busy_buffers_size  64k;
        proxy_temp_file_write_size 64k;
        upstream fn {
            server 192.168.99.104:6501;
            server 192.168.99.104:6502;
            server 192.168.99.104:6503;
        }
        server {
            listen       6601;
            server_name  192.168.99.104;
            location / {
                proxy_pass   http://fn;
                index  index.html index.htm;
            }
        }
    }

    # Start the ff1 node
    docker run -it -d --name ff1 -v /home/ff1/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx

    The host file /home/ff2/nginx.conf is identical except that its server block listens on port 6602.

    # Start the ff2 node
    docker run -it -d --name ff2 -v /home/ff2/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
  5. Configure dual-machine hot standby

    # Enter the ff1 node
    docker exec -it ff1 bash
    # Update the package index
    apt-get update
    # Install vim
    apt-get install vim
    # Install Keepalived
    apt-get install keepalived
    # Edit the Keepalived configuration file (below)
    vim /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.152
        }
    }
    virtual_server 192.168.99.152 6701 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6601 {
            weight 1
        }
    }

    # Enter the ff2 node
    docker exec -it ff2 bash
    # Update the package index
    apt-get update
    # Install vim
    apt-get install vim
    # Install Keepalived
    apt-get install keepalived
    # Edit the Keepalived configuration file (below)
    vim /etc/keepalived/keepalived.conf
    # Start Keepalived
    service keepalived start

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.152
        }
    }
    virtual_server 192.168.99.152 6701 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6602 {
            weight 1
        }
    }
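
    With both Keepalived instances running, the frontend should now be reachable through the virtual IP defined above:

    # Expect the frontend's index page via the VIP
    curl -I http://192.168.99.152:6701/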
