Docker Swarm: Building a MongoDB Sharded Cluster

Date: 2022-12-26 23:29:46

 

1. Server IP Allocation

1.1 IP allocation

192.168.5.142, 192.168.5.143, 192.168.5.144

192.168.5.142 is the manager host, where the Swarm cluster is initialized.

2. Building the Swarm Cluster

2.1 Cross-host networking

# Note: modify the docker.service startup options on every server.

# First, edit docker.service:

vi /lib/systemd/system/docker.service

# Find the ExecStart=/usr/bin/dockerd line and change it to the following
# (note: an unauthenticated TCP socket on 2375 should only be exposed on a trusted internal network):

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

# Then reload systemd and restart Docker:

systemctl daemon-reload

systemctl restart docker
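As an alternative to editing the unit file in place (a package upgrade can overwrite it), the same change can be made with a systemd drop-in; a sketch, assuming the standard unit name:

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
# The empty ExecStart= clears the ExecStart inherited from the packaged unit
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
```

After creating the file, run the same `systemctl daemon-reload` and `systemctl restart docker` as above.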

 

 

# On 192.168.5.142, run:

docker swarm init

# This prints a join command; save it:

docker swarm join --token SWMTKN-1-3bofjagldrkvsdp9pzz4epabemgwwxq9mqh8ddwlqe3b7bglsq-9rmkub7z672dk7jdwdadrxwpb 192.168.5.142:2377

# On 192.168.5.143 and 192.168.5.144, run that command to join them to the swarm:

docker swarm join --token SWMTKN-1-3bofjagldrkvsdp9pzz4epabemgwwxq9mqh8ddwlqe3b7bglsq-9rmkub7z672dk7jdwdadrxwpb 192.168.5.142:2377

 

################# Verify the swarm cluster #################

docker network ls    # list cluster networks

docker node ls       # list cluster nodes
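The node check is easy to script: every non-header row of `docker node ls` should report a STATUS of Ready. A minimal sketch; the sample below is captured output, so pipe the live command in practice:

```shell
# Exit non-zero if any node row (header skipped) lacks "Ready".
all_ready() {
  awk 'NR > 1 && $0 !~ /Ready/ { bad = 1 } END { exit bad }'
}

# Captured sample of `docker node ls` output (replace with the live command):
sample='ID                HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123def456 *    manager    Ready    Active         Leader
ghi789jkl012      worker1    Ready    Active
mno345pqr678      worker2    Ready    Active'

printf '%s\n' "$sample" | all_ready && echo "all nodes Ready"
```

Matching on the whole row rather than a fixed column avoids the column shift caused by the `*` marker on the current node's row.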

 

2.2 Create the cluster network

# On 192.168.5.142 (the manager node),

# create an attachable overlay network named mongo:

docker network create -d overlay --attachable mongo

 

 

3. Building the MongoDB Sharded Cluster with Docker

3.1 Create the data directories

# Run the following on every server.

# Create the host directories for the volume mounts (some may end up unused):

mkdir -p /root/mongo/{config,shard1,shard2,shard3}

mkdir -p /root/mongo/shard1/{master,follow,zc}

mkdir -p /root/mongo/shard2/{master,follow,zc}

mkdir -p /root/mongo/shard3/{master,follow,zc}
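The four mkdir calls can be collapsed into one using bash brace expansion. A sketch: MONGO_ROOT defaults to ./mongo-data here so it can run anywhere; on the real servers set it to /root/mongo.

```shell
# Create the whole volume-mount directory tree in one command (bash brace expansion).
MONGO_ROOT="${MONGO_ROOT:-./mongo-data}"
mkdir -p "$MONGO_ROOT"/config "$MONGO_ROOT"/shard{1,2,3}/{master,follow,zc}

# 1 root + 1 config + 3 shard dirs + 9 replica dirs = 14 directories
find "$MONGO_ROOT" -type d | wc -l
```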

 

3.2 Build the MongoDB cluster

On 192.168.5.142, create a stack.yml file:

vi stack.yml

# Add the following content:

version: '3.3'

services:

  mongors1n1:

    # Docker registry mirror for China

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --logpath /data/db/log --logappend  --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard1/master:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        # Pin this service to the node whose hostname is manager

        constraints:

          - node.hostname==manager

  mongors2n1:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard2/master:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker1

  mongors3n1:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard3/master:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker2

  mongors1n2:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard1/follow:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==manager

  mongors2n2:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard2/follow:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker1

  mongors3n2:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard3/follow:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker2

  mongors1n3:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard1 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard1/zc:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==manager

  mongors2n3:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard2 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard2/zc:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker1

  mongors3n3:

    image: registry.docker-cn.com/library/mongo

    command: mongod --shardsvr --replSet shard3 --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/shard3/zc:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker2

  cfg1:

    image: registry.docker-cn.com/library/mongo

    command: mongod --configsvr --replSet cfgrs --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/config:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==manager

  cfg2:

    image: registry.docker-cn.com/library/mongo

    command: mongod --configsvr --replSet cfgrs --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/config:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker1

  cfg3:

    image: registry.docker-cn.com/library/mongo

    command: mongod --configsvr --replSet cfgrs --dbpath /data/db --logpath /data/db/log --logappend --port 27017

    networks:

      - mongo

    volumes:

      - /etc/localtime:/etc/localtime

      - /root/mongo/config:/data/db

    deploy:

      restart_policy:

        condition: on-failure

      replicas: 1

      placement:

        constraints:

          - node.hostname==worker2

  mongos:

    image: registry.docker-cn.com/library/mongo

    # MongoDB 3.6+ binds to 127.0.0.1 by default; bind 0.0.0.0 so other containers and hosts can connect

    command: mongos --configdb cfgrs/cfg1:27017,cfg2:27017,cfg3:27017 --bind_ip 0.0.0.0 --port 27017

    networks:

      - mongo

    # Publish port 27017 on the host

    ports:

      - 27017:27017

    volumes:

      - /etc/localtime:/etc/localtime

    depends_on:

      - cfg1

      - cfg2

      - cfg3

    deploy:

      restart_policy:

        condition: on-failure

      # Start one container on every node in the swarm

      mode: global

networks:

  mongo:

    external: true

 

#### Deploy the stack ####

docker stack deploy -c stack.yml mongo

#### Check that all services started successfully ####

docker service ls

# (Screenshot omitted: `docker service ls` shows every service with all replicas running.)
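The REPLICAS column of `docker service ls` makes this check scriptable: a service is healthy when its running count equals its desired count. A sketch using captured sample output; pipe the live command in practice:

```shell
# Print the name of any service whose running replicas != desired replicas,
# and exit non-zero if one is found. Column 4 is REPLICAS ("running/desired").
replicas_ok() {
  awk 'NR > 1 { split($4, r, "/"); if (r[1] != r[2]) { print $2; bad = 1 } } END { exit bad }'
}

# Captured sample of `docker service ls` output (replace with the live command):
sample='ID        NAME              MODE         REPLICAS   IMAGE
aaa       mongo_cfg1        replicated   1/1        mongo
bbb       mongo_mongors1n1  replicated   1/1        mongo
ccc       mongo_mongos      global       3/3        mongo'

printf '%s\n' "$sample" | replicas_ok && echo "all services up"
```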

3.3 Initialize MongoDB

1) [Manager] Initialize the config server replica set

docker exec -it $(docker ps | grep "cfg1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"cfgrs\",configsvr: true, members: [{ _id : 0, host : \"cfg1\" },{ _id : 1, host : \"cfg2\" }, { _id : 2, host : \"cfg3\" }]})' | mongo"
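The escaped one-liner above is hard to read and to maintain. A sketch of an equivalent approach: generate the rs.initiate() document with a heredoc and pipe it into the shell inside the cfg1 container (with newer mongo images, which ship mongosh instead of the legacy mongo shell, substitute `mongosh` for `mongo`):

```shell
# Emit the rs.initiate() document for the three-member config replica set.
cfg_init_js() {
  cat <<'EOF'
rs.initiate({
  _id: "cfgrs",
  configsvr: true,
  members: [
    { _id: 0, host: "cfg1" },
    { _id: 1, host: "cfg2" },
    { _id: 2, host: "cfg3" }
  ]
})
EOF
}

# On the manager node, pipe it into the cfg1 container:
#   cfg_init_js | docker exec -i $(docker ps -qf name=cfg1) mongo
cfg_init_js
```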

 

2) Initialize the three shard replica sets (run each command on the host where that shard's primary container is scheduled)

####192.168.5.142#######

docker exec -it $(docker ps | grep "mongors1n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"mongors1n1\" },{ _id : 1, host : \"mongors1n2\" },{ _id : 2, host : \"mongors1n3\", arbiterOnly: true }]})' | mongo"

 

#####192.168.5.143######

docker exec -it $(docker ps | grep "mongors2n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"mongors2n1\" },{ _id : 1, host : \"mongors2n2\" },{ _id : 2, host : \"mongors2n3\", arbiterOnly: true }]})' | mongo"

 

######192.168.5.144########

docker exec -it $(docker ps | grep "mongors3n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"mongors3n1\" },{ _id : 1, host : \"mongors3n2\" },{ _id : 2, host : \"mongors3n3\", arbiterOnly: true }]})' | mongo"

 

3) [Manager] Add the three shard replica sets to mongos

####192.168.5.142#######

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017\")' | mongo "

 

####192.168.5.142#######

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/mongors2n1:27017,mongors2n2:27017,mongors2n3:27017\")' | mongo "

 

####192.168.5.142#######

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/mongors3n1:27017,mongors3n2:27017,mongors3n3:27017\")' | mongo "

 

3.4 Day-to-day commands

1) Inside the mongos shell

# Check sharding status

sh.status()

 

# Enable sharding for a database ("testdb" is the database name)

sh.enableSharding("testdb")

# Choose the collection and the field to shard on

sh.shardCollection("testdb.table1", { id: "hashed" })   # table1 is the collection, id is the shard-key field, "hashed" is the sharding strategy

# For more on sharding, see the official docs: https://docs.mongodb.com/manual/sharding
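Putting the two commands together, a short worked session in the mongos shell (the database and collection names are the illustrative ones used above):

```javascript
// Run inside the mongos shell on port 27017
sh.enableSharding("testdb")

// Hashed shard key on "id": documents are spread evenly across the shards,
// at the cost of efficient range queries on that key.
sh.shardCollection("testdb.table1", { id: "hashed" })

// Verify: sh.status() now lists testdb.table1 under its database, and
// db.table1.getShardDistribution() reports per-shard document counts.
```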

 

 

3.5 Adding a new node to the cluster (192.168.5.141)

1) Modify docker.service

# Note: the docker.service startup options must be modified on the new server as well.

# First, edit docker.service:

vi /lib/systemd/system/docker.service

# Find the ExecStart=/usr/bin/dockerd line and change it to:

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

# Then reload systemd and restart Docker:

systemctl daemon-reload

systemctl restart docker

2) Join the swarm

#### If the join command/token has been lost, print the worker join command with:

docker swarm join-token worker