A Complete Data Migration Solution for a MongoDB Sharded Cluster

Date: 2022-09-12 22:00:41

Background: I took a few days off this week. When I came in on the morning of the 25th, I started on the full migration of our MongoDB sharded cluster. I expected it to be easy, but I ran into all kinds of problems along the way. After two days of digging, it is finally done! I put this document together in a hurry; since there are few articles online about migrating MongoDB databases, I am publishing this post in the hope that it saves others from the same detours.
The machines in our old test cluster had inconsistent specs, so during load testing one of them was put under excessive pressure and performance suffered badly. We therefore requested two new machines with identical specs. All data in the original sharded environment needs to be migrated to the new machines, and the old machines will then be taken out of service.


I. Deployment Architecture

(Architecture diagram: original image not preserved.)

II. Background Basics

1. The mongos routers and the shard servers (each an individual replica set) store their authentication credentials in different places: the routers' users live on the config servers, while each shard replica set stores its own. Credentials created through a router therefore cannot be used to authenticate directly against a shard server. To operate on an individual replica set, create users on that replica set itself and authenticate there before doing anything else.
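As a sketch of the difference (using this article's cluster and credentials; these commands assume a running deployment):

```
// Via mongos: authenticates against users stored on the CONFIG servers.
// mongo --port 27017
use admin
db.auth('root', '123456')        // works: this user is cluster-wide

// Directly on a shard member: the mongos users are NOT visible here.
// mongo --port 20011
use admin
db.auth('root', '123456')        // fails unless 'root' was also created
                                 // on this replica set itself
db.createUser({ user: 'root', pwd: '123456',
                roles: [ { role: 'root', db: 'admin' } ] })
db.auth('root', '123456')        // now succeeds on this shard
```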

III. Migrating the Shard Servers

1. Creating the new nodes

Environment: two machines, 10.20.222.63 and 10.20.222.64.
Add three new nodes: 10.20.222.63:20011, 10.20.222.64:20012, and 10.20.222.63:20013.

1.1 Environment setup

1.1.1 Run on both machines:

curl https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.2.7.tgz > mongo.tgz
tar -zxvf mongo.tgz
mv mongodb-linux-x86_64-rhel62-3.2.7/ mongo

(Copy the keyfile from the original sharded cluster into the mongo directory, and mind the file permissions: chmod 600 keyfile.)

1.1.2 Create the three nodes

mkdir shard1.1  (on 63)
mkdir shard1.2  (on 64)
mkdir shard1.3  (on 63)
// Then run the following in each directory:
cd shard1.1
mkdir data
mkdir logs
touch ./logs/mongodb.log
vim config.cfg
port = 20011    # use 20012 / 20013 for the other two nodes
dbpath = /home/unisound/shard1.1/data
logpath = /home/unisound/shard1.1/logs/mongodb.log
fork = true
logappend = true
shardsvr = true
replSet = shard1
keyFile = /home/unisound/mongo/keyfile
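The three config files differ only in port and paths. As a small sketch (ports and directory names taken from this article; adjust paths to your environment), they can be generated in one pass:

```python
# Generate the near-identical config files for the three new shard nodes.
nodes = [
    ("shard1.1", 20011),
    ("shard1.2", 20012),
    ("shard1.3", 20013),
]

TEMPLATE = """port = {port}
dbpath = /home/unisound/{name}/data
logpath = /home/unisound/{name}/logs/mongodb.log
fork = true
logappend = true
shardsvr = true
replSet = shard1
keyFile = /home/unisound/mongo/keyfile
"""

def render_configs(nodes):
    """Return a {directory name: config file text} mapping for each node."""
    return {name: TEMPLATE.format(port=port, name=name) for name, port in nodes}

configs = render_configs(nodes)
print(configs["shard1.2"].splitlines()[0])  # port = 20012
```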

1.2 Start the three nodes

./mongo/bin/mongod --config shard1.1/config.cfg
./mongo/bin/mongod --config shard1.2/config.cfg
./mongo/bin/mongod --config shard1.3/config.cfg

At this point the three new nodes are ready; next, we add them to the individual replica set.

2. Adding the nodes

First, find the primary among the original three nodes:

10.20.0.44:20011, 10.20.0.44:20012 (primary), 10.20.0.44:20013

./mongo/bin/mongo --port 20012
use admin
show dbs
Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }",
"code" : 13
}

Create a user:

db.createUser({
    user: "root",
    pwd: "123456",
    roles: [
        { role: "root", db: "admin" }
    ]
})

Authenticate:

db.auth('root','123456')

Add the new nodes:

rs.add("10.20.222.63:20011")
rs.add("10.20.222.64:20012")
rs.add("10.20.222.63:20013",true)
shard1:PRIMARY> rs.status()
{
	"set" : "shard1",
	"date" : ISODate("2017-05-26T07:06:34.935Z"),
	"myState" : 1,
	"term" : NumberLong(2),
	"heartbeatIntervalMillis" : NumberLong(2000),
	"members" : [
		{
			"_id" : 0,
			"name" : "10.20.0.44:20011",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 2136,
			"optime" : {
				"ts" : Timestamp(1495782392, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2017-05-26T07:06:32Z"),
			"electionTime" : Timestamp(1495780269, 1),
			"electionDate" : ISODate("2017-05-26T06:31:09Z"),
			"configVersion" : 4,
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "10.20.0.44:20012",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 2130,
			"optime" : {
				"ts" : Timestamp(1495782392, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2017-05-26T07:06:32Z"),
			"lastHeartbeat" : ISODate("2017-05-26T07:06:34.640Z"),
			"lastHeartbeatRecv" : ISODate("2017-05-26T07:06:34.830Z"),
			"pingMs" : NumberLong(0),
			"syncingTo" : "10.20.0.44:20011",
			"configVersion" : 4
		},
		{
			"_id" : 2,
			"name" : "10.20.0.44:20013",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 2125,
			"lastHeartbeat" : ISODate("2017-05-26T07:06:34.640Z"),
			"lastHeartbeatRecv" : ISODate("2017-05-26T07:06:32.800Z"),
			"pingMs" : NumberLong(0),
			"configVersion" : 4
		},
		{
			"_id" : 3,
			"name" : "10.20.222.63:20011",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 151,
			"optime" : {
				"ts" : Timestamp(1495782392, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2017-05-26T07:06:32Z"),
			"lastHeartbeat" : ISODate("2017-05-26T07:06:34.640Z"),
			"lastHeartbeatRecv" : ISODate("2017-05-26T07:06:34.726Z"),
			"pingMs" : NumberLong(0),
			"syncingTo" : "10.20.0.44:20011",
			"configVersion" : 4
		},
		{
			"_id" : 4,
			"name" : "10.20.222.64:20012",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 83,
			"optime" : {
				"ts" : Timestamp(1495782392, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2017-05-26T07:06:32Z"),
			"lastHeartbeat" : ISODate("2017-05-26T07:06:34.640Z"),
			"lastHeartbeatRecv" : ISODate("2017-05-26T07:06:32.729Z"),
			"pingMs" : NumberLong(0),
			"syncingTo" : "10.20.222.63:20011",
			"configVersion" : 4
		},
		{
			"_id" : 5,
			"name" : "10.20.222.63:20013",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 2,
			"lastHeartbeat" : ISODate("2017-05-26T07:06:34.733Z"),
			"lastHeartbeatRecv" : ISODate("2017-05-26T07:06:32.850Z"),
			"pingMs" : NumberLong(0),
			"configVersion" : 4
		}
	],
	"ok" : 1
}
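Before removing the old members, it is worth confirming from the rs.status() output that every new member is healthy and in an acceptable state. A small sketch (operating on an rs.status()-style document like the one above; the helper name is my own) might look like:

```python
# Check an rs.status()-style document: every new member must be healthy and
# in PRIMARY (1), SECONDARY (2), or ARBITER (7) state before removing old ones.
def safe_to_remove_old_members(status, new_hosts):
    members = {m["name"]: m for m in status["members"]}
    for host in new_hosts:
        m = members.get(host)
        if m is None or m["health"] != 1:
            return False
        if m["state"] not in (1, 2, 7):
            return False
    return True

status = {
    "members": [
        {"name": "10.20.222.63:20011", "health": 1, "state": 2},
        {"name": "10.20.222.64:20012", "health": 1, "state": 2},
        {"name": "10.20.222.63:20013", "health": 1, "state": 7},
    ]
}
new_hosts = ["10.20.222.63:20011", "10.20.222.64:20012", "10.20.222.63:20013"]
print(safe_to_remove_old_members(status, new_hosts))  # True
```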

The nodes are now added, and the data will sync over automatically.
Members can only be removed while connected to the PRIMARY:

rs.remove("10.20.0.44:20011")
rs.remove("10.20.0.44:20012")
rs.remove("10.20.0.44:20013")

Once the removals succeed, the shard data migration is complete.

IV. Migrating the Config Servers

New config server nodes: 10.20.222.63:10001, 10.20.222.64:10002, 10.20.222.63:10003.
Create the three directories and run:

mkdir config1  (on 63)
mkdir config2  (on 64)
mkdir config3  (on 63)
// Then run the following in each directory:
mkdir data
mkdir logs
touch ./logs/mongodb.log
vim config.cfg
port = 10001    # use 10002 / 10003 for the other two nodes
dbpath = /home/unisound/config1/data
replSet = config
configsvr = true
logpath = /home/unisound/config1/logs/mongodb.log
logappend = true
fork = true
keyFile = /home/unisound/mongo/keyfile

Start the config servers:

./mongo/bin/mongod --config config1/config.cfg
./mongo/bin/mongod --config config2/config.cfg
./mongo/bin/mongod --config config3/config.cfg

Add the nodes.
Find the primary of the config server replica set:

./mongo/bin/mongo --port 10001
use admin
db.auth('root','123456')
rs.status()
rs.add("10.20.222.63:10001")
rs.add("10.20.222.64:10002")
rs.add("10.20.222.63:10003")
Removal, again, can only be performed on the PRIMARY:
rs.remove("10.20.0.44:10001")
rs.remove("10.20.0.44:10002")
rs.remove("10.20.0.44:10003")

The config server migration is now complete. When members are removed, a new primary is elected automatically from the remaining members.

V. Migrating the Router (mongos)

mkdir router
mkdir data
mkdir logs
touch ./logs/mongodb.log
vim config.cfg
logpath = /home/unisound/router/logs/mongodb.log
port = 27017
configdb = config/10.20.222.63:10001,10.20.222.64:10002,10.20.222.63:10003
fork = true
logappend = true
keyFile = /home/unisound/mongo/keyfile
./mongo/bin/mongos --config router/config.cfg
./mongo/bin/mongo
use user_center
db.auth('dev','123456')
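Once everything is up, a quick sanity check through mongos is worthwhile. A sketch (assuming the cluster admin user created earlier, against the running router):

```
// Connect through the router and confirm the shard topology.
// mongo --port 27017
use admin
db.auth('root', '123456')
sh.status()                       // shard1 should list the new hosts
db.runCommand({ listShards: 1 })
```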

At this point the shard, config, and router servers have all been migrated.