MongoDB Distributed Cluster: Replica Sets + Sharding

Date: 2022-06-15 06:12:40

Introduction

1. Replica sets

Once replication is enabled, the primary node creates a collection named oplog.rs in the local database. It is a capped collection, i.e. its size is fixed. It records every data change (insert/update/delete) made on the mongod instance over a period of time; when the space runs out, new entries automatically overwrite the oldest ones.
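
As a quick illustration (not part of the deployment steps below), the oplog can be inspected from the mongo shell on any replica set member; rs.printReplicationInfo() reports the oplog size and the time window it covers:

use local
db.oplog.rs.find().sort({$natural:-1}).limit(1).pretty()   // the most recent oplog entry
rs.printReplicationInfo()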

A MongoDB replica set consists of a group of instances (processes): one Primary node and several Secondary nodes. All client writes go to the Primary, and the Secondaries use the oplog to sync the Primary's data. A heartbeat mechanism detects failures: once the Primary fails, the remaining members (with any arbiter voting) elect a new primary from among the Secondaries.

Primary: the primary node, chosen by election; it handles client writes and produces the oplog.

Secondary: secondary nodes; they handle client reads.

Arbiter: an arbiter node; it only takes part in election voting and will never become Primary or Secondary. In a set with only a Primary and a Secondary, if either node goes down the replica set can no longer elect a Primary and stops serving; adding an Arbiter node means a Primary can still be elected even when one node is down.
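
As a rough sketch (the host and port here are hypothetical, not part of this deployment), an arbiter can be added to an existing replica set from the primary's shell with rs.addArb():

rs.addArb("172.16.245.105:27019")   // hypothetical arbiter address
rs.conf().members                   // the new member should show arbiterOnly: true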

1.1 How MongoDB elections work

MongoDB members come in three types: standard nodes (host), passive nodes (passive), and arbiter nodes (arbiter).

Only standard nodes can be elected as the active node (primary), and they have the right to vote. A passive node holds a complete copy of the data and can vote, but can never become the active node. An arbiter node does not replicate data, can never become the active node, and only votes. In short, only standard nodes can be elected primary; even if all standard nodes in a replica set go down, the passive and arbiter nodes will not become primary.

The difference between standard and passive nodes is the priority value: members with the higher priority are standard nodes, those with the lower priority are passive nodes.

Elections are decided by votes: priority is a weight between 0 and 1000, which effectively adds that many extra votes. The member with the most votes wins; if the votes are tied, the member with the newest data wins.
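
A minimal sketch of how priority shapes an election (the set name and hosts are hypothetical): the priority-2 member is the preferred standard node, the priority-0 member is passive and can never become primary, and the arbiterOnly member only votes:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node1:27017", priority: 2 },          // standard node, preferred primary
    { _id: 1, host: "node2:27017", priority: 1 },          // standard node
    { _id: 2, host: "node3:27017", priority: 0 },          // passive node: full copy, never primary
    { _id: 3, host: "node4:27017", arbiterOnly: true }     // arbiter: votes only, holds no data
  ]
})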

1.2 The replication process

  1. Client data comes in;

  2. The operation is written to the journal buffer;

  3. The data is written to the data buffer;

  4. The operation in the journal buffer is appended to the oplog;

  5. The result of the operation is returned to the client (asynchronously);

  6. A background thread replicates the oplog to the secondaries. This happens at a very high frequency, higher than the journal flush; the secondaries keep listening to the primary and copy new oplog entries as soon as they appear (a quick way to check this from the shell is shown after this list);

  7. A background thread flushes the journal buffer to disk very frequently (every 100 ms by default; the interval can also be set manually, e.g. 30-60 ms);

    A background thread flushes the data buffer to disk, every 60 seconds by default.
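
A quick way to check replication progress from the shell (run on the primary; rs.printSlaveReplicationInfo() is the helper name in the 4.0-era shell):

rs.printReplicationInfo()        // oplog size and the time range it covers on the primary
rs.printSlaveReplicationInfo()   // how far each secondary lags behind the primary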

2. Sharding

Replica sets are mainly used for automatic failover and therefore high availability. As the business grows and time passes, however, the data volume keeps increasing. Today it may be only a few hundred GB and one DB server can handle all the work, but once it grows to several TB or hundreds of TB, a single server can no longer store it. At that point the data must be distributed across different servers, according to some rule, for storage and querying; that is a sharded cluster. What a sharded cluster does is store data in a distributed fashion.

(Figure: MongoDB distributed cluster with replica sets and sharding)

Storage model: the data set is split into chunks, each chunk holds multiple documents, and the chunks are stored across the shards of the cluster.

2.1 Roles

Config server: MongoDB tracks how the chunks are distributed across the shards, i.e. which chunks each shard stores. This shard metadata is kept in the config database on the config servers. Typically three config servers are used, and the config database must be identical on all of them (it is recommended to deploy the config servers on separate machines for stability).

Shard server: stores the sharded data, split into chunks; each chunk is 64 MB by default and is the unit in which data is actually stored.

Mongos server: the entry point for all cluster requests. Every request is coordinated through mongos, which consults the shard metadata to find which chunk holds the data. mongos itself is just a request router; in production there are usually multiple mongos instances serving as entry points, so that if one of them goes down MongoDB requests can still be handled.

Summary: applications send all CRUD requests to mongos; the config servers store the database metadata and keep it in sync with mongos; the data ultimately lives on the shards, and because each shard is a replica set an extra copy is kept to guard against data loss; within each shard's replica set, arbiter nodes only take part in electing the primary.

2.2 Shard keys

Overview: a shard key is a field of the document or a set of fields backed by a compound index, and once chosen it cannot be changed. It is the key criterion by which data is split; with very large data sets the shard key determines where documents land during sharding and directly affects cluster performance.

Note: sharding a collection requires an index that supports the shard key (a sketch follows below).
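
For example, when sharding a collection that already contains data, the supporting index must be created first; a sketch with hypothetical database, collection, and field names:

use mydb
db.orders.createIndex({ customerId: 1 })              // index that supports the intended shard key
sh.enableSharding("mydb")
sh.shardCollection("mydb.orders", { customerId: 1 })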

2.3 Types of shard keys

1. Increasing keys: timestamps, dates, auto-incrementing primary keys, ObjectId, _id, and so on. Writes with such a key are concentrated on a single shard server and are not spread out, putting heavy load on that server, which can become a performance bottleneck, although chunk splitting is easy;

2. Hashed keys: also called hashed indexes; a hashed index field is used as the shard key. The advantage is that data is spread fairly evenly across the nodes and writes are distributed randomly over the shard servers, spreading the write load. Reads are also spread out and may hit more shards; the drawback is that range queries cannot be targeted;

3. Compound keys: when the database has no single suitable field to use as the shard key, or the intended key has too low a cardinality (few possible values, e.g. the day of the week can only take 7 values), another field can be combined with it to form a compound shard key; a redundant field can even be added just to build the combination;

4. Tag keys: data can be kept on designated shard servers by adding tags to the shards and assigning the corresponding tag ranges, e.g. making keys starting with 10... (tag T) live on shard0000 and keys starting with 11... (tag Q) live on shard0001 or shard0002, so the balancer distributes chunks according to the tags (sketches of the four types follow this list);
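
Hedged sketches of the four types, using hypothetical namespaces (sh.addShardTag() and sh.addTagRange() are the shell helpers for tag/zone sharding, and the tagged collection must already be sharded on that key):

sh.shardCollection("demo.events", { createdAt: 1 })       // 1. increasing (ranged) key
sh.shardCollection("demo.users",  { _id: "hashed" })      // 2. hashed key
sh.shardCollection("demo.logs",   { appId: 1, ts: 1 })    // 3. compound key
sh.addShardTag("shard0000", "T")                          // 4. tag key: pin a range of values
sh.addTagRange("demo.docs", { code: "10" }, { code: "11" }, "T")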

Environment

Distributed MongoDB cluster: replica sets + sharding

CentOS Linux release 7.9.2009

MongoDB: 4.0.21

IP               Router port    Config server port    Shard1 port    Shard2 port    Shard3 port
172.16.245.102   27017          27018                 27001          27002          27003
172.16.245.103   27017          27018                 27001          27002          27003
172.16.245.104   27017          27018                 27001          27002          27003

1. Download the package

wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-4.0.21.tgz

After downloading, extract the tarball and copy its contents to /data/mongodb, so that the binaries end up under /data/mongodb/bin (the path used by all of the commands below).

2. Create the directories and files for the router, config, and shard services

Perform the same steps on all three servers

mkdir -p /data/mongodb/conf
mkdir -p /data/mongodb/data/config
mkdir -p /data/mongodb/data/shard1
mkdir -p /data/mongodb/data/shard2
mkdir -p /data/mongodb/data/shard3
mkdir -p /data/mongodb/log/config.log
mkdir -p /data/mongodb/log/mongos.log
mkdir -p /data/mongodb/log/shard1.log
mkdir -p /data/mongodb/log/shard2.log
mkdir -p /data/mongodb/log/shard3.log
touch /data/mongodb/log/config.log/config.log
touch /data/mongodb/log/mongos.log/mongos.log
touch /data/mongodb/log/shard1.log/shard1.log
touch /data/mongodb/log/shard2.log/shard2.log
touch /data/mongodb/log/shard3.log/shard3.log

3. Deploy the config servers

Perform the same steps on all three servers

[root@node5 conf]# vim /data/mongodb/conf/config.conf
[root@node5 conf]# cat /data/mongodb/conf/config.conf
dbpath=/data/mongodb/data/config
logpath=/data/mongodb/log/config.log/config.log
port=27018 # port
logappend=true
fork=true
maxConns=5000
replSet=configs # replica set name
configsvr=true
bind_ip=0.0.0.0

4. Configure the config-server replica set

Start the config service on each of the three servers

[root@node5 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/config.conf

Connect with the mongo shell; this only needs to be done on any one of the machines

[root@node5 conf]# /data/mongodb/bin/mongo --host 172.16.245.102 --port 27018

After connecting, switch to the admin database

 use admin

Initialize the replica set

 rs.initiate({_id:"configs",members:[{_id:0,host:"172.16.245.102:27018"},{_id:1,host:"172.16.245.103:27018"}, {_id:2,host:"172.16.245.104:27018"}]})

Here configs in _id:"configs" is the replica set name from the config.conf file above; this command joins the config services of the three servers (with their respective IPs) into one replica set.

Check the status

configs:PRIMARY> rs.status()
{
"set" : "configs", #副本集名称
"date" : ISODate("2020-12-22T06:39:04.184Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1608619122, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-12-22T05:31:42.975Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1608615092, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 2,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"numCatchUpOps" : NumberLong(0),
"newTermStartDate" : ISODate("2020-12-22T05:31:42.986Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-12-22T05:31:44.134Z")
},
"members" : [
{
"_id" : 0,
"name" : "172.16.245.102:27018", #副本1
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4383,
"optime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-22T06:39:02Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1608615102, 1),
"electionDate" : ISODate("2020-12-22T05:31:42Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "172.16.245.103:27018", #副本2
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4052,
"optime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-22T06:39:02Z"),
"optimeDurableDate" : ISODate("2020-12-22T06:39:02Z"),
"lastHeartbeat" : ISODate("2020-12-22T06:39:02.935Z"),
"lastHeartbeatRecv" : ISODate("2020-12-22T06:39:03.044Z"),
"pingMs" : NumberLong(85),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.245.102:27018",
"syncSourceHost" : "172.16.245.102:27018",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.16.245.104:27018", #副本3
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4052,
"optime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-22T06:39:02Z"),
"optimeDurableDate" : ISODate("2020-12-22T06:39:02Z"),
"lastHeartbeat" : ISODate("2020-12-22T06:39:03.368Z"),
"lastHeartbeatRecv" : ISODate("2020-12-22T06:39:03.046Z"),
"pingMs" : NumberLong(85),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.245.102:27018",
"syncSourceHost" : "172.16.245.102:27018",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1608619142, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608619142, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1608619142, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
configs:PRIMARY>

After a few tens of seconds, run the command above to check the status: the config services on the three machines now form a replica set, with one member as PRIMARY and the other two as SECONDARY.
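
To quickly confirm from the shell which member is currently the primary, rs.isMaster() can also be used; given the status output above it would return something like:

configs:PRIMARY> rs.isMaster().primary
172.16.245.102:27018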

5. Deploy the shard services

Perform the same steps on all three servers

Create shard1.conf, shard2.conf, and shard3.conf in /data/mongodb/conf with the following contents

[root@node3 conf]# ls
config.conf mongos.conf shard1.conf shard2.conf shard3.conf
[root@node3 conf]# cat shard1.conf
dbpath=/data/mongodb/data/shard1
logpath=/data/mongodb/log/shard1.log/shard1.log
port=27001
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard1
bind_ip=0.0.0.0
[root@node3 conf]# cat shard2.conf
dbpath=/data/mongodb/data/shard2
logpath=/data/mongodb/log/shard2.log/shard2.log
port=27002
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard2
bind_ip=0.0.0.0
[root@node3 conf]# cat shard3.conf
dbpath=/data/mongodb/data/shard3
logpath=/data/mongodb/log/shard3.log/shard3.log
port=27003
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard3
bind_ip=0.0.0.0

The ports are 27001, 27002, and 27003, corresponding to shard1.conf, shard2.conf, and shard3.conf respectively.

The same port across the three machines forms the replica set of one shard. Since all three machines need these three files, start the shard services according to these nine configuration files (three per machine).

All three machines must start the shard services: node 1 starts shard1, shard2, and shard3, and nodes 2 and 3 do the same.

[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard1.conf
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard2.conf
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard3.conf

6. Configure the shards as replica sets

Connect with the mongo shell; this only needs to be done on any one of the machines

 mongo --host 172.16.245.103  --port 27001 

Here shard1 is used as the example; for the other two shards, connect to ports 27002 and 27003 respectively and run the corresponding command.

Switch to the admin database

use admin

Initialize the three shard replica sets

rs.initiate({_id:"shard1",members:[{_id:0,host:"172.16.245.102:27001"},{_id:1,host:"172.16.245.103:27001"},{_id:2,host:"172.16.245.104:27001"}]})

rs.initiate({_id:"shard2",members:[{_id:0,host:"172.16.245.102:27002"},{_id:1,host:"172.16.245.103:27002"},{_id:2,host:"172.16.245.104:27002"}]})

rs.initiate({_id:"shard3",members:[{_id:0,host:"172.16.245.102:27003"},{_id:1,host:"172.16.245.103:27003"},{_id:2,host:"172.16.245.104:27003"}]})

7. Deploy the mongos routers

Perform the same steps on all three servers

Create mongos.conf in /data/mongodb/conf with the following contents

[root@node4 conf]# cat mongos.conf
logpath=/data/mongodb/log/mongos.log/mongos.log
logappend = true
port = 27017
fork = true
configdb = configs/172.16.245.102:27018,172.16.245.103:27018,172.16.245.104:27018
maxConns=20000
bind_ip=0.0.0.0

Start mongos

Start it on each of the three servers:

[root@node4 conf]# /data/mongodb/bin/mongos -f /data/mongodb/conf/mongos.conf

8. Enable sharding

Connect with the mongo shell

mongo --host 172.16.245.102 --port 27017

mongos>use admin

Add the shards; this only needs to be executed on one machine

mongos>sh.addShard("shard1/172.16.245.102:27001,172.16.245.103:27001,172.16.245.104:27001")
mongos>sh.addShard("shard2/172.16.245.102:27002,172.16.245.103:27002,172.16.245.104:27002")
mongos>sh.addShard("shard3/172.16.245.102:27003,172.16.245.103:27003,172.16.245.104:27003") mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5fe184bf29ea91799b557a8b")
}
shards:
{ "_id" : "shard1", "host" : "shard1/172.16.245.102:27001,172.16.245.103:27001,172.16.245.104:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/172.16.245.102:27002,172.16.245.103:27002,172.16.245.104:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/172.16.245.102:27003,172.16.245.103:27003,172.16.245.104:27003", "state" : 1 }
active mongoses:
"4.0.21" : 3
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "calon", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("2a4780da-8f33-4214-88f8-c9b1a3140299"), "lastMod" : 1 } }
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
{ "_id" : "test", "primary" : "shard2", "partitioned" : false, "version" : { "uuid" : UUID("d59549a4-3e68-4a7d-baf8-67a4d8372b76"), "lastMod" : 1 } }
{ "_id" : "ycsb", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("6d491868-245e-4c86-a5f5-f8fcd308b45e"), "lastMod" : 1 } }
ycsb.usertable
shard key: { "_id" : "hashed" }
unique: false
balancing: true
chunks:
shard1 2
shard2 2
shard3 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
{ "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
{ "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
{ "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
{ "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(1, 4)
{ "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 5)

9. Implement sharding

Set the shard chunk size

mongos> use config
mongos> db.settings.save({"_id":"chunksize","value":1}) # set the chunk size to 1 MB so the experiment triggers splits without needing a huge amount of data
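
A quick check that the setting was written (note the collection name is settings in the config database); it should return something like:

mongos> db.settings.find({"_id":"chunksize"})
{ "_id" : "chunksize", "value" : 1 }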

10. Enable sharding for a database and test it

mongos> use shardbtest;
switched to db shardbtest
mongos>
mongos>
mongos> sh.enableSharding("shardbtest");
{
"ok" : 1,
"operationTime" : Timestamp(1608620190, 4),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620190, 4),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> sh.shardCollection("shardbtest.usertable",{"_id":"hashed"}); # shard the usertable collection in the shardbtest database on a hashed _id key
{
"collectionsharded" : "shardbtest.usertable",
"collectionUUID" : UUID("2b5a8bcf-6e31-4dac-831f-5fa414253655"),
"ok" : 1,
"operationTime" : Timestamp(1608620216, 36),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620216, 36),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> for(i=1;i<=3000;i++){db.usertable.insert({"id":i})} # insert 3,000 test documents
WriteResult({ "nInserted" : 1 })
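
Besides db.usertable.stats() used in the next step, the shell helper getShardDistribution() prints a compact per-shard breakdown of documents and chunks for the collection:

mongos> db.usertable.getShardDistribution()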

11. Verify the sharding

mongos> db.usertable.stats();
{
"sharded" : true,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"ns" : "shardbtest.usertable",
"count" : 3000, #总3000
"numExtents" : 9,
"size" : 144096,
"storageSize" : 516096,
"totalIndexSize" : 269808,
"indexSizes" : {
"_id_" : 122640,
"_id_hashed" : 147168
},
"avgObjSize" : 48,
"maxSize" : NumberLong(0),
"nindexes" : 2,
"nchunks" : 6,
"shards" : {
"shard3" : {
"ns" : "shardbtest.usertable",
"size" : 48656,
"count" : 1013, #shard3写入1013
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620309, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1608620272, 38),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620309, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620307, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard2" : {
"ns" : "shardbtest.usertable",
"size" : 49232,
"count" : 1025, #shard2写入1025
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620306, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1608620272, 32),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620306, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620307, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard1" : {
"ns" : "shardbtest.usertable",
"size" : 46208,
"count" : 962, #shard1写入962
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620308, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1608620292, 10),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620308, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620307, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
},
"ok" : 1,
"operationTime" : Timestamp(1608620309, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

12. Check that the replica nodes have synced the data

mongos> show dbs
admin 0.000GB
calon 0.078GB
config 0.235GB
shardbtest 0.234GB
test 0.078GB
ycsb 0.234GB
mongos> use shardbtest
switched to db shardbtest
mongos> db.usertable.stats();
{
"sharded" : true,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"ns" : "shardbtest.usertable",
"count" : 3000,
"numExtents" : 9,
"size" : 144096,
"storageSize" : 516096,
"totalIndexSize" : 269808,
"indexSizes" : {
"_id_" : 122640,
"_id_hashed" : 147168
},
"avgObjSize" : 48,
"maxSize" : NumberLong(0),
"nindexes" : 2,
"nchunks" : 6,
"shards" : {
"shard2" : {
"ns" : "shardbtest.usertable",
"size" : 49232,
"count" : 1025,
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620886, 6),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620886, 6),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620888, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620888, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard3" : {
"ns" : "shardbtest.usertable",
"size" : 48656,
"count" : 1013,
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620889, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620889, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620888, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620889, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard1" : {
"ns" : "shardbtest.usertable",
"size" : 46208,
"count" : 962,
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620888, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620888, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620888, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620888, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
},
"ok" : 1,
"operationTime" : Timestamp(1608620889, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620889, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
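
The statistics above are still gathered through mongos. To confirm that the secondaries inside each shard's replica set really hold a copy of the data, a rough check (assuming, for illustration, that 172.16.245.103:27001 is currently a secondary of shard1; adjust to your topology) is to connect to it directly and allow secondary reads:

mongo --host 172.16.245.103 --port 27001
shard1:SECONDARY> rs.slaveOk()
shard1:SECONDARY> use shardbtest
shard1:SECONDARY> db.usertable.count()

The count returned should match the shard1 figure reported by db.usertable.stats() above.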

This completes a highly available MongoDB deployment with replica sets and sharding.
