
A Robust MongoDB Cluster: Using Replica Sets as Shards

1.    MongoDB Sharding + Replica Sets

A robust cluster design

Multiple config servers; multiple mongos servers; every shard a replica set; the write concern w set correctly (see the sketch after the notes below).

Architecture diagram (image not reproduced here)

Notes:

1.   This lab runs on a single machine, starting multiple mongod instances with different ports and dbpath directories

2.   There are 9 mongod instances in total, grouped into three replica sets, shard1, shard2, and shard3, each with 1 primary and 2 secondaries

3.   The number of mongos processes is not limited. It is recommended to run a mongos on each application server, so that each application server talks to its own local mongos; if one server fails, the other application servers are unaffected and keep talking to their own mongos instances

4.   This lab simulates 2 application servers (2 mongos services)

5.   In production, every shard should be a replica set, so that the failure of a single server does not take the whole shard down
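On "set w correctly": because every shard is a replica set, the write concern w controls how many replica set members must acknowledge a write before it is reported successful. A minimal sketch in the mongo shell (the collection and values are illustrative, reusing the test collection from later in this lab):

#Require acknowledgement from a majority of the shard's replica set members,
#returning a timeout error if that takes longer than 5 seconds
mongos> db.yujx.insert({"id" : 1, "a" : 123456789}, {writeConcern : {w : "majority", wtimeout : 5000}})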


Deployment environment

Create the required directories

~]# mkdir -p /data/mongodb/shard{1,2,3}/node{1,2,3}

~]# mkdir -p /data/mongodb/shard{1,2,3}/logs

~]# ls /data/mongodb/shard*

/data/mongodb/shard1:

logs  node1  node2  node3

/data/mongodb/shard2:

logs  node1  node2  node3

/data/mongodb/shard3:

logs  node1  node2  node3

~]# mkdir -p /data/mongodb/config/logs

~]# mkdir -p /data/mongodb/config/node{1,2,3}

~]# ls /data/mongodb/config/

logs  node1  node2  node3

~]# mkdir -p /data/mongodb/mongos/logs

Start the config servers

Config server plan:

dbpath                          logpath                                  port
/data/mongodb/config/node1      /data/mongodb/config/logs/node1.log      10000
/data/mongodb/config/node2      /data/mongodb/config/logs/node2.log      20000
/data/mongodb/config/node3      /data/mongodb/config/logs/node3.log      30000

#Start the 3 config servers as planned: the same as starting a single config server, just repeated 3 times

~]# mongod --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000

~]# mongod --dbpath /data/mongodb/config/node2 --logpath /data/mongodb/config/logs/node2.log --logappend --fork --port 20000

~]# mongod --dbpath /data/mongodb/config/node3 --logpath /data/mongodb/config/logs/node3.log --logappend --fork --port 30000

~]# ps -ef|grep mongod|grep -v grep

root      3983     1  0 11:10 ?        00:00:03 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000

root      4063     1  0 11:13 ?        00:00:02 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node2 --logpath /data/mongodb/config/logs/node2.log --logappend --fork --port 20000

root      4182     1  0 11:17 ?        00:00:03 /usr/local/mongodb/bin/mongod --dbpath /data/mongodb/config/node3 --logpath /data/mongodb/config/logs/node3.log --logappend --fork --port 30000
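Note: the commands above run plain mongod instances as config servers. In a real deployment you would normally also pass the --configsvr option, which declares the instance a config server (and changes its default port to 27019 unless --port is given, as here). A sketch of the node1 command with that option:

~]# mongod --configsvr --dbpath /data/mongodb/config/node1 --logpath /data/mongodb/config/logs/node1.log --logappend --fork --port 10000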

Start the routing (mongos) services

Mongos server plan (a mongos stores no data, so it has no dbpath; log names match the commands below):

logpath                                    port
/data/mongodb/mongos/logs/mongos1.log      40000
/data/mongodb/mongos/logs/mongos2.log      50000

#The number of mongos instances is not limited; typically each application server runs one mongos

~]#mongos --port 40000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos1.log  --logappend --fork

~]#mongos --port 50000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos2.log  --logappend --fork

~]# ps -ef|grep mongos|grep -v grep

root      4421     1  0 11:29 ?        00:00:00 /usr/local/mongodb/bin/mongos --port 40000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos1.log --logappend --fork

root      4485     1  0 11:29 ?        00:00:00 /usr/local/mongodb/bin/mongos --port 50000 --configdb 192.168.211.217:10000,192.168.211.217:20000,192.168.211.217:30000 --logpath /data/mongodb/mongos/logs/mongos2.log --logappend  --fork
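A quick way to verify that each mongos can reach the config servers is to connect to it and print the sharding status; at this point no shards have been added yet, so the shards list is empty:

~]# /usr/local/mongodb/bin/mongo --port 40000
mongos> sh.status()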

Configure the replica sets

As planned, configure and start the three replica sets shard1, shard2, and shard3

#shard1 is used below to illustrate the procedure

#Start three mongod processes

~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node1 --logpath /data/mongodb/shard1/logs/node1.log --logappend --fork --port 10001

~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node2 --logpath /data/mongodb/shard1/logs/node2.log --logappend --fork --port 10002

~]#mongod --replSet shard1 --dbpath /data/mongodb/shard1/node3 --logpath /data/mongodb/shard1/logs/node3.log --logappend --fork --port 10003

#Initialize the replica set: shard1

~]# /usr/local/mongodb/bin/mongo --port 10001

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:10001/test

> use admin

switched to db admin

> rsconf={

...   "_id" : "shard1",

...   "members" : [

...       {

...           "_id" : 0,

...           "host" : "192.168.211.217:10001"

...       }

...   ]

... }

{

        "_id" : "shard1",

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:10001"

                }

        ]

}

>  rs.initiate(rsconf)

{

        "info" : "Config now saved locally.  Should come online in about a minute.",

        "ok" : 1

}

> rs.add("192.168.211.217:10002")

{ "ok" : 1 }

shard1:PRIMARY> rs.add("192.168.211.217:10003")

{ "ok" : 1 }

shard1:PRIMARY>  rs.conf()

{

        "_id" : "shard1",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:10001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:10002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:10003"

                }

        ]

}
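To confirm that all three members are online, rs.status() reports each member's state; a quick one-liner (once the secondaries have finished their initial sync, the expected states are one PRIMARY and two SECONDARYs):

shard1:PRIMARY> rs.status().members.forEach(function(m){ print(m.name + "  " + m.stateStr) })
192.168.211.217:10001  PRIMARY
192.168.211.217:10002  SECONDARY
192.168.211.217:10003  SECONDARY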

Configure shard2 and shard3 as replica sets in the same way as shard1; a one-step initiation sketch for shard2 is shown below.
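For reference, shard2 can be initiated in a single step by passing the full member list to rs.initiate(); shard3 is analogous on ports 30001-30003 (a sketch; the three mongod processes must already be running with --replSet shard2):

~]# /usr/local/mongodb/bin/mongo --port 20001
> rs.initiate({
...     "_id" : "shard2",
...     "members" : [
...         {"_id" : 0, "host" : "192.168.211.217:20001"},
...         {"_id" : 1, "host" : "192.168.211.217:20002"},
...         {"_id" : 2, "host" : "192.168.211.217:20003"}
...     ]
... })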

#The final replica set configurations are as follows (shard3, shard2, then shard1):

shard3:PRIMARY> rs.conf()

{

        "_id" : "shard3",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:30001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:30002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:30003"

                }

        ]

}

~]# /usr/local/mongodb/bin/mongo --port 20001

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:20001/test

shard2:PRIMARY> rs.conf()

{

        "_id" : "shard2",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:20001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:20002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:20003"

                }

        ]

}

~]# /usr/local/mongodb/bin/mongo --port 10001

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:10001/test

shard1:PRIMARY> rs.conf()

{

        "_id" : "shard1",

        "version" : 3,

        "members" : [

                {

                        "_id" : 0,

                        "host" : "192.168.211.217:10001"

                },

                {

                        "_id" : 1,

                        "host" : "192.168.211.217:10002"

                },

                {

                        "_id" : 2,

                        "host" : "192.168.211.217:10003"

                }

        ]

}

The mongo-related processes and their listening ports at this point (listing not reproduced here) correspond exactly to the environment plan above.

Add the (replica set) shards

#Connect to mongos and switch to the admin database (you must connect to a router node here)

~]# /usr/local/mongodb/bin/mongo --port 40000

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:40000/test

mongos> use admin

switched to db admin

mongos> db

admin

mongos> db.runCommand({"addShard":"shard1/192.168.211.217:10001"})

{ "shardAdded" : "shard1", "ok" : 1 }

mongos> db.runCommand({"addShard":"shard2/192.168.211.217:20001"})

{ "shardAdded" : "shard2", "ok" : 1 }

mongos> db.runCommand({"addShard":"shard3/192.168.211.217:30001"})

{ "shardAdded" : "shard3", "ok" : 1 }

mongos> db.runCommand({listshards:1})

{

        "shards" : [

                {

                        "_id" : "shard1",

                        "host" : "shard1/192.168.211.217:10001,192.168.211.217:10002,192.168.211.217:10003"

                },

                {

                        "_id" : "shard2",

                        "host" : "shard2/192.168.211.217:20001,192.168.211.217:20002,192.168.211.217:20003"

                },

                {

                        "_id" : "shard3",

                        "host" : "shard3/192.168.211.217:30001,192.168.211.217:30002,192.168.211.217:30003"

                }

        ],

        "ok" : 1

}
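Note that each addShard command above named only one member of its replica set; mongos contacted that member, discovered the remaining members, and stored the full seed list, which is why the host fields in the listshards output show all three nodes of each shard.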

Enable sharding for databases and collections

To enable sharding on a database, the command is:

> db.runCommand( { enablesharding : "<dbname>" } );

Running this command allows the database's data to span shards; if this step is skipped, the database is stored on a single shard only.

Once database sharding is enabled, different collections in the database may be stored on different shards,

but each individual collection still resides entirely on one shard; to make a single collection shard as well, an additional per-collection operation is needed.

#For example: enable sharding on the test database; connect to a mongos process

~]# /usr/local/mongodb/bin/mongo --port 50000

MongoDB shell version: 2.6.6

connecting to: 127.0.0.1:50000/test

mongos> use admin

switched to db admin

mongos> db.runCommand({"enablesharding":"test"})

{ "ok" : 1 }

To shard an individual collection, you must designate a shard key for it, using the following command:

> db.runCommand( { shardcollection : "<namespace>", key : <shardKeyPattern> });

Note:  a. The system automatically creates an index on the shard key of a sharded collection (the user may also create it in advance, as sketched below)

         b. A sharded collection may only have a unique index on the shard key; other unique indexes are not allowed
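As an illustration of note (a), the shard-key index can be created before the collection is sharded. A hypothetical example (the collection test.mycoll and key field uid are made up for illustration; they are not part of this lab):

mongos> use test
mongos> db.mycoll.ensureIndex({"uid" : 1})
mongos> use admin
mongos> db.runCommand({"shardcollection" : "test.mycoll", "key" : {"uid" : 1}})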

#Shard the collection test.yujx

mongos> use admin

switched to db admin

mongos> db.runCommand({"shardcollection":"test.yujx","key":{"_id":1}})

{ "collectionsharded" : "test.yujx", "ok" : 1 }

Generate test data

mongos> use test

#Generate test data; this is only to illustrate the behavior, it is not necessary to generate this many rows

mongos> for(var i=1;i<=888888;i++) db.yujx.save({"id":i,"a":123456789,"b":888888888,"c":100000000})

mongos> db.yujx.count()

271814             #the number of test documents used in this lab

View the collection's sharding

mongos> db.yujx.stats()

{

        "sharded" : true,

        "systemFlags" : 1,

        "userFlags" : 1,

        "ns" : "test.yujx",

        "count" : 271814,

        "numExtents" : 19,

        "size" : 30443168,

        "storageSize" : 51773440,

        "totalIndexSize" : 8862784,

        "indexSizes" : {

                "_id_" : 8862784

        },

        "avgObjSize" : 112,

        "nindexes" : 1,

        "nchunks" : 4,

        "shards" : {

                "shard1" : {

                        "ns" : "test.yujx",

                        "count" : 85563,

                        "size" : 9583056,

                        "avgObjSize" : 112,

                        "storageSize" : 11182080,

                        "numExtents" : 6,

                        "nindexes" : 1,

                        "lastExtentSize" : 8388608,

                        "paddingFactor" : 1,

                        "systemFlags" : 1,

                        "userFlags" : 1,

                        "totalIndexSize" : 2796192,

                        "indexSizes" : {

                                "_id_" : 2796192

                        },

                        "ok" : 1

                },

                "shard2" : {

                        "ns" : "test.yujx",

                        "count" : 180298,

                        "size" : 20193376,

                        "avgObjSize" : 112,

                        "storageSize" : 37797888,

                        "numExtents" : 8,

                        "nindexes" : 1,

                        "lastExtentSize" : 15290368,

                        "paddingFactor" : 1,

                        "systemFlags" : 1,

                        "userFlags" : 1,

                        "totalIndexSize" : 5862192,

                        "indexSizes" : {

                                "_id_" : 5862192

                        },

                        "ok" : 1

                },

                "shard3" : {

                        "ns" : "test.yujx",

                        "count" : 5953,

                        "size" : 666736,

                        "avgObjSize" : 112,

                        "storageSize" : 2793472,

                        "numExtents" : 5,

                        "nindexes" : 1,

                        "lastExtentSize" : 2097152,

                        "paddingFactor" : 1,

                        "systemFlags" : 1,

                        "userFlags" : 1,

                        "totalIndexSize" : 204400,

                        "indexSizes" : {

                                "_id_" : 204400

                        },

                        "ok" : 1

                }
        },
        "ok" : 1

}

#Now connect to each shard's primary and query the number of records it holds itself:
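A minimal sketch of this check against shard1's primary (shard2 and shard3 are checked the same way on ports 20001 and 30001):

#expect 85563 on shard1, per the collection stats above
~]# /usr/local/mongodb/bin/mongo --port 10001
shard1:PRIMARY> db.yujx.count()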


The counts can be seen to match the per-shard figures in the collection sharding stats above.

A further tip: secondary nodes cannot serve queries by default; you must first run rs.slaveOk() on the connection, for example:
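#On one of shard1's secondaries (a sketch):
~]# /usr/local/mongodb/bin/mongo --port 10002
shard1:SECONDARY> rs.slaveOk()
shard1:SECONDARY> db.yujx.count()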

View the database's sharding

mongos> db.printShardingStatus()
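#The output is omitted here; sh.status() prints an equivalent summary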


#Or connect through mongos and query the config database directly

mongos> use config
switched to db config

mongos> db.shards.find()

{ "_id" : "shard1", "host" : "shard1/192.168.211.217:10001,192.168.211.217:10002,192.168.211.217:10003" }

{ "_id" : "shard2", "host" : "shard2/192.168.211.217:20001,192.168.211.217:20002,192.168.211.217:20003" }

{ "_id" : "shard3", "host" : "shard3/192.168.211.217:30001,192.168.211.217:30002,192.168.211.217:30003" }

mongos> db.databases.find()

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }

{ "_id" : "test", "partitioned" : true, "primary" : "shard3" }

mongos> db.chunks.find()

{ "_id" : "test.yujx-_id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("54b8b58ddb2797bbb973a718") }, "shard" : "shard1" }

{ "_id" : "test.yujx-_id_ObjectId('54b8b58ddb2797bbb973a718')", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b58ddb2797bbb973a718") }, "max" : { "_id" : ObjectId("54b8b599db2797bbb973be59") }, "shard" : "shard3" }

{ "_id" : "test.yujx-_id_ObjectId('54b8b599db2797bbb973be59')", "lastmod" : Timestamp(4, 1), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b599db2797bbb973be59") }, "max" : { "_id" : ObjectId("54b8b6cfdb2797bbb9767ea2") }, "shard" : "shard2" }

{ "_id" : "test.yujx-_id_ObjectId('54b8b6cfdb2797bbb9767ea2')", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("54b8b475a13b3af589cffc62"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("54b8b6cfdb2797bbb9767ea2") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard1" }

Single-point-of-failure analysis

Since this lab was done as an introduction to MongoDB and simulating failures is time-consuming, the failure scenarios are not enumerated one by one here. For an analysis of failure scenarios, see:
http://blog.itpub.net/27000195/viewspace-1404402/

posted on 2015-12-18 14:03 by paulwong (category: MONGODB)

