MongoDB (III): MongoDB Replication
Views: 7,221
Published: 2019-06-29

This article is about 18,494 characters; estimated reading time is 61 minutes.

(I) MongoDB Replication (Replica Sets)

MongoDB replication is the process of synchronizing data across multiple servers.
Replication provides redundant copies of the data on different servers, improving availability and keeping the data safe.
It lets you recover from hardware failures and service interruptions, guarding against data loss and machine damage at any time.
Replication can also increase read capacity: reads and writes can be served from different servers, spreading the load across the whole system.

1. Features of replication:
- Data safety through redundancy
- 7x24 high availability
- Disaster recovery
- Maintenance without downtime
- Distributed reads

2. How replication works
A MongoDB replica set is made up of a group of mongod instances (processes). Replication needs at least two nodes: one primary that handles client requests, while the remaining secondary nodes replicate the primary's data. The primary records its write operations in a log called the oplog; the secondaries regularly poll this log and apply the same operations to their own databases to stay consistent with the primary.

Clients read from the primary; when a client writes to the primary, the primary and the secondaries exchange data so that every copy stays consistent. Since MongoDB 3.2, the official recommendation is to use replica sets (Replica Set) rather than the legacy master-slave mode for high availability.

3. Replica sets (Replica Set)
A replica set contains exactly one primary and one or more secondaries. The primary and secondaries use heartbeats to determine whether each node is healthy and alive. By default all reads and writes go to the primary; read/write splitting needs extra handling, which is discussed at the end. Secondaries copy the primary's data by replaying the oplog (the operation log).
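The append-and-replay idea behind the oplog can be illustrated with a toy shell sketch. This is an assumed simplification, not MongoDB's actual implementation: the "primary" records each applied operation in a log file, and the "secondary" replays that log to converge on the same state.

```shell
# Toy illustration of oplog-style replication (assumed simplification,
# not MongoDB's real internals).
oplog=$(mktemp)        # stands in for local.oplog.rs
primary=$(mktemp)      # stands in for the primary's data files
secondary=$(mktemp)    # stands in for the secondary's data files

apply() { echo "$1" >> "$2"; }   # "applying an operation" = appending a line

# The primary applies each write and records it in the oplog.
for op in "insert:a" "insert:b"; do
    apply "$op" "$primary"
    echo "$op" >> "$oplog"
done

# The secondary polls the oplog and replays every operation in order.
while read -r op; do apply "$op" "$secondary"; done < "$oplog"

cmp -s "$primary" "$secondary" && echo "in sync"
```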

How it works: in a MongoDB replica set, any node can become the primary (Master). If the primary fails, a new primary is elected from the remaining nodes, and the other nodes are directed to connect to it. The process is transparent to the application.

In production a replica set should include at least three nodes: one primary, one secondary, and one arbiter. Each node is a mongod instance, and the nodes check each other's state through heartbeats.

primary node: handles all database reads and writes.
secondary node: replicates the data on the primary; there can be more than one.
arbiter node: when the primary fails, it takes part in electing a new primary from the remaining nodes; it stores no data.
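The three-node minimum follows from election math: a primary can only be elected while a strict majority of voting members is reachable. A quick sketch of that arithmetic:

```shell
# Majority math for replica set elections (plain arithmetic sketch).
members=3                            # primary + secondary + arbiter
majority=$(( members / 2 + 1 ))      # votes needed to elect a primary
tolerated=$(( members - majority ))  # members that may fail while a primary is still electable
echo "members=$members majority=$majority tolerated_failures=$tolerated"
```

With three members, losing any one still leaves a majority of two, which is why the data-free arbiter lets a two-data-node deployment survive a single failure.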

(II) MongoDB environment setup

1. Hosts
192.168.4.203 db1 ## primary node
192.168.4.97 db2 ## secondary node
192.168.4.200 db3 ## arbiter node
2. Installation (omitted); see https://blog.51cto.com/liqingbiao/2401942

3. Primary node setup
3.1 Primary node configuration
[root@otrs ~]# vim /usr/local/mongodb/conf/master.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/db
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal = true
maxConns = 1000
auth = true
storageEngine = wiredTiger
oplogSize = 2048
replSet = RS
keyFile = /usr/local/mongodb/mongodb-keyfile   ##### full path to the cluster keyfile; only meaningful for a Replica Set (not needed when noauth = true)
3.2 Generate a keyfile on this server, set its permissions to 600, and scp it to the other servers.
[root@otrs ~]# openssl rand -base64 745 > /usr/local/mongodb/mongodb-keyfile
[root@otrs ~]# chmod 600 /usr/local/mongodb/mongodb-keyfile
[root@otrs ~]# cat /usr/local/mongodb/mongodb-keyfile
sJy/0RquRGAu7Qk1xT5P7VqDVjHKGdFIu0EQSRa98+pxAEfD43Ix+hrKVhmfk6agX8SAwl/2wkgeMFQBznKQNzE/EBFKos6VJgzi47RkUcXI3XV6igXbJNLzsjYsktkZipKDLtfpQrvse4nZy9PRQusg9HpYLlr3tVKYA9TNmAJtUXA36NDOGBAEbHzfEvvcsh4vmfxFAB+qtMwEer01MC11mKzXGN1nmL9D3hbmqCgC2F8d8RFeqTY5A73b81jTj16wqQw2PuAPHncy6MaQX0ytNO5uWiYDcOxUwOA/LVbTaP8jOHwcEfpl6FY8NT66P2GXINkfKMjaTMIrhXJVgMGkJz0O4aJv8RYZaKCpLmiMpNsyxbMLyngvx5AmDWgPqAHkuQf8O6HcA676hzhBSdDoB8Rr6Yx4NvzQorKq5g/hjmk+9IpDixuI+qjZAwWVuvPceiONigJqwZnryIkvGm3pwl2SmfieKdTRJ5lbpaEz3N5JVgBlM2L6jxj3egnLHn0V+1GH81Iwkw9AXpbn+I9KLrfivI6iuVT6xKu0Zu0ERtUZ442lgIpPIGiiY2HRM3MgyOLU0SWBcI0/t3+N4L2Kxkm0806Nl3/LdtxaPkGTqcSdJl39i96c8qmZThsnUPMQrIA7QHtBhal5e2rRQ7N5gbC+aFXCnEfNqbfPN13ljZfvMj+pzRDwfLutXpMFKSHaAkpF29wYL5nlbnN0CKxKBZDD1gJncR0aYWt2s4z3IP5TOgYER+zVFfhUlS6Y5JsSgM57wrUDkF3VGvkwGQMs+8g5/3WxgEOzwcJV32QO98HLQR5QE0md108KWpy38LZYUgGzADcYepEeqGj/BPspnuQy7n4GzKyWZWK7Q4Sl9TLdVQR8XDUAl8lOtnDkar/qYfEHb/Bt7tZb/ANZQyvpyTvEIHZvyPZ5xzAtoDduV+cQRyx+G1X/smHagm1oyo0HNr25CIaTjk2atQq4USnN2daq5f/OEw==
[root@otrs ~]# scp /usr/local/mongodb/mongodb-keyfile root@192.168.4.97:/usr/local/mongodb/mongodb-keyfile
mongodb-keyfile                                                                               100% 1012     1.0KB/s   00:00
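The keyfile steps above can also be scripted. A minimal sketch (the path here is a temporary example; mongod refuses to start if the keyfile is group- or world-readable, hence the permission check):

```shell
# Generate a keyfile and verify its permissions (sketch; example path).
keyfile=$(mktemp)
openssl rand -base64 745 > "$keyfile"    # same generation command as above
chmod 600 "$keyfile"                     # owner read/write only
# stat -c is GNU/Linux; stat -f %Lp is the BSD/macOS fallback
perms=$(stat -c %a "$keyfile" 2>/dev/null || stat -f %Lp "$keyfile")
echo "keyfile perms: $perms"
```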

3.3 Create the admin user. Because the config file enables authentication, start mongod without authentication first to create the user.

####### Start the database without authentication first
[root@otrs ~]# /usr/local/mongodb/bin/mongod --fork --dbpath=/data/mongodb/db --logpath=/data/mongodb/mongodb.log
about to fork child process, waiting until server is ready for connections.
forked process: 28389
child process started successfully, parent exiting
[root@otrs ~]# netstat -lntp|grep mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 28389/mongod
############ Create the root user and grant it the administrator role
[root@otrs ~]# mongo
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4df82588-c03c-49e5-8839-92dafd7ff8ce") }
MongoDB server version: 4.0.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
Questions? Try the support group
Server has startup warnings:
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten] WARNING: Access control is not enabled for the database.
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]
Read and write access to data and configuration is unrestricted.
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]

show dbs

admin 0.000GB
config 0.000GB
local 0.000GB
use admin
switched to db admin
db.createUser({ user: "root", pwd: "root", roles: [ { role: "root", db: "admin" } ] });
Successfully added user: {
"user" : "root",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}

3.4 Restart mongod with the auth-enabled config file

[root@otrs ~]# mongod --config /usr/local/mongodb/conf/master.conf

about to fork child process, waiting until server is ready for connections.
forked process: 28698
child process started successfully, parent exiting
[root@otrs ~]# ps -ef|grep mongod
root 28698 1 19 16:19 ? 00:00:01 mongod --config /usr/local/mongodb/conf/master.conf
root 28731 28040 0 16:19 pts/0 00:00:00 grep --color=auto mongod

4. Configure the secondary node
4.1 Create the data directory and edit the config file

[root@otrs004097 opt]# mkdir /data/mongodb/standard

[root@otrs004097 opt]# cat /usr/local/mongodb/conf/standard.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/standard
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS

4.2 Start the service

[root@otrs004097 opt]# /usr/local/mongodb/bin/mongod --fork --config /usr/local/mongodb/conf/standard.conf

[root@otrs004097 opt]# netstat -lntp|grep mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 1045/mongod

5. Configure the arbiter node

[root@NginxServer01 mongodb]# cat /usr/local/mongodb/conf/arbiter.conf

bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/arbiter
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS
[root@NginxServer01 mongodb]# /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/conf/arbiter.conf
about to fork child process, waiting until server is ready for connections.
forked process: 26321
child process started successfully, parent exiting
[root@NginxServer01 mongodb]# netstat -lntp|grep mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 26321/mongod

6. On the primary server, initialize the replica set, then add the secondary and arbiter nodes.
6.1 Initialize the primary node

[root@otrs ~]# mongo -uroot -p

MongoDB shell version v4.0.9
Enter password:
######### Check the replica set status: it has not been initialized yet, so initialize it next.
rs.status()
{
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}

 

######### Build the replica set config document

config={"_id":"RS","members":[ {"_id":0,"host":"192.168.4.203:27017"},{"_id":1,"host":"192.168.4.97:27017"}]}
{
"_id" : "RS",
"members" : [
{
"_id" : 0,
"host" : "192.168.4.203:27017"
},
{
"_id" : 1,
"host" : "192.168.4.97:27017"
}
]
}

 

rs.initiate(config); ####### initialize the replica set

{ "ok" : 1 }
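The two interactive steps above (building `config` and calling `rs.initiate`) can also be kept in a script file and replayed non-interactively, e.g. with `mongo admin /tmp/rs-init.js`. A sketch, with an example path and the member hosts from this article:

```shell
# Write the replica-set init commands to a script (sketch; example path).
cat > /tmp/rs-init.js <<'EOF'
config = {"_id": "RS", "members": [
    {"_id": 0, "host": "192.168.4.203:27017"},
    {"_id": 1, "host": "192.168.4.97:27017"}
]};
rs.initiate(config);
EOF
grep -c '"host"' /tmp/rs-init.js   # one line per member
```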

 

6.2 Check the replica set status

RS:SECONDARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2019-06-14T09:01:19.722Z"),
    "myState" : 2,
    "term" : NumberLong(0),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1560502874, 1),
            "t" : NumberLong(-1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1560502874, 1),
            "t" : NumberLong(-1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(0, 0),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.4.203:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2525,
            "optime" : {
                "ts" : Timestamp(1560502874, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-06-14T09:01:14Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.4.97:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 5,
            "optime" : {
                "ts" : Timestamp(1560502874, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1560502874, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-06-14T09:01:14Z"),
            "optimeDurableDate" : ISODate("2019-06-14T09:01:14Z"),
            "lastHeartbeat" : ISODate("2019-06-14T09:01:19.645Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-14T09:01:19.489Z"),
            "pingMs" : NumberLong(1),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}

6.3 Right after initialization both members report themselves as SECONDARY; once the initial sync finishes, one of them is elected PRIMARY. Check again after a couple of minutes:

RS:PRIMARY> rs.status()
{
    "set" : "RS",
    "date" : ISODate("2019-06-14T09:11:17.382Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1560503477, 1),
            "t" : NumberLong(1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1560503427, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.4.203:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3123,
            "optime" : {
                "ts" : Timestamp(1560503477, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-06-14T09:11:17Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1560502885, 1),
            "electionDate" : ISODate("2019-06-14T09:01:25Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.4.97:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 603,
            "optime" : {
                "ts" : Timestamp(1560503467, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1560503467, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2019-06-14T09:11:07Z"),
            "optimeDurableDate" : ISODate("2019-06-14T09:11:07Z"),
            "lastHeartbeat" : ISODate("2019-06-14T09:11:15.995Z"),
            "lastHeartbeatRecv" : ISODate("2019-06-14T09:11:16.418Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.4.203:27017",
            "syncSourceHost" : "192.168.4.203:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1560503477, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560503477, 1),
        "signature" : {
            "hash" : BinData(0,"iR63R/X7QanbrWvuDJNkpdPgcVY="),
            "keyId" : NumberLong("6702308864978583553")
        }
    }
}

6.4 Watch the sync log on the secondary server

[root@otrs004097 opt]# tail -f /data/mongodb/mongodb.log
2019-06-14T17:01:15.870+0800 I REPL     [replexec-0] This node is 192.168.4.97:27017 in the config
2019-06-14T17:01:15.870+0800 I REPL     [replexec-0] transition to STARTUP2 from STARTUP
2019-06-14T17:01:15.871+0800 I REPL     [replexec-0] Starting replication storage threads
2019-06-14T17:01:15.872+0800 I REPL     [replexec-2] Member 192.168.4.203:27017 is now in state SECONDARY
2019-06-14T17:01:15.872+0800 I STORAGE  [replexec-0] createCollection: local.temp_oplog_buffer with generated UUID: 2e5c6683-a67b-4a16-bd9b-8672ee4db900
2019-06-14T17:01:15.880+0800 I REPL     [replication-0] Starting initial sync (attempt 1 of 10)
2019-06-14T17:01:15.881+0800 I STORAGE  [replication-0] Finishing collection drop for local.temp_oplog_buffer (2e5c6683-a67b-4a16-bd9b-8672ee4db900).
2019-06-14T17:01:15.882+0800 I STORAGE  [replication-0] createCollection: local.temp_oplog_buffer with generated UUID: c25fa3cf-cae9-430b-b514-f13c3ab1e247
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] sync source candidate: 192.168.4.203:27017
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] Initial syncer oplog truncation finished in: 0ms
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] ******
2019-06-14T17:01:15.889+0800 I REPL     [replication-0] creating replication oplog of size: 2048MB...
2019-06-14T17:01:15.889+0800 I STORAGE  [replication-0] createCollection: local.oplog.rs with generated UUID: f35caf0a-f2da-4cf7-b16b-34a94f1e25a7
2019-06-14T17:01:15.892+0800 I STORAGE  [replication-0] Starting OplogTruncaterThread local.oplog.rs
2019-06-14T17:01:15.892+0800 I STORAGE  [replication-0] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2019-06-14T17:01:15.892+0800 I STORAGE  [replication-0] Scanning the oplog to determine where to place markers for truncation
2019-06-14T17:01:15.905+0800 I REPL     [replication-0] ******
2019-06-14T17:01:15.905+0800 I STORAGE  [replication-0] dropAllDatabasesExceptLocal 1
6.5 Add the arbiter node
RS:PRIMARY> rs.addArb("192.168.4.45:27017")
{
    "ok" : 1,
    "operationTime" : Timestamp(1560504144, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1560504144, 1),
        "signature" : {
            "hash" : BinData(0,"um2WmD60Gh9q/43qUff8yN2abIw="),
            "keyId" : NumberLong("6702308864978583553")
        }
    }
}
 

########## Check the replica set status

RS:PRIMARY> rs.status()
{
"set" : "RS",
"date" : ISODate("2019-06-14T09:40:04.334Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1560505167, 1),
"members" : [
{
"_id" : 0,
"name" : "192.168.4.203:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4850,
"optime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-06-14T09:39:57Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1560502885, 1),
"electionDate" : ISODate("2019-06-14T09:01:25Z"),
"configVersion" : 2,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.4.97:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2330,
"optime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-06-14T09:39:57Z"),
"optimeDurableDate" : ISODate("2019-06-14T09:39:57Z"),
"lastHeartbeat" : ISODate("2019-06-14T09:40:02.717Z"),
"lastHeartbeatRecv" : ISODate("2019-06-14T09:40:02.767Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.4.203:27017",
"syncSourceHost" : "192.168.4.203:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 2
},
{
"_id" : 2,
"name" : "192.168.4.45:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 11,
"lastHeartbeat" : ISODate("2019-06-14T09:40:02.594Z"),
"lastHeartbeatRecv" : ISODate("2019-06-14T09:40:04.186Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 2
}
],
"ok" : 1,
"operationTime" : Timestamp(1560505197, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1560505197, 1),
"signature" : {
"hash" : BinData(0,"Ff00RyXUvxDPc5nzFQYXGZIlnBc="),
"keyId" : NumberLong("6702308864978583553")
}
}
}

Notes:
- "health" : 1 means the member is healthy
- "stateStr" : "PRIMARY" is the primary
- "stateStr" : "SECONDARY" is a secondary
- "stateStr" : "ARBITER" is the arbiter
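The numeric `state` field in each member corresponds to `stateStr`. A small helper for the codes seen in this deployment (a sketch covering only these three states plus a fallback):

```shell
# Map rs.status() member "state" codes to names (sketch; partial mapping).
state_name() {
    case "$1" in
        1) echo PRIMARY ;;
        2) echo SECONDARY ;;
        7) echo ARBITER ;;
        *) echo UNKNOWN ;;
    esac
}
state_name 1   # PRIMARY
state_name 7   # ARBITER
```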

7. Observe how data changes on the primary, secondary, and arbiter nodes

7.1 On the primary, create a collection and insert data.
RS:PRIMARY> use lqb
switched to db lqb
RS:PRIMARY> db.object.insert([{"language":"C"},{"language":"C++"}])
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 2,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
RS:PRIMARY> db.object.find()
{ "_id" : ObjectId("5d0370424e7535b767bb7098"), "language" : "C" }
{ "_id" : ObjectId("5d0370424e7535b767bb7099"), "language" : "C++" }

7.2 On the secondary, verify that the data has been replicated.

RS:SECONDARY> use lqb;
switched to db lqb
RS:SECONDARY> rs.slaveOk();
RS:SECONDARY> db.object.find()
{ "_id" : ObjectId("5d0370424e7535b767bb7099"), "language" : "C++" }
{ "_id" : ObjectId("5d0370424e7535b767bb7098"), "language" : "C" }

7.3 The arbiter node: the arbiter stores no data.

RS:ARBITER> use lqb;
switched to db lqb
RS:ARBITER> show tables
Warning: unable to run listCollections, attempting to approximate collection names by parsing connectionStatus
RS:ARBITER> db.object.find()
Error: error: {
"ok" : 0,
"errmsg" : "not authorized on lqb to execute command { find: \"object\", filter: {}, lsid: { id: UUID(\"d2d7e624-8f30-468a-a3b0-79728b0cabbd\") }, $readPreference: { mode: \"secondaryPreferred\" }, $db: \"lqb\" }",
"code" : 13,
"codeName" : "Unauthorized"
}

8. Conclusions
Conclusion 1: when the primary goes down, a secondary is promoted to primary; when the old primary comes back, it rejoins as a secondary and the current primary stays primary.
Conclusion 2: the primary accepts writes; secondaries do not.
Conclusion 3: once the primary and secondary swap roles, reads and writes follow the new roles.
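For the read/write splitting mentioned at the start, applications normally do not call `rs.slaveOk()` by hand; instead they connect with a replica-set connection string and a read preference. A sketch using the hosts and database from this article (the URI format is standard; whether secondary reads fit your consistency needs is a separate decision):

```shell
# Replica-set connection URI with a read preference (sketch; hosts/db from this article).
uri="mongodb://192.168.4.203:27017,192.168.4.97:27017/lqb?replicaSet=RS&readPreference=secondaryPreferred"
echo "$uri"
```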

Reposted from: https://blog.51cto.com/liqingbiao/2409237
