
Common ways to keep EOS smart contract funds safe

Method 1: Multi-signature control of the contract account

The EOS permission system constrains a contract account through weights and thresholds, so several people can manage the account's funds together. Suppose the contract account's permission lists the public keys of five people, each with a weight of 1, and the permission's threshold is 3: then at least 3 of the 5 must sign before funds can be transferred or the contract code can be modified.
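
As a rough illustration (the account name and keys below are placeholders, and exact cleos syntax can vary slightly between versions), such a 3-of-5 authority can be installed with cleos set account permission:

cleos set account permission mycontract111 active \
'{"threshold": 3,
  "keys": [
    {"key": "EOS_PUBKEY_1", "weight": 1},
    {"key": "EOS_PUBKEY_2", "weight": 1},
    {"key": "EOS_PUBKEY_3", "weight": 1},
    {"key": "EOS_PUBKEY_4", "weight": 1},
    {"key": "EOS_PUBKEY_5", "weight": 1}
  ]}' owner -p mycontract111@owner
# apply the same authority to the owner permission as well;
# real public keys must be listed in sorted order inside "keys"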

Method 2: Hand the contract permissions over

This approach is also fairly safe: change the contract account's permissions to the eosio.prods account, handing control of the contract directly to the 21 elected block producers. Any future fund transfer or contract change then has to go through producer arbitration and approval.
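
A sketch of what this looks like with cleos (the account name is a placeholder; verify the syntax against your cleos version):

# change active first, then owner; once owner points to eosio.prods,
# your own keys can no longer modify the account
cleos set account permission mycontract111 active \
'{"threshold": 1, "keys": [], "accounts": [{"permission": {"actor": "eosio.prods", "permission": "active"}, "weight": 1}]}' \
owner -p mycontract111@owner
cleos set account permission mycontract111 owner \
'{"threshold": 1, "keys": [], "accounts": [{"permission": {"actor": "eosio.prods", "permission": "active"}, "weight": 1}]}' \
'' -p mycontract111@owner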

Method 3: Set the contract to the black-hole public key

This approach is the closest to the spirit of the blockchain: change the contract account's permissions to the black-hole public key EOS1111111111111111111111111111111114T1Anm, which was used temporarily during the block producer election. It is generated from an all-zero value plus a checksum, so nobody knows its private key, and afterwards the contract's funds and code can never be moved or modified again.
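
Again as a sketch (placeholder account name; verify against your cleos version), pointing both permissions at the null key locks the account forever:

# change active first, then owner; after this nobody can ever sign for the account again
cleos set account permission mycontract111 active \
'{"threshold": 1, "keys": [{"key": "EOS1111111111111111111111111111111114T1Anm", "weight": 1}]}' \
owner -p mycontract111@owner
cleos set account permission mycontract111 owner \
'{"threshold": 1, "keys": [{"key": "EOS1111111111111111111111111111111114T1Anm", "weight": 1}]}' \
'' -p mycontract111@owner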

Building a sharded MongoDB cluster for EOS

Once EOS block data is synced into MongoDB, the dataset becomes very large and queries slow down; on a modestly specced machine, running nodeos while writing into MongoDB at the same time is hard to keep up with. So we deploy a MongoDB cluster and configure sharding.


1. Install MongoDB

Three cloud hosts, each with 1 TB of storage, are prepared. They sit on the same internal network, with IPs and roles assigned as follows:

172.31.32.31          172.31.32.29          172.31.32.30
mongos                mongos                mongos
config server         config server         config server
shard1 primary        shard1 secondary      shard1 arbiter
shard2 arbiter        shard2 primary        shard2 secondary
shard3 secondary      shard3 arbiter        shard3 primary

Port assignment:

Service  Port
mongos 20000
config 21000
shard1 27001
shard2 27002
shard3 27003

On each machine create six directories: conf, mongos, config, shard1, shard2, shard3.
Since mongos stores no data, it only needs a log directory. /mnt/data is the machine's mounted 1 TB data disk.

mkdir -p /mnt/data/mongodb/conf
mkdir -p /mnt/data/mongodb/mongos/log
mkdir -p /mnt/data/mongodb/config/data
mkdir -p /mnt/data/mongodb/config/log
mkdir -p /mnt/data/mongodb/shard1/data
mkdir -p /mnt/data/mongodb/shard1/log
mkdir -p /mnt/data/mongodb/shard2/data
mkdir -p /mnt/data/mongodb/shard2/log
mkdir -p /mnt/data/mongodb/shard3/data
mkdir -p /mnt/data/mongodb/shard3/log

Install mongodb-server:

sudo apt install mongodb-server

Configure the environment variables (this assumes MongoDB was unpacked into ~/opt/mongodb from a tarball; if you installed it from the apt repository, the binaries are already on PATH and this step can be skipped):

vi /etc/profile
# MongoDB environment variables
export MONGODB_HOME=~/opt/mongodb
export PATH=$MONGODB_HOME/bin:$PATH

Apply the changes immediately:

source /etc/profile

2. Config servers

MongoDB 3.4 and later require the config servers to run as a replica set as well, otherwise the cluster cannot be built.
(On all three machines) create the configuration file: vi /mnt/data/mongodb/conf/config.conf

## config server configuration
pidfilepath = /mnt/data/mongodb/config/log/configsrv.pid
dbpath = /mnt/data/mongodb/config/data
logpath = /mnt/data/mongodb/config/log/configsrv.log
logappend = true

bind_ip = 0.0.0.0
port = 21000
fork = true

#declare this is a config db of a cluster;
configsvr = true

#Replica set name
replSet = configs

#Set the maximum number of connections
maxConns = 20000

Start the config server on all three machines:

mongod -f /mnt/data/mongodb/conf/config.conf

Log in to any one of the config servers and initialize the config replica set.

Connect to MongoDB:

mongo --port 21000

Define the config variable:

config = {
    _id : "configs",
    members : [
    {_id : 0, host : "172.31.32.31:21000" },
    {_id : 1, host : "172.31.32.29:21000" },
    {_id : 2, host : "172.31.32.30:21000" }
    ]
}

Initialize the replica set:

rs.initiate(config)

Here "_id" : "configs" must match the replSet name set in the configuration file, and the "host" entries under "members" are the IPs and ports of the three nodes.
The response looks like this:

> config = {
...     _id : "configs",
...     members : [
...     {_id : 0, host : "172.31.32.31:21000" },
...     {_id : 1, host : "172.31.32.29:21000" },
...     {_id : 2, host : "172.31.32.30:21000" }
...     ]
... }
{
    "_id" : "configs",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.31.32.31:21000"
        },
        {
            "_id" : 1,
            "host" : "172.31.32.29:21000"
        },
        {
            "_id" : 2,
            "host" : "172.31.32.30:21000"
        }
    ]
}
> rs.initiate(config)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535116186, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(1535116186, 1),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535116186, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

You will notice that the shell prompt has changed.

// from a bare
>
// to
configs:SECONDARY>

Check the status:

configs:SECONDARY> rs.status()

The output looks like this:

configs:SECONDARY> rs.status()
{
    "set" : "configs",
    "date" : ISODate("2018-08-24T13:10:23.181Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535116209, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1535116209, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535116209, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535116209, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.31.32.31:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 37,
            "optime" : {
                "ts" : Timestamp(1535116209, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535116209, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-24T13:10:09Z"),
            "optimeDurableDate" : ISODate("2018-08-24T13:10:09Z"),
            "lastHeartbeat" : ISODate("2018-08-24T13:10:22.372Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-24T13:10:21.182Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.31.32.30:21000",
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "172.31.32.29:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 37,
            "optime" : {
                "ts" : Timestamp(1535116209, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535116209, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-24T13:10:09Z"),
            "optimeDurableDate" : ISODate("2018-08-24T13:10:09Z"),
            "lastHeartbeat" : ISODate("2018-08-24T13:10:22.372Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-24T13:10:22.128Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.31.32.30:21000",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "172.31.32.30:21000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 195,
            "optime" : {
                "ts" : Timestamp(1535116209, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-24T13:10:09Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535116196, 1),
            "electionDate" : ISODate("2018-08-24T13:09:56Z"),
            "configVersion" : 1,
            "self" : true
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535116209, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(1535116186, 1),
        "electionId" : ObjectId("7fffffff0000000000000001")
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535116209, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
configs:PRIMARY> 

3. Configure the shard replica sets

3.1 Set up the first shard replica set

(On all three machines) create the configuration file: vi /mnt/data/mongodb/conf/shard1.conf

pidfilepath = /mnt/data/mongodb/shard1/log/shard1.pid
dbpath = /mnt/data/mongodb/shard1/data
logpath = /mnt/data/mongodb/shard1/log/shard1.log
logappend = true

bind_ip = 0.0.0.0
port = 27001
fork = true
replSet = shard1

#declare this is a shard db of a cluster;
shardsvr = true
maxConns = 20000

Start the shard1 server on all three machines:

mongod -f /mnt/data/mongodb/conf/shard1.conf

Log in to any one server except 172.31.32.30 (the arbiter for shard1) and initialize the replica set.
Connect to MongoDB:

mongo --port 27001

Switch to the admin database:

use admin

Define the replica set configuration:

config = {
    _id : "shard1",
     members : [
         {_id : 0, host : "172.31.32.31:27001" },
         {_id : 1, host : "172.31.32.29:27001" },
         {_id : 2, host : "172.31.32.30:27001" , arbiterOnly: true }
     ]
 }

Initialize the replica set:

rs.initiate(config)

The response looks like this:

> use admin
switched to db admin
> config = {
...     _id : "shard1",
...      members : [
...          {_id : 0, host : "172.31.32.31:27001" },
...          {_id : 1, host : "172.31.32.29:27001" },
...          {_id : 2, host : "172.31.32.30:27001" , arbiterOnly: true }
...      ]
...  }
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.31.32.31:27001"
                },
                {
                        "_id" : 1,
                        "host" : "172.31.32.29:27001"
                },
                {
                        "_id" : 2,
                        "host" : "172.31.32.30:27001",
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

You will notice that the shell prompt has changed.

// from a bare
>
// to
shard1:SECONDARY>

Check the status:

shard1:SECONDARY> rs.status()

The response looks like this:

shard1:SECONDARY> rs.status()
{
        "set" : "shard1",
        "date" : ISODate("2018-08-24T13:24:09.626Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1535117039, 2),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1535117039, 2),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1535117039, 2),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1535117039, 2),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.31.32.31:27001",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 396,
                        "optime" : {
                                "ts" : Timestamp(1535117039, 2),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-08-24T13:23:59Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1535117038, 1),
                        "electionDate" : ISODate("2018-08-24T13:23:58Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "172.31.32.29:27001",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 21,
                        "optime" : {
                                "ts" : Timestamp(1535117039, 2),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1535117039, 2),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-08-24T13:23:59Z"),
                        "optimeDurableDate" : ISODate("2018-08-24T13:23:59Z"),
                        "lastHeartbeat" : ISODate("2018-08-24T13:24:08.467Z"),
                        "lastHeartbeatRecv" : ISODate("2018-08-24T13:24:04.801Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "172.31.32.31:27001",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.31.32.30:27001",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 21,
                        "lastHeartbeat" : ISODate("2018-08-24T13:24:08.467Z"),
                        "lastHeartbeatRecv" : ISODate("2018-08-24T13:24:04.781Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

3.2 Set up the second shard replica set

(On all three machines) create the configuration file: vi /mnt/data/mongodb/conf/shard2.conf

pidfilepath = /mnt/data/mongodb/shard2/log/shard2.pid
dbpath = /mnt/data/mongodb/shard2/data
logpath = /mnt/data/mongodb/shard2/log/shard2.log
logappend = true

bind_ip = 0.0.0.0
port = 27002
fork = true

replSet=shard2

#declare this is a shard db of a cluster;
shardsvr = true
maxConns=20000

Start the shard2 server on all three machines:

mongod -f /mnt/data/mongodb/conf/shard2.conf

Log in to any server except 172.31.32.31 (the arbiter for shard2) and connect to MongoDB:

mongo --port 27002

Switch to the admin database:

use admin

Define the replica set configuration:

config = {
    _id : "shard2",
     members : [
         {_id : 0, host : "172.31.32.31:27002"  , arbiterOnly: true },
         {_id : 1, host : "172.31.32.29:27002" },
         {_id : 2, host : "172.31.32.30:27002" }
     ]
 }

Initialize the replica set:

rs.initiate(config)

The response looks like this:

> use admin
switched to db admin
> config = {
...     _id : "shard2",
...      members : [
...          {_id : 0, host : "172.31.32.31:27002"  , arbiterOnly: true },
...          {_id : 1, host : "172.31.32.29:27002" },
...          {_id : 2, host : "172.31.32.30:27002" }
...      ]
...  }
{
        "_id" : "shard2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.31.32.31:27002",
                        "arbiterOnly" : true
                },
                {
                        "_id" : 1,
                        "host" : "172.31.32.29:27002"
                },
                {
                        "_id" : 2,
                        "host" : "172.31.32.30:27002"
                }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Check the status; the response looks like this:

shard2:SECONDARY> rs.status()
{
        "set" : "shard2",
        "date" : ISODate("2018-08-24T13:34:17.459Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1535117651, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1535117651, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1535117651, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1535117651, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.31.32.31:27002",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 39,
                        "lastHeartbeat" : ISODate("2018-08-24T13:34:15.701Z"),
                        "lastHeartbeatRecv" : ISODate("2018-08-24T13:34:15.286Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 1
                },
                {
                        "_id" : 1,
                        "name" : "172.31.32.29:27002",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 153,
                        "optime" : {
                                "ts" : Timestamp(1535117651, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-08-24T13:34:11Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1535117629, 1),
                        "electionDate" : ISODate("2018-08-24T13:33:49Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "172.31.32.30:27002",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 39,
                        "optime" : {
                                "ts" : Timestamp(1535117651, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1535117651, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2018-08-24T13:34:11Z"),
                        "optimeDurableDate" : ISODate("2018-08-24T13:34:11Z"),
                        "lastHeartbeat" : ISODate("2018-08-24T13:34:15.701Z"),
                        "lastHeartbeatRecv" : ISODate("2018-08-24T13:34:16.331Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "172.31.32.29:27002",
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

3.3 Set up the third shard replica set

(On all three machines) create the configuration file: vi /mnt/data/mongodb/conf/shard3.conf

pidfilepath = /mnt/data/mongodb/shard3/log/shard3.pid
dbpath = /mnt/data/mongodb/shard3/data
logpath = /mnt/data/mongodb/shard3/log/shard3.log
logappend = true

bind_ip = 0.0.0.0
port = 27003
fork = true
replSet=shard3

#declare this is a shard db of a cluster;
shardsvr = true
maxConns=20000

Start the shard3 server on all three machines:

mongod -f /mnt/data/mongodb/conf/shard3.conf

Log in to any server except 172.31.32.29 (the arbiter for shard3) and initialize the replica set:

mongo --port 27003

Switch to the admin database:

use admin

Define the replica set configuration:

config = {
    _id : "shard3",
     members : [
         {_id : 0, host : "172.31.32.31:27003" },
         {_id : 1, host : "172.31.32.29:27003" , arbiterOnly: true},
         {_id : 2, host : "172.31.32.30:27003" }
     ]
 }

Initialize the replica set:

rs.initiate(config)

The response looks like this:

> use admin
switched to db admin
> config = {
...     _id : "shard3",
...      members : [
...          {_id : 0, host : "172.31.32.31:27003" },
...          {_id : 1, host : "172.31.32.29:27003" , arbiterOnly: true},
...          {_id : 2, host : "172.31.32.30:27003" }
...      ]
...  }
{
    "_id" : "shard3",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.31.32.31:27003"
        },
        {
            "_id" : 1,
            "host" : "172.31.32.29:27003",
            "arbiterOnly" : true
        },
        {
            "_id" : 2,
            "host" : "172.31.32.30:27003"
        }
    ]
}
> rs.initiate(config)
{ "ok" : 1 }

Check the status:

rs.status()

The response looks like this:

shard3:SECONDARY> rs.status()
{
    "set" : "shard3",
    "date" : ISODate("2018-08-24T13:45:19.572Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535118313, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1535118313, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535118313, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535118313, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.31.32.31:27003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 97,
            "optime" : {
                "ts" : Timestamp(1535118313, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535118313, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-24T13:45:13Z"),
            "optimeDurableDate" : ISODate("2018-08-24T13:45:13Z"),
            "lastHeartbeat" : ISODate("2018-08-24T13:45:18.293Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-24T13:45:18.675Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.31.32.30:27003",
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "172.31.32.29:27003",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 97,
            "lastHeartbeat" : ISODate("2018-08-24T13:45:18.293Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-24T13:45:18.602Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "172.31.32.30:27003",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 126,
            "optime" : {
                "ts" : Timestamp(1535118313, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-24T13:45:13Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535118232, 1),
            "electionDate" : ISODate("2018-08-24T13:43:52Z"),
            "configVersion" : 1,
            "self" : true
        }
    ],
    "ok" : 1
}

3.4 Configure the mongos routers

(On all three machines) start the config servers and shard servers first, then start the router instances.
Create the configuration file: vi /mnt/data/mongodb/conf/mongos.conf

pidfilepath = /mnt/data/mongodb/mongos/log/mongos.pid
logpath = /mnt/data/mongodb/mongos/log/mongos.log
logappend = true

bind_ip = 0.0.0.0
port = 20000
fork = true

# The config servers listed here must number 1 or 3; "configs" is the replica set name of the config servers
configdb = configs/172.31.32.31:21000,172.31.32.29:21000,172.31.32.30:21000

maxConns = 20000

Start the mongos server on all three machines:

mongos -f /mnt/data/mongodb/conf/mongos.conf

4. Connect the routers to the shards

So far we have set up the config servers, the routers, and the individual shard servers, but an application connecting to a mongos router cannot use sharding yet: the shards still have to be registered with the cluster.
Log in to any mongos:

mongo --port 20000

Switch to the admin database:

use admin

Register the shard replica sets with the router:

sh.addShard("shard1/172.31.32.31:27001,172.31.32.29:27001,172.31.32.30:27001");
sh.addShard("shard2/172.31.32.31:27002,172.31.32.29:27002,172.31.32.30:27002");
sh.addShard("shard3/172.31.32.31:27003,172.31.32.29:27003,172.31.32.30:27003");

Check the cluster status:

sh.status()

The response looks like this:

mongos> sh.addShard("shard1/172.31.32.31:27001,172.31.32.29:27001,172.31.32.30:27001");
{
    "shardAdded" : "shard1",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535118917, 6),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1535118917, 6)
}
mongos> sh.addShard("shard2/172.31.32.31:27002,172.31.32.29:27002,172.31.32.30:27002");
{
    "shardAdded" : "shard2",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535118917, 10),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1535118917, 10)
}
mongos> sh.addShard("shard3/172.31.32.31:27003,172.31.32.29:27003,172.31.32.30:27003");
{
    "shardAdded" : "shard3",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535118918, 5),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1535118918, 5)
}
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b8003a67629bf9778831c77")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.31.32.29:27001,172.31.32.31:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.31.32.29:27002,172.31.32.30:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.31.32.30:27003,172.31.32.31:27003",  "state" : 1 }
  active mongoses:
        "3.6.3" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

5. Enable sharding on a collection

The config servers, routers, shards, and replica sets are now all wired together, but the goal is for inserted data to be sharded automatically. Connect to a mongos and enable sharding for the target database and collection.
Log in to any mongos:

mongo --port 20000

Switch to the admin database:

use admin

Enable sharding on the eos database:

db.runCommand( { enablesharding :"eos"});

Specify the collection to shard and its shard key, here hashed sharding on _id:

db.runCommand( { shardcollection : "eos.blocks",key : {"_id": "hashed"} } );

6. Modify the EOS config.ini

On one of the three machines, run nodeos and sync block data into MongoDB:

plugin = eosio::mongo_db_plugin
mongodb-uri = mongodb://localhost:20000/eos

Reset the data:

nodeos --data-dir /mnt/data/data --hard-replay-blockchain --mongodb-wipe

Add --mongodb-wipe only if you need to wipe the existing data.
Then inspect the database through any mongos:

mongo --port 20000
mongos> show databases;
admin   0.000GB
config  0.001GB
eos     0.071GB

The data is now correctly sharded across the cluster.
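
To double-check that the blocks collection really is spread over the three shards, ask any mongos for its distribution (a sketch; the sizes and counts will differ on your cluster):

mongo --port 20000
mongos> use eos
mongos> db.blocks.getShardDistribution()

The output lists per-shard data size, document count, and estimated percentage; sh.status() will now also show the eos database with "partitioned" : true.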

Reference: segmentfault

Configuring EOS to sync mainnet data to MySQL

For compiling and installing EOS, see "Compiling the EOS-Mainnet code with sql_db_plugin support".

I. Modify the config

Run nodeos once first; it will automatically create the ~/.local/share/eosio/nodeos/config directory and the config.ini file.
Modify the following in config.ini:

# Add (working as of 2018-08-18; if you have peers to share, please message me)
p2p-peer-address = fullnode.eoslaomao.com:443
p2p-peer-address = mars.fnp2p.eosbixin.com:443
# Modify (optional)
agent-name = "BcSkill"
# Set this if you want detailed error messages returned
verbose-http-errors = true
# Enable the plugins
plugin = eosio::chain_plugin
plugin = eosio::net_plugin
plugin = eosio::sql_db_plugin
# sql_db plugin settings
sql_db-uri = mysql://db=eos user=root host=127.0.0.1 password='pwd'

II. Install, configure, and start MySQL

1. Install MySQL

Install MySQL first; see the "Ubuntu 安装MySql 8.0" guide (installing MySQL 8.0 on Ubuntu).

2. Configure MySQL

  • Next install soci, the library used to talk to MySQL from C++. On Ubuntu it installs quickly:
    sudo apt-get -y install libsoci-dev
  • Install the MySQL client packages:
    sudo apt-get -y install mysql-client
    sudo apt-get install libmysqlclient-dev

The on-chain data is large: at around 10 million blocks the offline archive is roughly 14 GB, and syncing it into MySQL takes about 200 GB, so the MySQL data should be stored on its own disk.
Change the MySQL data location; /mnt/data is the storage disk mounted here.
Create the directory: mkdir /mnt/data/MySql
Then follow the "Ubuntu16.04下修改MySQL数据的默认存储位置" guide (changing MySQL's default data location on Ubuntu 16.04); a rough sketch of those steps is shown below.
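
A minimal sketch, assuming the default MySQL 8.0 apt packaging on Ubuntu (file paths may differ on your system):

sudo systemctl stop mysql
sudo rsync -av /var/lib/mysql/ /mnt/data/MySql/
sudo chown -R mysql:mysql /mnt/data/MySql
# in /etc/mysql/mysql.conf.d/mysqld.cnf, under [mysqld], set:
#   datadir = /mnt/data/MySql
# if AppArmor is enabled, add an alias in /etc/apparmor.d/tunables/alias:
#   alias /var/lib/mysql/ -> /mnt/data/MySql/,
sudo systemctl restart apparmor
sudo systemctl start mysql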

Set up the EOS database

  • Enter MySQL:
    mysql -p
  • Create the database:
    CREATE DATABASE eos DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
  • List the databases:
    mysql> show databases;

III. Download the mainnet offline data pack

See "EOS 主网数据更新,离线数据包" (the EOS mainnet offline data packs).
Using an offline pack greatly speeds up catching up with the mainnet; syncing from genesis on your own would take unimaginably long.
Assume the offline data has been downloaded and extracted into /mnt/data/data.

IV. Start nodeos and begin syncing

nodeos --data-dir /mnt/data/data --hard-replay-blockchain --replay-blockchain

On the first run you must pass --replay-blockchain, otherwise the database tables will not be created; see the code excerpt below.

If you need to wipe the MySQL database, add --sql_db-block-start=0 --replay-blockchain.
The relevant code is in eos/plugins/sql_db_plugin/sql_db_plugin.cpp:

void sql_db_plugin::plugin_initialize(const variables_map& options)
{
    ilog("initialize");
    try {
        std::string uri_str = options.at(SQL_DB_URI_OPTION).as<std::string>();
        if (uri_str.empty())
        {
            wlog("db URI not specified => eosio::sql_db_plugin disabled.");
            return;
        }
        ilog("connecting to ${u}", ("u", uri_str));
        uint32_t block_num_start = options.at(BLOCK_START_OPTION).as<uint32_t>();
        auto db = std::make_unique<database>(uri_str, block_num_start);

        if (options.at(HARD_REPLAY_OPTION).as<bool>() ||
                options.at(REPLAY_OPTION).as<bool>() ||
                options.at(RESYNC_OPTION).as<bool>() ||
                !db->is_started())
        {
            if (block_num_start == 0) {
                ilog("Resync requested: wiping database");
                if( options.at( RESYNC_OPTION ).as<bool>() ||
                        options.at( REPLAY_OPTION ).as<bool>()) {
                    ilog( "Resync requested: wiping database" );
                    db->wipe();
                }
            }
        }

Syncing has now started.

V. Recommended server specs

RAM: 32 GB
Storage: 1 TB+

Compiling the EOS-Mainnet code with sql_db_plugin support

Repositories involved

EOSIO upstream no longer includes sql_db_plugin; it is maintained by a third party (on GitHub). v1.2.0 removed sql_db_plugin, and upstream only maintains mongo_db_plugin.

Merge the code and push it to your own repository

Merge the sql_db_plugin code from NebulaProtocol onto the corresponding EOS-Mainnet branch.
Since I am most comfortable with TortoiseGit on Windows, I prepare the code there, push it to my own GitHub repository (which also makes it easier to maintain), and then pull that branch on Ubuntu.

The steps are kept deliberately simple so beginners can follow along step by step; experienced users can skip ahead.

Create a new repository on GitHub: https://github.com/cppfuns/Pure-EOS.git

Create a Pure-EOS directory and fetch the EOS-Mainnet code:

git clone https://github.com/EOS-Mainnet/eos.git

Add the NebulaProtocol repository

In the Pure-EOS directory, right-click TortoiseGit -> Settings and add the NebulaProtocol repository as a remote.

Fetch all remotes

In the Pure-EOS directory again, right-click TortoiseGit -> Fetch to pull the latest refs from all remotes.

Switch branches

Switch to the latest EOS-Mainnet release branch, currently mainnet-1.1.6.
In the Pure-EOS directory, right-click TortoiseGit -> Switch/Checkout... and select the mainnet-1.1.6 branch.

Merge the code


In the Pure-EOS directory, right-click TortoiseGit -> Show log and find the NebulaProtocol/sql_plugin branch.

Right-click it and choose Merge to mainnet-1.1.6.

After clicking OK, resolve any conflicts yourself.

Push the code

The sql_db_plugin branch from NebulaProtocol has now been merged onto the latest EOS-Mainnet branch, mainnet-1.1.6. Push the local mainnet-1.1.6 branch to the new GitHub repository https://github.com/cppfuns/Pure-EOS.git; roughly equivalent command-line steps are sketched below.
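
Roughly equivalent command-line steps (the NebulaProtocol remote URL is an assumption; substitute the fork you actually use):

git clone https://github.com/EOS-Mainnet/eos.git Pure-EOS
cd Pure-EOS
git submodule update --init --recursive
git remote add nebula https://github.com/NebulaProtocol/eos.git    # assumed fork URL
git remote add mine https://github.com/cppfuns/Pure-EOS.git
git fetch --all
git checkout mainnet-1.1.6
git merge nebula/sql_plugin    # resolve any conflicts, then commit
git push mine mainnet-1.1.6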

Get the code

On Ubuntu, clone that repository directly:

git clone https://github.com/cppfuns/Pure-EOS.git

Switch branches

cd Pure-EOS    # the directory created by the clone above
git checkout mainnet-1.1.6

Build the code:

./eosio_build.sh -s EOS

Run the install:

./eosio_install.sh

Verify sql_db_plugin

After the build finishes, run:

nodeos --help | grep "sql_db-uri"

If it prints something like "--sql_db-uri arg   Sql DB URI connection string If not ...", the build supports sql_db_plugin.

Configuring EOS to sync mainnet data to MongoDB

For compiling and installing EOS, see "Compiling the EOS-Mainnet code".

I. Modify the config

Run nodeos once first; it will automatically create the ~/.local/share/eosio/nodeos/config directory and the config.ini file.
Modify the following in config.ini:

# Add (working as of 2018-08-18; if you have peers to share, please message me)
p2p-peer-address = fullnode.eoslaomao.com:443
p2p-peer-address = mars.fnp2p.eosbixin.com:443
# Modify (optional)
agent-name = "BcSkill"
# Set this if you want detailed error messages returned
verbose-http-errors = true
# Enable the plugins
plugin = eosio::chain_plugin
plugin = eosio::net_plugin
# mongo_db plugin settings
plugin = eosio::mongo_db_plugin
mongodb-uri = mongodb://127.0.0.1:27017/EOS
mongodb-filter-on = *
#mongodb-filter-out = spammer::
mongodb-filter-out = eosio:onblock:
mongodb-filter-out = gu2tembqgage::
mongodb-filter-out = blocktwitter::
mongodb-queue-size = 2048
abi-serializer-max-time-ms = 5000
mongodb-block-start = 1
mongodb-store-block-states = false
mongodb-store-blocks = false
mongodb-store-transactions = false
mongodb-store-transaction-traces = true
mongodb-store-action-traces = true

read-mode = read-only
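
For reference, mongodb-filter-on / mongodb-filter-out entries use the form receiver:action:actor, where an empty field matches anything; the annotated examples below are illustrative only:

# mongodb-filter-out = eosio:onblock:     drop onblock actions received by eosio
# mongodb-filter-out = blocktwitter::     drop everything received by blocktwitter
# mongodb-filter-on  = myaccount123::     whitelist only traces received by myaccount123 (hypothetical account)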

Reference: MongoDB Filtering and Optimizations
Reference: issues/5797

II. Install, configure, and start MongoDB

1. Install MongoDB

Install MongoDB first; see "Ubuntu 安装 Mongodb 3+" (installing MongoDB 3.x on Ubuntu).

2. Configure MongoDB

Go into the MongoDB bin directory; you can also add it to your PATH for convenience.

cd ~/opt/mongodb/bin

The on-chain data is large: at around 10 million blocks the offline archive is roughly 14 GB, and syncing it into MongoDB takes about 200 GB, so the MongoDB data should be stored on its own disk.
Change the MongoDB data location; /mnt/data is the storage disk mounted here.
Create the directory: mkdir /mnt/data/mongo/db

3. Start MongoDB

mongod --dbpath /mnt/data/mongo/db

The MongoDB service will listen on the default port 27017.

III. Download the mainnet offline data pack

See "EOS 主网数据更新,离线数据包" (the EOS mainnet offline data packs).
Using an offline pack greatly speeds up catching up with the mainnet; syncing from genesis on your own would take unimaginably long.
Assume the offline data has been downloaded and extracted into /mnt/data/data.

IV. Start nodeos and begin syncing

nodeos --data-dir /mnt/data/data --hard-replay-blockchain

If you need to wipe the MongoDB database, add --mongodb-wipe.
The relevant code is in eos/plugins/mongo_db_plugin/mongo_db_plugin.cpp:

void mongo_db_plugin::plugin_initialize(const variables_map& options)
{
   try {
      if( options.count( "mongodb-uri" )) {
         ilog( "initializing mongo_db_plugin" );
         my->configured = true;

         if( options.at( "replay-blockchain" ).as<bool>() || options.at( "hard-replay-blockchain" ).as<bool>() || options.at( "delete-all-blocks" ).as<bool>() ) {
            if( options.at( "mongodb-wipe" ).as<bool>()) {
               ilog( "Wiping mongo database on startup" );
               my->wipe_database_on_startup = true;
            } else if( options.count( "mongodb-block-start" ) == 0 ) {
               EOS_ASSERT( false, chain::plugin_config_exception, "--mongodb-wipe required with --replay-blockchain, --hard-replay-blockchain, or --delete-all-blocks"
                                 " --mongodb-wipe will remove all EOS collections from mongodb." );
            }
         }

Syncing has now started.

V. Recommended server specs

RAM: 32 GB
Storage: 1 TB+