
Hyperion History API: Detailed Deployment

Install Java

sudo apt-get install openjdk-8-jdk

Install Elasticsearch

Download

https://www.elastic.co/cn/downloads/elasticsearch

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.0-amd64.deb

Install

sudo dpkg -i elasticsearch-7.4.0-amd64.deb

Program location: /usr/share/elasticsearch/
Config file: /etc/elasticsearch/elasticsearch.yml

Start Elasticsearch

sudo service elasticsearch start

Test

curl http://localhost:9200/

Response

{
  "name" : "fscshare",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "jwNw7I-sSvevCWg9p-ibsg",
  "version" : {
    "number" : "7.4.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
    "build_date" : "2019-09-27T08:36:48.569419Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

If you get a response like this, the installation is correct.

Stop Elasticsearch (if needed)

cat /var/run/elasticsearch/elasticsearch.pid && echo # get the pid
sudo kill -SIGTERM $(cat /var/run/elasticsearch/elasticsearch.pid)

Kibana (optional)

Kibana lets you visualize and work with the data in Elasticsearch.

Download

https://www.elastic.co/cn/downloads/kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.0-amd64.deb

Install

sudo dpkg -i kibana-7.4.0-amd64.deb

Location: /usr/share/kibana/
Start:    /usr/share/kibana/bin/kibana
Access:   http://localhost:5601

Install RabbitMQ

sudo apt-get update -y
sudo apt-get install -y rabbitmq-server
sudo service rabbitmq-server start

Helper commands

# stop the local node
sudo service rabbitmq-server stop
# start it back
sudo service rabbitmq-server start
# check on service status as observed by service manager

sudo service rabbitmq-server status

Add a RabbitMQ user

For testing, configure it as follows:

sudo rabbitmq-plugins enable rabbitmq_management
sudo rabbitmqctl add_vhost /hyperion
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_user_tags my_user administrator
sudo rabbitmqctl set_permissions -p /hyperion my_user ".*" ".*" ".*"
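
Optionally, you can verify what was created with the standard rabbitmqctl listing commands; with the management plugin enabled, the web UI is also reachable at http://localhost:15672 (these checks require the broker to be running):

```shell
# list the vhosts and the permissions granted on /hyperion
sudo rabbitmqctl list_vhosts
sudo rabbitmqctl list_permissions -p /hyperion
```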

Install Redis

sudo apt update
sudo apt install redis-server

Helper commands

sudo service redis restart
sudo systemctl status redis

Install Node.js v12.x

# Using Ubuntu
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs
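
A quick sanity check that the install worked:

```shell
node -v   # should print a v12.x version for this setup
npm -v
```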

Install PM2

sudo npm install pm2@latest -g

Install nodeos 1.8.4 with the state_history_plugin and chain_api_plugin enabled

wget https://github.com/eosio/eos/releases/download/v1.8.4/eosio_1.8.4-1-ubuntu-18.04_amd64.deb
sudo apt install ./eosio_1.8.4-1-ubuntu-18.04_amd64.deb
nodeos --config-dir ~/eosio/chain/config \
  --genesis-json ~/eosio/chain/config/genesis.json \
  --data-dir ~/eosio/chain/data \
  -e -p eosio \
  --plugin eosio::chain_api_plugin \
  --plugin eosio::state_history_plugin \
  --disable-replay-opts \
  --chain-state-history \
  --trace-history \
  --delete-all-blocks
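
Once nodeos is running, a quick way to confirm that the chain_api_plugin is answering (port 8888 matches the HTTP endpoint assumed by the Hyperion config below):

```shell
curl -s http://127.0.0.1:8888/v1/chain/get_info
```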

Clone & Install Hyperion-History-API

git clone https://github.com/bcskill/Hyperion-History-API.git
cd Hyperion-History-API
npm install

Edit the configuration

cp example-ecosystem.config.js ecosystem.config.js

vi ecosystem.config.js

module.exports = {
    apps: [
        {
            name: "Indexer",
            script: "./launcher.js",
            node_args: ["--max-old-space-size=8192"],
            autorestart: false,
            kill_timeout: 3600,
            env: {
                AMQP_HOST: '127.0.0.1:5672',
                AMQP_USER: 'my_user',
                AMQP_PASS: 'my_password',
                REDIS_HOST: '127.0.0.1',
                REDIS_PORT: '6379',
                ES_HOST: '127.0.0.1:9200',
                NODEOS_HTTP: 'http://127.0.0.1:8888',
                NODEOS_WS: 'ws://127.0.0.1:8080',
                START_ON: 0,
                STOP_ON: 0,
                REWRITE: 'false',
                BATCH_SIZE: 5000,
                LIVE_READER: 'false',
                LIVE_ONLY: 'false',
                FETCH_BLOCK: 'false',
                FETCH_TRACES: 'false',
                CHAIN: 'eos',
                CREATE_INDICES: 'v1',
                PREVIEW: 'false',
                DISABLE_READING: 'false',
                READERS: 1,
                DESERIALIZERS: 1,
                DS_MULT: 1,
                ES_INDEXERS_PER_QUEUE: 1,
                ES_ACT_QUEUES: 1,
                READ_PREFETCH: 50,
                BLOCK_PREFETCH: 100,
                INDEX_PREFETCH: 500,
                ENABLE_INDEXING: 'true',
                PROC_DELTAS: 'true',
                INDEX_DELTAS: 'true',
                INDEX_ALL_DELTAS: 'false',
                ABI_CACHE_MODE: 'false',
                ACCOUNT_STATE: 'false',
                VOTERS_STATE: 'false',
                USERRES_STATE: 'false',
                DELBAND_STATE: 'false',
                REPAIR_MODE: 'false',
                DEBUG: 'false'
            }
        },
        {
            name: 'API',
            script: "./api/api-loader.js",
            exec_mode: 'cluster',
            merge_logs: true,
            instances: 4,
            autorestart: true,
            exp_backoff_restart_delay: 100,
            watch: ["api"],
            env: {
                AMQP_HOST: "localhost:5672",
                AMQP_USER: "my_user",
                AMQP_PASS: "my_password",
                REDIS_HOST: '127.0.0.1',
                REDIS_PORT: '6379',
                SERVER_PORT: '7000',
                SERVER_NAME: 'example.com',
                SERVER_ADDR: '127.0.0.1',
                NODEOS_HTTP: 'http://127.0.0.1:8888',
                ES_HOST: '127.0.0.1:9200',
                CHAIN: 'eos'
            }
        }
    ]
};

Run

Starting

pm2 start --only Indexer --update-env
pm2 logs Indexer

Stopping

// Stop reading and wait for queues to flush

pm2 trigger Indexer stop

Force stop

pm2 stop Indexer

Starting the API node

pm2 start --only API --update-env
pm2 logs API

Test

curl -X GET "http://127.0.0.1:7000/health" -H "accept: */*"

Response

{
    "health": [{
        "service": "RabbitMq",
        "status": "OK",
        "time": 1571055356654
    }, {
        "service": "Redis",
        "status": "OK",
        "time": 1571055356655
    }, {
        "service": "Elasticsearch",
        "status": "OK",
        "time": 1571055356656
    }]
}

References

https://github.com/eosrio/Hyperion-History-API
https://www.jianshu.com/p/7200cd17d8cb
https://www.rabbitmq.com/install-debian.html
https://wangxin1248.github.io/linux/2018/07/ubuntu18.04-install-redis.html
https://pm2.keymetrics.io/docs/usage/quick-start/
https://github.com/EOSIO/eos/issues/6334

max_transaction_lifetime and max-transaction-time explained

max-transaction-time

When we push a transaction, it is generally pushed to a sync node first. Each push is similar to an HTTP POST request and cannot stay in the pushing state indefinitely; there is a maximum push time, and if it is exceeded the push fails.
This problem typically shows up when deploying a contract, because the wasm file is large.

max_transaction_lifetime

A transaction can set its own expiration time when it is created, but this expiration cannot be arbitrarily large; its upper bound is max_transaction_lifetime, configured in genesis.json.
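
The two limits live in different places: max-transaction-time is a nodeos option (in config.ini or on the command line), while max_transaction_lifetime is a chain parameter in genesis.json. A sketch with illustrative values (the numbers here are assumptions, not recommendations):

```
# nodeos config.ini -- maximum execution time allowed for a pushed transaction, in milliseconds
max-transaction-time = 100

# genesis.json, inside "initial_configuration" -- upper bound on a transaction's expiration, in seconds
"max_transaction_lifetime": 3600
```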

Implementing BTC-HD-style hierarchical addresses on EOS with a contract

Use case

Some deposit scenarios need functionality similar to BTC HD addresses: in short, one account needs to support multiple sub-addresses, and funds received at any sub-address are aggregated into that main account.

EOS hierarchical address implementation

Account extension table userresext (can both send and receive transfers):

index  key           value
yes    account_name  bcskillsurou
       address       5a6595ecc9cee07ae00e76c926a113a4fd6be324be9e4ec854a1a150c3d80c9e

Hierarchical address table hduserres (note: used only for receiving); its rows include the data from the account extension table:

index  key           value
yes    address       5a6595ecc9cee07ae00e76c926a113a4fd6be324be9e4ec854a1a150c3d80c9e
       account_name  bcskillsurou

When address A transfers to address B, the extended transfer action receives (from_address, to_address, quantity, memo) when it executes.

Look up userresext by from_address to find the corresponding account name, and look up hduserres to find the corresponding hierarchical address.
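
A minimal sketch of that lookup flow, with the two on-chain tables mocked as shell functions (the sub-address string and the function names are hypothetical; in the real contract these are multi_index table lookups):

```shell
# Mock of userresext: address -> account_name (accounts that can send and receive)
userresext_account_for() {
    case "$1" in
        5a6595ecc9cee07ae00e76c926a113a4fd6be324be9e4ec854a1a150c3d80c9e)
            echo bcskillsurou ;;
        *)  echo unknown ;;
    esac
}

# Mock of hduserres: HD sub-address -> account_name (receive-only)
hduserres_account_for() {
    case "$1" in
        # "hd-sub-address-1" is a placeholder for a real derived address
        hd-sub-address-1)
            echo bcskillsurou ;;
        *)  echo unknown ;;
    esac
}

# The extended transfer action: resolve both sides to accounts, then credit
transfer() {
    from_acct=$(userresext_account_for "$1")
    to_acct=$(hduserres_account_for "$2")
    echo "credit $to_acct with $3 (from $from_acct, memo: $4)"
}

transfer 5a6595ecc9cee07ae00e76c926a113a4fd6be324be9e4ec854a1a150c3d80c9e \
         hd-sub-address-1 "1.0000 EOS" deposit
```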

Updating sync nodes after the EOS 1.8 hard-fork upgrade

Because data from 1.8.* is incompatible with earlier versions, sync nodes need the following adjustments after the mainnet upgrade:

  1. Make sure the existing node is running the latest stable release (1.7) of nodeos, then shut nodeos down.
  2. Make a backup, then delete the blocks/reversible directory, the state-history directory, and the state directory from the data directory.
  3. Replace the old version of nodeos with the new one.
  4. Start the new nodeos 1.8 release, let it replay fully from genesis, and let it catch up with the network. The node should receive blocks and LIB should advance. Until the first protocol upgrade feature is activated, nodes running v1.8 and v1.7 will continue to coexist on the same network.

When upgrading nodeos from v1.7 to v1.8, a replay from genesis is required. Afterwards, the v1.8 node can be started and stopped quickly as usual, without replaying. A state directory produced by a v1.7 node is incompatible with nodeos v1.8, and version-1 portable snapshots (produced by v1.7) are incompatible with v1.8, which requires version-2 portable snapshots.
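
Step 2 above can be sketched as follows; DATA_DIR is an assumed layout (it matches the --data-dir used earlier in this post) and should be adjusted to the actual deployment:

```shell
# Assumed data directory; change to match your node
DATA_DIR="$HOME/eosio/chain/data"
BACKUP_DIR="$HOME/eosio/backup-pre-1.8"
mkdir -p "$BACKUP_DIR"

# Back up, then delete, the directories that v1.8 cannot reuse
for d in blocks/reversible state-history state; do
    if [ -e "$DATA_DIR/$d" ]; then
        cp -r "$DATA_DIR/$d" "$BACKUP_DIR/" && rm -rf "$DATA_DIR/$d"
    fi
done
echo "backup stored in $BACKUP_DIR"
```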

Notes

For the chain software, use 1.8.1 or 1.8.4; the two versions in between have problems.
At the time of writing, no offline data packages for 1.8 were available, so for now you can only hard-replay yourself or use a snapshot.

Thanks to the technical folks at the EOS Beijing BP for the answers.

received a go away message from xxxx, reason = authentication failure

The log on the initiating side:

info  2019-09-03T07:20:45.197 thread-0  net_plugin.cpp:2293           handle_message       ] received a go away message from 17.1.0.9:7860, reason = authentication failure

The log on the accepting side:

error 2019-09-04T09:30:24.112 thread-0  net_plugin.cpp:2715           authenticate_peer    ] Peer web0_92:6886 - 5a05387 sent a handshake with a timestamp skewed by more than 1 second.
error 2019-09-04T09:30:24.112 thread-0  net_plugin.cpp:2251           handle_message       ] Peer not authenticated.  Closing connection.

This happens because the BP node's config enables peer-network authentication:

allowed-connection = producers
allowed-connection = specified

At this point, the accepting side has already added the initiating side's peer-key:

peer-key = "FSC71Uiuk23RACZ2PKDZQUeAgyA4w8g9DYurjtBktNes382zn1tMP"

Tracing the code:
https://github.com/EOSIO/eos/blob/be804bf63c5092a123c3e1a468559a8164bcd3be/plugins/net_plugin/net_plugin.cpp#L2788

namespace sc = std::chrono;
      sc::system_clock::duration msg_time(msg.time);
      auto time = sc::system_clock::now().time_since_epoch();
      if(time - msg_time > peer_authentication_interval) {
         fc_elog( logger, "Peer ${peer} sent a handshake with a timestamp skewed by more than ${time}.",
                  ("peer", msg.p2p_address)("time", "1 second")); // TODO Add to_variant for std::chrono::system_clock::duration
         return false;
      }

Peer clock may be no more than 1 second skewed from our clock, including network latency.

The time from when the initiating side starts the connection (its local clock) to when the accepting side processes the handshake (its local clock) must not exceed 1 second (peer_authentication_interval); if it does, the connection is rejected outright.

Solution

Enable a periodic time-synchronization service on every node:

sudo apt-get install chrony

Reference: https://blog.csdn.net/kinglyjn/article/details/53606791