How Beam-SNARK Works

The zero-knowledge succinct non-interactive argument of knowledge (Beam-SNARK) is a groundbreaking approach that allows one to prove the truth of a statement without revealing any other information. But why is this useful?

Zero-knowledge proofs have a wide range of applications, for example:

1. Proving statements about private data:

  • Confirming that someone's bank balance exceeds a threshold without revealing the exact amount.
  • Verifying that a bank has not transacted with a particular entity in the past year.
  • Matching a DNA sample without revealing the full genetic profile.
  • Showing a credit score above a certain value without disclosing the details.

2. Anonymous authorization:

  • Proving that a user may access a restricted area of a website without sharing their identity (e.g., login credentials).
  • Confirming residency in an authorized region without revealing the exact location.
  • Verifying ownership of a valid metro pass without revealing one's identity.

3. Anonymous payments:

  • Making payments without linking them to an identity.
  • Paying taxes without disclosing income.

4. Outsourced computation:

  • Delegating complex computations while ensuring the results are correct, without redoing the work.
  • Shifting the blockchain model from everyone-computes to one party computing and the others verifying.

The mathematics and cryptography underlying zero-knowledge proofs are nothing short of remarkable. The field has been active for nearly four decades, since the seminal 1985 paper "The Knowledge Complexity of Interactive Proof Systems". The introduction of non-interactive proofs has been especially critical in the blockchain setting.

In any zero-knowledge proof system there are two key participants:

  • Prover: the party who wants to convince the verifier that some statement is true.
  • Verifier: the party who checks the validity of the prover's claim without gaining any additional knowledge.

The system must satisfy three core properties:

  1. Completeness: if the statement is true, the prover can convince the verifier.
  2. Soundness: a cheating prover cannot convince the verifier of a false statement.
  3. Zero-knowledge: the interaction reveals only whether the statement is true, and nothing else.

Beam-SNARK applies these principles to general computation, offering an elegant solution for practical applications.

The Medium of a Proof

To understand Beam-SNARK, let's start with a simple example that does not yet involve zero-knowledge or non-interactivity.

Suppose we have an array of 10 bits, and we want to prove to a verifier (e.g., a program) that all of the bits are set to 1.

The verifier can only inspect one bit at a time. To verify the claim, the verifier can check bits in random order:

  • A single successful check gives the verifier only 10% confidence in the claim.
  • If any bit turns out to be 0, the claim is immediately refuted.
  • To reach higher confidence (say 50% or 95%), the verifier must perform more checks, in proportion to the size of the array, which is impractical for large datasets (see the sketch below).
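
To make the scaling problem concrete, here is a minimal Go sketch of the naive verifier (my own illustration, not from the original article): confidence grows only linearly with the number of positions inspected.

// Naive bit-check verifier: each query inspects one random position, so
// confidence grows only linearly with the number of checks performed.
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	bits := [10]int{1, 1, 1, 1, 1, 1, 1, 1, 1, 1} // prover's claim: all ones
	checked := map[int]bool{}

	for len(checked) < 5 { // inspect 5 distinct random positions
		i := rand.Intn(len(bits))
		if bits[i] == 0 {
			fmt.Println("claim refuted")
			return
		}
		checked[i] = true
	}
	// 5 of 10 positions verified => roughly 50% confidence; a large array
	// would need proportionally many queries for the same confidence.
	fmt.Printf("confidence ~ %d%%\n", len(checked)*100/len(bits))
}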

Instead, we can take advantage of a distinctive property of polynomials. A polynomial is defined by a mathematical equation and appears as a curve when plotted.

The curve above corresponds to the polynomial f(x) = x³ − 6x² + 11x − 6. The degree of a polynomial is determined by its greatest exponent of x, in this case 3.

Polynomials have the useful property that two unequal polynomials of degree at most d can intersect at no more than d points. For example, let's slightly modify the original polynomial into x³ − 6x² + 10x − 5 and plot it in green.

Such a tiny change produces a drastically different result. In fact, two unequal polynomials cannot share any continuous stretch of curve (apart from isolated points).

This property follows from the way shared points are found. To find the intersections of two polynomials, we set them equal to each other. For example, to find where a polynomial crosses the x-axis (i.e., where f(x) = 0), we set x³ − 6x² + 11x − 6 = 0; the solutions of this equation are exactly those shared points: x = 1, x = 2 and x = 3, which is also clearly visible in the figure above where the blue curve crosses the x-axis.

Likewise, we can set the original polynomial equal to the modified one to find their intersections.

Everything except the linear terms cancels, leaving 11x − 6 = 10x − 5, a polynomial of degree 1 with the obvious single solution x = 1. So there is exactly one intersection point.

For polynomials of degree d, the result of any such equation is always another polynomial of degree at most d, since no multiplication can create higher powers. For example: 5x³ + 7x² − x + 2 = 3x³ − x² + 2x − 5 simplifies to 2x³ + 8x² − 3x + 7 = 0. The fundamental theorem of algebra tells us that a polynomial of degree d can have at most d solutions (more on this below), and therefore at most d shared points.

We can therefore conclude that evaluating any polynomial at an arbitrary point is akin to a representation of its unique identity. Let's evaluate our example polynomial at x = 10: f(10) = 10³ − 6·10² + 11·10 − 6 = 504.

Indeed, out of all the choices of x we could evaluate, at most 3 yield the same value for both polynomials; every other choice differs.

This is why, if a prover claims to know some polynomial (of however large a degree) that the verifier also knows, the two can follow a simple protocol:

  • The verifier picks a random value for x and evaluates the polynomial locally.
  • The verifier gives x to the prover and asks them to evaluate the polynomial in question.
  • The prover evaluates the polynomial at x and gives the result to the verifier.
  • The verifier checks whether the local result equals the prover's result; if so, the statement holds with high confidence (a sketch of this exchange follows the list).
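
Here is a minimal Go sketch of this exchange. It is a toy illustration under stated assumptions: a real Beam-SNARK evaluates over a large finite field rather than small integers, and the coefficient vectors below simply stand in for the polynomial both parties know.

// Toy polynomial-identity protocol: both parties know the polynomial;
// the verifier challenges the prover at one random point and compares
// the prover's answer with its own local evaluation.
package main

import (
	"fmt"
	"math/rand"
)

// eval computes c[0] + c[1]*x + c[2]*x^2 + ... using Horner's rule.
func eval(c []int64, x int64) int64 {
	var acc int64
	for i := len(c) - 1; i >= 0; i-- {
		acc = acc*x + c[i]
	}
	return acc
}

func main() {
	// f(x) = x^3 - 6x^2 + 11x - 6, stored as [constant, ..., leading].
	verifierPoly := []int64{-6, 11, -6, 1}
	proverPoly := []int64{-6, 11, -6, 1} // what the prover claims to know

	x := rand.Int63n(1_000_000)     // verifier's random challenge
	local := eval(verifierPoly, x)  // verifier evaluates locally
	answer := eval(proverPoly, x)   // prover's response

	fmt.Println("accepted:", local == answer) // equal => high confidence
}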

For example, if we consider integer values of x from 1 to 10⁷⁷, the number of points where the evaluations differ is 10⁷⁷ − d. The probability that x accidentally "hits" any of the d shared points is therefore d / 10⁷⁷, which is considered negligible.

Note: compared with the inefficient bit-checking protocol, the new protocol needs only a single round and gives overwhelming confidence in the claim (virtually 100%, assuming d is sufficiently smaller than the upper bound of the range).

This is why polynomials are at the core of Beam-SNARK, though other media of proof may exist as well.

Source: https://medium.com/@Moonchain_com/why-and-how-beam-snark-works-94f703cf1413

How to Check Whether Infura Is Stable

Background

To stay stable and avoid maintaining their own synced nodes, services often rely on third-party node providers such as Infura. When such a service reports flaky access, you need to check whether Infura's official service is having problems.

Check whether the gateway of the relevant type has had incidents over a given period

https://status.infura.io/

Check the public incident notifications

https://status.infura.io/history

Resolution

Use multiple third-party RPC providers and achieve high fault tolerance through a service such as eRPC.

A Go Demo of MetaMask Authentication

This project demonstrates how to authenticate users with MetaMask, Phantom, or any other browser wallet that supports the Ethereum network. It provides a simple web interface that lets users connect their MetaMask wallet and displays their Ethereum address.

How It Works

This Go service integrates MetaMask authentication, using the Ethereum blockchain to verify users. The service exposes two main API endpoints: /nonce and /auth. Here is an overview of the flow:

1. The user connects MetaMask:

  • The frontend prompts the user to connect their MetaMask wallet.
  • Once the user approves the connection, the user's Ethereum account (wallet address) is retrieved.

2. Requesting a nonce (server side):

  • Once connected, the frontend asks the server for a unique nonce via the /nonce endpoint.
  • The server generates a nonce and associates it with the user's Ethereum address.
  • The nonce is sent back to the client (a sketch of this step follows below).
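
A minimal sketch of what the /nonce step could look like on the server. The handler name, the in-memory map, and the query-parameter API are my assumptions for illustration; the linked repo's actual implementation may differ.

// Sketch of a /nonce endpoint: issue a cryptographically random nonce
// and remember it per wallet address until the /auth step consumes it.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"net/http"
	"sync"
)

var (
	mu     sync.Mutex
	nonces = map[string]string{} // wallet address -> outstanding nonce
)

func nonceHandler(w http.ResponseWriter, r *http.Request) {
	addr := r.URL.Query().Get("address")
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil { // crypto/rand: unpredictable
		http.Error(w, "rng failure", http.StatusInternalServerError)
		return
	}
	nonce := hex.EncodeToString(buf)

	mu.Lock()
	nonces[addr] = nonce
	mu.Unlock()

	w.Write([]byte(nonce))
}

func main() {
	http.HandleFunc("/nonce", nonceHandler)
	http.ListenAndServe(":8080", nil)
}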

3. The user signs the nonce (client side):

  • The frontend uses MetaMask to ask the user to sign the nonce with their private key.
  • MetaMask produces the signature, which is sent back to the server in the authentication request.

4. The server verifies the signature:

  • The server verifies the signature against the public Ethereum address.
  • If the signature matches, authentication succeeds.
  • The server can then issue a session token (or similar) to manage the user's session (see the verification sketch below).
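
On the server, the verification can be done with go-ethereum's crypto package. Below is a minimal sketch; verifySignature is my own helper name rather than the repo's, but recover-the-signer-and-compare is the standard approach for personal_sign signatures.

// Sketch of personal_sign verification with go-ethereum
// (github.com/ethereum/go-ethereum).
package main

import (
	"encoding/hex"
	"errors"
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// verifySignature checks that sigHex is a valid personal_sign signature
// of nonce produced by the owner of addrHex.
func verifySignature(addrHex, nonce, sigHex string) (bool, error) {
	sig, err := hex.DecodeString(strings.TrimPrefix(sigHex, "0x"))
	if err != nil || len(sig) != 65 {
		return false, errors.New("malformed signature")
	}
	if sig[64] >= 27 { // MetaMask uses V = 27/28; go-ethereum expects 0/1
		sig[64] -= 27
	}
	msgHash := accounts.TextHash([]byte(nonce)) // EIP-191 personal_sign hash
	pubKey, err := crypto.SigToPub(msgHash, sig)
	if err != nil {
		return false, err
	}
	recovered := crypto.PubkeyToAddress(*pubKey)
	return recovered == common.HexToAddress(addrHex), nil
}

func main() {
	ok, err := verifySignature(
		"0x0000000000000000000000000000000000000000", // claimed address
		"nonce-from-the-/nonce-step",
		"0x...65-byte-signature-from-MetaMask...",
	)
	fmt.Println(ok, err)
}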

Security notes:

The nonce prevents replay attacks by ensuring that every authentication attempt is unique.
It is important to manage session tokens (or other forms of session state) securely after successful authentication.

github: https://github.com/fmiskovic/eth-auth

eRPC: a Fault-Tolerant EVM RPC Proxy

Introduction

eRPC is a fault-tolerant EVM RPC proxy and permanent caching solution. It is built with read-heavy use cases in mind, such as data indexing and high-load frontend usage.

doc: https://docs.erpc.cloud/
github: https://github.com/erpc/erpc

Why eRPC?

The main reasons eRPC was built:

  • Reduce the overall cost of RPC usage and egress traffic through local caching.
  • Give RPC consumers a fault-tolerant, reliable source even when one or more providers have outages.
  • Provide global observability of RPC usage for internal teams and projects, as well as for upstream third-party RPC companies.

Features

  • Failover across multiple upstreams by tracking response times, error rates, blockchain sync state, and more.
  • Self-imposed rate limits per project, network, or upstream to avoid abuse and unintentional DDoS.
  • Prometheus metrics collection and Grafana dashboards for monitoring the cost, usage, and health of RPC endpoints.

eRPC helps in two main areas:

  • Caching RPC calls that have already been made (eth_getLogs, eth_call, eth_getBlockByNumber, etc.)
  • Rate-limiting pressure on upstream RPC nodes to avoid fatal errors

Compared with more traditional LB solutions (ALB, K8s Services, etc.), eRPC offers EVM-centric features such as:

  • EVM-aware health checks (e.g., how many blocks an upstream is behind)
  • EVM-aware fallbacks (e.g., if a 4xx was caused by a missing block, try another upstream)
  • EVM-aware method filters (e.g., some methods go to node A, others to node B)

Cache storage types

  1. memory: mainly for local testing, or when you don't need to cache much data
  2. redis: useful when you need to store cached data temporarily with an eviction policy (e.g., a bounded amount of memory)
  3. postgresql: useful when you need to store cached data permanently (no TTL, i.e., forever)
  4. dynamodb: useful when you need a permanent cache that is scalable (compared with Postgres) and cheaper on storage

Configuration

  • Database: configure caching and the database.
  • Projects: define multiple projects with different rate limit budgets.
  • Networks: configure failsafe policies per network.
  • Upstreams: configure upstreams with failsafe policies, rate limiters, allowed/denied methods, and more.
  • Rate limiters: configure various self-imposed budgets to avoid putting pressure on upstreams.
  • Failsafe: the different policies used for networks and upstreams, such as retries, timeouts, and hedging.

Example configuration

# The log level helps with debugging and error detection:
# - debug: information on actual requests and responses, plus decisions about rate limiting, etc.
# - info: usually prints happy paths; may print one log per request indicating success or failure.
# - warn: problems that do not affect end users but may indicate degradation or an issue such as the cache database being down.
# - error: problems with end-user impact, such as misconfigurations.
logLevel: warn

# eRPC uses databases for several purposes, such as caching, dynamic configs, rate limit persistence, etc.
database:
  # `evmJsonRpcCache` defines the destination for caching JSON-RPC calls towards any EVM architecture upstream.
  # This database is non-blocking on the critical path and is used on a best-effort basis.
  # Make sure the storage requirements fit your usage; e.g. caching 70m blocks + 10m txs + 10m traces on Arbitrum needs 200GB of storage.
  evmJsonRpcCache:
    # Refer to the "Database" section for more details.
    # Note that tables, schemas, and indexes are created automatically if they don't exist.
    driver: postgresql
    postgresql:
      connectionUri: >-
        postgres://YOUR_USERNAME_HERE:YOUR_PASSWORD_HERE@your.postgres.hostname.here.com:5432/your_database_name
      table: rpc_cache

# The main server where eRPC listens for requests.
server:
  listenV4: true
  httpHostV4: "0.0.0.0"
  listenV6: false
  httpHostV6: "[::]"
  httpPort: 4000
  maxTimeout: 30s

# Optional Prometheus metrics server.
metrics:
  enabled: true
  listenV4: true
  hostV4: "0.0.0.0"
  listenV6: false
  hostV6: "[::]"
  port: 4001

# Each project is a collection of networks and upstreams.
# For example "backend", "indexer", "frontend"; if you only want one project you can name it "main".
# The main purpose of multiple projects is different failsafe policies (more aggressive and costly, or cheaper and more error-prone).
projects:
  - id: main

    # Optionally, you can define a self-imposed rate limit budget for each project.
    # This is useful if you want to cap requests per second or a daily allowance.
    rateLimitBudget: frontend-budget

    # This array configures network-specific (a.k.a. chain-specific) features.
    # For each network, "architecture" and the corresponding network id (e.g. evm.chainId) are required.
    # Remember that defining networks is optional, so only provide these if you want to override the defaults.
    networks:
      - architecture: evm
        evm:
          chainId: 1
        # Refer to the "Failsafe" section for more details.
        # At the network level, "timeout" applies to the entire lifecycle of the request (including any retries).
        failsafe:
          timeout:
            duration: 30s
          retry:
            maxCount: 3
            delay: 500ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms
          # Defining a "hedge" at the network level is highly recommended: if upstream A is slow for
          # a particular request, eRPC can start a parallel hedged request to upstream B and use whichever responds first.
          hedge:
            delay: 3000ms
            maxCount: 2
          circuitBreaker:
            failureThresholdCount: 30
            failureThresholdCapacity: 100
            halfOpenAfter: 60s
            successThresholdCount: 8
            successThresholdCapacity: 10
      - architecture: evm
        evm:
          chainId: 42161
        failsafe:
          timeout:
            duration: 30s
          retry:
            maxCount: 5
            delay: 500ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 200ms
          hedge:
            delay: 1000ms
            maxCount: 2

    # Each upstream supports one or more networks (chains)
    upstreams:
      - id: blastapi-chain-42161
        type: evm
        endpoint: https://arbitrum-one.blastapi.io/xxxxxxx-xxxxxx-xxxxxxx
        # Which rate limit budget to use for requests sent to this upstream:
        rateLimitBudget: global-blast
        # chainId is optional and will be detected from the endpoint (eth_chainId), but setting it explicitly is recommended for faster initialization.
        evm:
          chainId: 42161
        # Methods that must never be sent to this upstream:
        ignoreMethods:
          - "alchemy_*"
          - "eth_traceTransaction"
        # Refer to the "Failsafe" section for more details:
        failsafe:
          timeout:
            duration: 15s
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms
      - id: blastapi-chain-1
        type: evm
        endpoint: https://eth-mainnet.blastapi.io/xxxxxxx-xxxxxx-xxxxxxx
        rateLimitBudget: global-blast
        evm:
          chainId: 1
        failsafe:
          timeout:
            duration: 15s
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms
      - id: quiknode-chain-42161
        type: evm
        endpoint: https://xxxxxx-xxxxxx.arbitrum-mainnet.quiknode.pro/xxxxxxxxxxxxxxxxxxxxxxxx/
        rateLimitBudget: global-quicknode
        # You can disable auto-ignoring of unsupported methods and define them explicitly instead.
        # This is useful if a provider (e.g. dRPC) is inconsistent with its "unsupported method" responses.
        autoIgnoreUnsupportedMethods: false
        # To allow auto-batching of requests towards the upstream, use these settings.
        # Remember that if "supportsBatch" is false you can still send batch requests to eRPC,
        # but they will be sent to the upstream as individual requests.
        jsonRpc:
          supportsBatch: true
          batchMaxSize: 10
          batchMaxWait: 100ms
        evm:
          chainId: 42161
        failsafe:
          timeout:
            duration: 15s
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms

        # "id" is a unique identifier used to distinguish this upstream in logs and metrics.
      - id: alchemy-multi-chain-example
        # For some known providers (e.g. Alchemy) you can use a custom protocol name,
        # which lets a single upstream import "all chains" supported by that provider.
        # Note that these chains are hardcoded in the repo, so eRPC must be updated when the provider adds new chains.
        endpoint: alchemy://XXXX_YOUR_ALCHEMY_API_KEY_HERE_XXXX
        rateLimitBudget: global
        failsafe:
          timeout:
            duration: 15s
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms

# Rate limiters allow you to create "shared" budgets for upstreams.
# For example, upstreams A and B can use the same budget, meaning that together they must not exceed the defined limits.
rateLimiters:
  budgets:
    - id: default-budget
      rules:
        - method: "*"
          maxCount: 10000
          period: 1s
    - id: global-blast
      rules:
        - method: "*"
          maxCount: 1000
          period: 1s
    - id: global-quicknode
      rules:
        - method: "*"
          maxCount: 300
          period: 1s
    - id: frontend-budget
      rules:
        - method: "*"
          maxCount: 500
          period: 1s

Deployment test

1. Create docker-compose.yml

version: "3"

services:
  erpc:
    image: ghcr.io/erpc/erpc:0.0.26
    container_name: zksaas-server-erpc
    restart: always
    volumes:
      - ./erpc.yaml:/root/erpc.yaml
    logging:
      options:
        max-size: '500m'
        max-file: '3'
    ports:
      - 4000:4000
      - 4001:4001
    depends_on:
      - redis
    networks:
      default:
      proxy:
        ipv4_address: 172.18.0.4

  monitoring:
    build: ./monitoring
    ports:
      - "3000:3000"  # Grafana
      - "9090:9090"  # Prometheus
    environment:
      - SERVICE_ENDPOINT=host.docker.internal
      - SERVICE_PORT=4001
    volumes:
      - ./monitoring/prometheus:/etc/prometheus
      - ./monitoring/grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./monitoring/grafana/dashboards:/etc/grafana/dashboards
      - prometheus_data:/prometheus
      - grafana_data:/var/lib/grafana
    logging:
      options:
        max-size: '500m'
        max-file: '3'

  redis:
    container_name: zksaas-erpc-redis
    image: redis:6.2.5
    restart: always
    ports:
      - "6379:6379"
    logging:
      options:
        max-size: '500m'
        max-file: '3'
    networks:
      default:
      proxy:
        ipv4_address: 172.18.0.5

  # postgresql:
    # container_name: erpc-postgresql
    # image: postgres:13.4
    # restart: always
    # environment:
      # POSTGRES_USER: erpc
      # POSTGRES_PASSWORD: erpc
      # POSTGRES_DB: erpc
    # ports:
      # - "5432:5432"
    # networks:
      # erpc:

networks:
  default:
  proxy:
    external: true

volumes:
  prometheus_data:
  grafana_data:

Fixed IPs are used in this configuration to simplify deployment, especially if you are not familiar with Docker.

2. Create erpc.yaml

Adjust the node configuration for your own setup based on the template above, and place erpc.yaml in the same directory as docker-compose.yml.

# Log level helps in debugging or error detection:
# - debug: information down to actual requests and responses, and decisions about rate-limiting etc.
# - info: usually prints happy paths and might print 1 log per request indicating success or failure.
# - warn: these problems do not cause end-user problems, but might indicate degradation or an issue such as the cache database being down.
# - error: these are problems that have end-user impact, such as misconfigurations.
logLevel: warn

# There are various use-cases of database in erpc, such as caching, dynamic configs, rate limit persistence, etc.
database:
  # `evmJsonRpcCache` defines the destination for caching JSON-RPC calls towards any EVM architecture upstream.
  # This database is non-blocking on critical path, and is used as best-effort.
  # Make sure the storage requirements meet your usage, for example caching 70m blocks + 10m txs + 10m traces on Arbitrum needs 200GB of storage.
  evmJsonRpcCache:
    # Refer to "Database" section for more details.
    # Note that table, schema and indexes will be created automatically if they don't exist.
    driver: redis
    redis:
      addr: 172.18.0.5:6379
      password: 
      db: 0

# The main server for eRPC to listen for requests.
server:
  listenV4: true
  httpHostV4: "0.0.0.0"
  listenV6: false
  httpHostV6: "[::]"
  httpPort: 4000
  maxTimeout: 30s

# Optional Prometheus metrics server.
metrics:
  enabled: true
  listenV4: true
  hostV4: "0.0.0.0"
  listenV6: false
  hostV6: "[::]"
  port: 4001

# Each project is a collection of networks and upstreams.
# For example "backend", "indexer", "frontend", and you want to use only 1 project you can name it "main"
# The main purpose of multiple projects is different failsafe policies (more aggressive and costly, or less costly and more error-prone)
projects:
  - id: main
    healthCheck:
      scoreMetricsWindowSize: 1h
    # Optionally you can define a self-imposed rate limit budget for each project
    # This is useful if you want to limit the number of requests per second or daily allowance.
    rateLimitBudget: project-main-limit

    # This array configures network-specific (a.k.a chain-specific) features.
    # For each network "architecture" and corresponding network id (e.g. evm.chainId) is required.
    # Remember defining networks is OPTIONAL, so only provide these if you want to override defaults.
    networks:
      - architecture: evm
        evm:
          chainId: 20241024
          finalityDepth: 5

        # A network-level rate limit budget applied to all requests despite upstreams own rate-limits.
        # For example, even if upstreams can handle 1000 RPS but the network level is limited to 100 RPS,
        # requests will be rate-limited to 100 RPS.
        rateLimitBudget: project-main-network-20241024-limiter

        # Refer to "Failsafe" section for more details.
        # On network-level "timeout" is applied for the whole lifecycle of the request (including however many retries)
        failsafe:
          timeout:
            duration: 30s
          # On network-level retry policy applies to the incoming request to eRPC,
          # this is additional to the retry policy set on upstream level.
          retry:
            # Total retries besides the initial request:
            maxCount: 3
            # Min delay between retries:
            delay: 500ms
            # Maximum delay between retries:
            backoffMaxDelay: 10s
            # Multiplier for each retry for exponential backoff:
            backoffFactor: 0.3
            # Random jitter to avoid thundering herd,
            # e.g. add between 0 to 500ms to each retry delay:
            jitter: 500ms
          # Defining a "hedge" is highly-recommended on network-level because if upstream A is being slow for
          # a specific request, it can start a new parallel hedged request to upstream B, for whichever responds faster.
          hedge:
            # Delay means how long to wait before starting a simultaneous hedged request.
            # e.g. if upstream A did not respond within 500ms, a new request towards upstream B will be started,
            # and whichever responds faster will be returned to the client.
            delay: 500ms
            # In total how many hedges to start.
            # e.g. if maxCount is 2, and upstream A did not respond within 500ms,
            # a new request towards upstream B will be started. If B also did not respond,
            # a new request towards upstream C will be started.
            maxCount: 1
          circuitBreaker:
            failureThresholdCount: 30
            failureThresholdCapacity: 100
            halfOpenAfter: 60s
            successThresholdCount: 8
            successThresholdCapacity: 10

    # Each upstream supports 1 or more networks (chains)
    upstreams:
      - id: zksaas-mainnet-20241024-rpc-1
        type: evm
        endpoint: http://172.18.39.154:8123
        rateLimitBudget: project-main-upstream-20241024-limiter
        # You can disable auto-ignoring unsupported methods, and instead define them explicitly.
        # This is useful if provider (e.g. dRPC) is not consistent with "unsupported method" responses.
        autoIgnoreUnsupportedMethods: false
        # To allow auto-batching requests towards the upstream, use these settings.
        # Remember if "supportsBatch" is false, you still can send batch requests to eRPC
        # but they will be sent to upstream as individual requests.
        jsonRpc:
          supportsBatch: true
          batchMaxSize: 10
          batchMaxWait: 100ms
        evm:
          chainId: 20241024
          nodeType: full # Optional. Can be "full" or "archive"
        # Which methods must never be sent to this upstream:
        #ignoreMethods:
        #  - "optimism_*"
        #  - "debug_traceTransaction"

        # Explicitly allowed methods will take precedence over ignoreMethods.
        # For example if you only want eth_getLogs to be served, set ignore methods to "*" and allowMethods to "eth_getLogs".
        #allowMethods:
        #  - "eth_getLogs"


        failsafe:
          timeout:
            # The upstream-level timeout applies to each request sent towards the upstream,
            # e.g. if the retry policy is set to 2 retries, the total time can be up to 30s:
            duration: 15s
          # The upstream-level retry policy applies to each request sent towards the upstream,
          # in addition to the retry policy set at the network level.
          # For example if network has 2 retries and upstream has 2 retries,
          # total retries will be 4.
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms
          circuitBreaker:
            # These two variables indicate how many failures and capacity to tolerate before opening the circuit.
            failureThresholdCount: 30
            failureThresholdCapacity: 100
            # How long to wait before trying to re-enable the upstream after circuit breaker was opened.
            halfOpenAfter: 60s
            # These two variables indicate how many successes are required in half-open state before closing the circuit,
            # and putting the upstream back in available upstreams.
            successThresholdCount: 8
            successThresholdCapacity: 10

      - id: zksaas-mainnet-20241024-rpc-2
        type: evm
        endpoint: http://172.18.39.155:8123
        rateLimitBudget: project-main-upstream-20241024-limiter
        # You can disable auto-ignoring unsupported methods, and instead define them explicitly.
        # This is useful if provider (e.g. dRPC) is not consistent with "unsupported method" responses.
        autoIgnoreUnsupportedMethods: false
        # To allow auto-batching requests towards the upstream, use these settings.
        # Remember if "supportsBatch" is false, you still can send batch requests to eRPC
        # but they will be sent to upstream as individual requests.
        jsonRpc:
          supportsBatch: true
          batchMaxSize: 10
          batchMaxWait: 100ms
        evm:
          chainId: 20241024
          nodeType: full # Optional. Can be "full" or "archive"
        # Which methods must never be sent to this upstream:
        #ignoreMethods:
        #  - "optimism_*"
        #  - "debug_traceTransaction"

        # Explicitly allowed methods will take precedence over ignoreMethods.
        # For example if you only want eth_getLogs to be served, set ignore methods to "*" and allowMethods to "eth_getLogs".
        #allowMethods:
        #  - "eth_getLogs"

        failsafe:
          timeout:
            # The upstream-level timeout applies to each request sent towards the upstream,
            # e.g. if the retry policy is set to 2 retries, the total time can be up to 30s:
            duration: 15s
          # The upstream-level retry policy applies to each request sent towards the upstream,
          # in addition to the retry policy set at the network level.
          # For example if network has 2 retries and upstream has 2 retries,
          # total retries will be 4.
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms
          circuitBreaker:
            # These two variables indicate how many failures and capacity to tolerate before opening the circuit.
            failureThresholdCount: 30
            failureThresholdCapacity: 100
            # How long to wait before trying to re-enable the upstream after circuit breaker was opened.
            halfOpenAfter: 60s
            # These two variables indicate how many successes are required in half-open state before closing the circuit,
            # and putting the upstream back in available upstreams.
            successThresholdCount: 8
            successThresholdCapacity: 10

      - id: zksaas-mainnet-20241024-rpc-3
        type: evm
        endpoint: http://172.18.34.68:8123
        rateLimitBudget: project-main-upstream-20241024-limiter
        # You can disable auto-ignoring unsupported methods, and instead define them explicitly.
        # This is useful if provider (e.g. dRPC) is not consistent with "unsupported method" responses.
        autoIgnoreUnsupportedMethods: false
        # To allow auto-batching requests towards the upstream, use these settings.
        # Remember if "supportsBatch" is false, you still can send batch requests to eRPC
        # but they will be sent to upstream as individual requests.
        jsonRpc:
          supportsBatch: true
          batchMaxSize: 10
          batchMaxWait: 100ms
        evm:
          chainId: 20241024
          nodeType: full # Optional. Can be "full" or "archive"
        # Which methods must never be sent to this upstream:
        #ignoreMethods:
        #  - "optimism_*"
        #  - "debug_traceTransaction"

        # Explicitly allowed methods will take precedence over ignoreMethods.
        # For example if you only want eth_getLogs to be served, set ignore methods to "*" and allowMethods to "eth_getLogs".
        #allowMethods:
        #  - "eth_getLogs"


        failsafe:
          timeout:
            # The upstream-level timeout applies to each request sent towards the upstream,
            # e.g. if the retry policy is set to 2 retries, the total time can be up to 30s:
            duration: 15s
          # The upstream-level retry policy applies to each request sent towards the upstream,
          # in addition to the retry policy set at the network level.
          # For example if network has 2 retries and upstream has 2 retries,
          # total retries will be 4.
          retry:
            maxCount: 2
            delay: 1000ms
            backoffMaxDelay: 10s
            backoffFactor: 0.3
            jitter: 500ms
          circuitBreaker:
            # These two variables indicate how many failures and capacity to tolerate before opening the circuit.
            failureThresholdCount: 30
            failureThresholdCapacity: 100
            # How long to wait before trying to re-enable the upstream after circuit breaker was opened.
            halfOpenAfter: 60s
            # These two variables indicate how many successes are required in half-open state before closing the circuit,
            # and putting the upstream back in available upstreams.
            successThresholdCount: 8
            successThresholdCapacity: 10

# Rate limiter allows you to create "shared" budgets for upstreams.
# For example upstream A and B can use the same budget, which means both of them together must not exceed the defined limits.
rateLimiters:
  budgets:
    - id: project-main-limit
      rules:
        - method: "*"
          maxCount: 10000000
          period: 1s

    - id: project-main-network-20241024-limiter
      rules:
        - method: "*"
          maxCount: 10000000
          period: 1s

    - id: project-main-upstream-20241024-limiter
      rules:
        - method: "*"
          maxCount: 10000000
          period: 1s

The configuration above contains the main settings of interest for the test.

3. Start

docker-compose up -d
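
Once the containers are up, you can smoke-test the proxy by sending a single JSON-RPC request through it. The Go sketch below assumes eRPC's documented routing scheme of /<projectId>/<architecture>/<chainId>; adjust the path to match your own project id and chain id.

// Minimal smoke test: route one eth_blockNumber call through eRPC.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := []byte(`{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}`)
	resp, err := http.Post(
		"http://localhost:4000/main/evm/20241024", // project / architecture / chainId
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out)) // expect a hex block number in "result"
}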

Performance comparison

The test chain is based on Polygon CDK, which itself has a significant performance bottleneck.

RPC method                  Before      After       Ratio
eth_blockNumber             4706.53     14688.22    312%
eth_gasPrice                3029.96     16766.31    553%
eth_getBalance              4771.16     15127.23    317%
eth_getTransactionReceipt   2837.64     7885.16     277%
eth_getTransactionCount     4883.25     15010.97    307%

Summary

  • eRPC shows especially clear gains for infrequently changing data; test bandwidth was also constrained, so these numbers may not reflect peak performance.
  • It provides failover across multiple upstreams by tracking response times, error rates, blockchain sync state, and more.
  • For storage, Redis offers higher performance and suits caching recent ranges, while PostgreSQL provides permanent storage; choose according to your actual scenario.

A Lightweight Block Explorer for the Ethereum Beacon Chain

github: https://github.com/ethpandaops/dora
Live instance: https://beaconlight.ephemery.dev/

It is much leaner than https://github.com/gobitfly/eth2-beaconchain-explorer.