Background
For stability, and to avoid maintaining their own sync nodes, some services rely on third-party node providers such as Infura. When such a service reports unstable access, check whether Infura's official service itself is having problems.
Check whether the gateway for the relevant network type has had anomalies over the time window in question.
Check the public status notifications:
https://status.infura.io/history
Solution
Use multiple third-party RPC providers and route requests through a service like eRPC for high fault tolerance.
In Solidity, msg.sender is a global variable representing the address that called or initiated a function call on the smart contract. The global variable tx.origin is the wallet that signed the transaction.
In Solana, there is no equivalent of msg.sender.
There is an equivalent of tx.origin, but be aware that a Solana transaction can have multiple signers, so you can think of it as having "multiple tx.origins".
To get a "tx.origin" address in Solana, you set it up by adding a Signer account to the function context and passing the caller's account when invoking the function.
Let's look at an example of accessing the transaction signer's address in Solana:
use anchor_lang::prelude::*;

declare_id!("Hf96fZsgq9R6Y1AHfyGbhi9EAmaQw2oks8NqakS6XVt1");

#[program]
pub mod day14 {
    use super::*;

    pub fn initialize(ctx: Context<Initialize>) -> Result<()> {
        let the_signer1: &mut Signer = &mut ctx.accounts.signer1;

        // Function logic....

        msg!("The signer1: {:?}", *the_signer1.key);

        Ok(())
    }
}

#[derive(Accounts)]
pub struct Initialize<'info> {
    #[account(mut)]
    pub signer1: Signer<'info>,
}
In the snippet above, Signer<'info> is used to verify that the signer1 account in the Initialize<'info> struct signed the transaction.
In the initialize function, the signer1 account is mutably referenced from the context and assigned to the the_signer1 variable.
Finally, we log signer1's public key (address) by using the msg! macro and passing in *the_signer1.key, which dereferences the_signer1 and accesses its key field.
Next, let's write a test for the program above:
import * as anchor from "@coral-xyz/anchor";
import { Program } from "@coral-xyz/anchor";
import { Day14 } from "../target/types/day14";

describe("Day14", () => {
  // Configure the client to use the local cluster.
  anchor.setProvider(anchor.AnchorProvider.env());

  const program = anchor.workspace.Day14 as Program<Day14>;

  it("Is signed by a single signer", async () => {
    // Add your test here.
    const tx = await program.methods.initialize().accounts({
      signer1: program.provider.publicKey
    }).rpc();

    console.log("The signer1: ", program.provider.publicKey.toBase58());
  });
});
In the test, we pass our wallet account as the signer1 account, call the initialize function, and finally print the wallet account to the console to confirm it matches the one logged by the program.
Exercise: after running the test, what do you notice in the output of shell_1 (the command terminal) and shell_3 (the log terminal)?
In Solana, we can also have multiple signers sign a transaction; think of it as batching a bunch of signatures into a single transaction. One use case is carrying out a multisig transaction in one transaction.
To do this, we simply add more Signer fields to the accounts struct in the program, then make sure the necessary accounts are passed when calling the function:
use anchor_lang::prelude::*;

declare_id!("Hf96fZsgq9R6Y1AHfyGbhi9EAmaQw2oks8NqakS6XVt1");

#[program]
pub mod day14 {
    use super::*;

    pub fn initialize(ctx: Context<Initialize>) -> Result<()> {
        let the_signer1: &mut Signer = &mut ctx.accounts.signer1;
        let the_signer2: &mut Signer = &mut ctx.accounts.signer2;

        msg!("The signer1: {:?}", *the_signer1.key);
        msg!("The signer2: {:?}", *the_signer2.key);

        Ok(())
    }
}

#[derive(Accounts)]
pub struct Initialize<'info> {
    pub signer1: Signer<'info>,
    pub signer2: Signer<'info>,
}
The example above is similar to the single-signer one, with one notable difference: here we added another signer account (signer2) to the Initialize struct, and in the initialize function we log both signers' public keys.
Calling the initialize function with multiple signers differs from the single-signer case. The test below shows how to invoke the function with multiple signers:
import * as anchor from "@coral-xyz/anchor";
import { Program } from "@coral-xyz/anchor";
import { Day14 } from "../target/types/day14";

describe("Day14", () => {
  // Configure the client to use the local cluster.
  anchor.setProvider(anchor.AnchorProvider.env());

  const program = anchor.workspace.Day14 as Program<Day14>;

  // generate a signer to call our function
  let myKeypair = anchor.web3.Keypair.generate();

  it("Is signed by multiple signers", async () => {
    // Add your test here.
    const tx = await program.methods
      .initialize()
      .accounts({
        signer1: program.provider.publicKey,
        signer2: myKeypair.publicKey,
      })
      .signers([myKeypair])
      .rpc();

    console.log("The signer1: ", program.provider.publicKey.toBase58());
    console.log("The signer2: ", myKeypair.publicKey.toBase58());
  });
});
So what's different in this test? First, the .signers() method, which takes as an argument an array of signers that will sign the transaction. But the array contains only one signer instead of two. That's because Anchor automatically passes the provider's wallet account as a signer, so we don't need to add it to the signers array again.
The second change is the myKeypair variable, which stores a keypair (the public key and corresponding private key used to access an account) randomly generated by the anchor.web3 module. In the test, we assign the keypair's public key to the signer2 account, which is why the keypair is passed as an argument to the .signers([myKeypair]) method.
Run the test multiple times and you'll notice that the signer1 pubkey never changes, but the signer2 pubkey does. That's because the wallet account assigned to signer1 comes from the provider (the Solana wallet on your local machine), while the account assigned to signer2 is randomly generated on each run of anchor test --skip-local-validator.
Exercise: create another function (call it whatever you like) that requires three signers (the provider wallet account and two randomly generated accounts), and write a test for it.
"onlyOwner" is a pattern commonly used in Solidity to restrict a function's access to the contract owner. Using Anchor's #[access_control] attribute, we can implement the same pattern in a Solana program, restricting function access to a pubkey (the owner's address).
Here's an example of implementing "onlyOwner" functionality in Solana:
use anchor_lang::prelude::*;

declare_id!("Hf96fZsgq9R6Y1AHfyGbhi9EAmaQw2oks8NqakS6XVt1");

// NOTE: Replace with your wallet's public key
const OWNER: &str = "8os8PKYmeVjU1mmwHZZNTEv5hpBXi5VvEKGzykduZAik";

#[program]
pub mod day14 {
    use super::*;

    #[access_control(check(&ctx))]
    pub fn initialize(ctx: Context<OnlyOwner>) -> Result<()> {
        // Function logic...
        msg!("Holla, I'm the owner.");
        Ok(())
    }
}

fn check(ctx: &Context<OnlyOwner>) -> Result<()> {
    // Check if signer === owner
    require_keys_eq!(
        ctx.accounts.signer_account.key(),
        OWNER.parse::<Pubkey>().unwrap(),
        OnlyOwnerError::NotOwner
    );

    Ok(())
}

#[derive(Accounts)]
pub struct OnlyOwner<'info> {
    signer_account: Signer<'info>,
}

// An enum for custom error codes
#[error_code]
pub enum OnlyOwnerError {
    #[msg("Only owner can call this function!")]
    NotOwner,
}
In the code above, the OWNER variable stores the public key (address) associated with my local Solana wallet. Before testing, be sure to replace the OWNER variable with your own wallet's public key; you can retrieve it by running the solana address command.
The #[access_control] attribute executes the given access-control function before running the main instruction. When initialize is called, the access-control function (check) runs first. check takes a reference to the context as an argument and verifies that the transaction signer equals the value of the OWNER variable. The require_keys_eq! macro asserts that the two public keys are equal: if they are, initialize executes; otherwise, it reverts with the custom NotOwner error.
In the test below, we call the initialize function and sign the transaction with the owner's keypair:
import * as anchor from "@coral-xyz/anchor";
import { Program } from "@coral-xyz/anchor";
import { Day14 } from "../target/types/day14";

describe("day14", () => {
  // Configure the client to use the local cluster.
  anchor.setProvider(anchor.AnchorProvider.env());

  const program = anchor.workspace.Day14 as Program<Day14>;

  it("Is called by the owner", async () => {
    // Add your test here.
    const tx = await program.methods
      .initialize()
      .accounts({
        signerAccount: program.provider.publicKey,
      })
      .rpc();

    console.log("Transaction hash:", tx);
  });
});
We call the initialize function and pass the provider's wallet account (the local Solana wallet) as signerAccount, whose Signer<'info> type verifies that the wallet account actually signed the transaction. Also remember that Anchor implicitly signs every transaction with the provider's wallet account.
Run the test with anchor test --skip-local-validator; if everything is set up correctly, the test should pass:
Calling initialize and signing the transaction with any keypair other than the owner's will throw an error, since the function call is restricted to the owner:
describe("day14", () => {
  // Configure the client to use the local cluster.
  anchor.setProvider(anchor.AnchorProvider.env());

  const program = anchor.workspace.Day14 as Program<Day14>;

  let Keypair = anchor.web3.Keypair.generate();

  it("Is NOT called by the owner", async () => {
    // Add your test here.
    const tx = await program.methods
      .initialize()
      .accounts({
        signerAccount: Keypair.publicKey,
      })
      .signers([Keypair])
      .rpc();

    console.log("Transaction hash:", tx);
  });
});
Here we generated a random keypair and used it to sign the transaction. Let's run the test again:
As expected, we get an error because the signer's public key doesn't equal the owner's public key.
To make the owner changeable within the program, the public key assigned as the owner would need to be stored on-chain. Discussion of "storage" in Solana will come in a future tutorial.
The owner can also be changed by redeploying the bytecode.
Exercise: upgrade the program above to support setting a new owner, and write a test for it.
Two cross-chain transactions (L1 -> L2) were initiated:
https://sepolia.etherscan.io/tx/0xd2db346f588d3eb9c90b0a9da2e8382d5c3275916288131f3701ae62bae56c44
https://sepolia.etherscan.io/tx/0x13a2ac45a973524d704463fe16d4d8e3410efc09cf6d3e53b6052c8a17ece4a4
bridge_db=# select * FROM sync.block;
id | block_num | block_hash | parent_hash | network_id | received_at
----+-----------+--------------------------------------------------------------------+--------------------------------------------------------------------+------------+------------------------
0 | | \x5c7830 | | | 1970-01-01 00:00:00+00
1 | 7092388 | \xe984c5267a0e675df842c9ad91a1d0825080cfc581791250955a7a7dd41b71ab | \x5fcd8aa7d9b8650488fcfcf93db2c2dfb03100d02e2e0f927b0ee8d758dfb2b4 | 0 | 2024-11-17 01:21:24+00
2 | 7092528 | \xcd588c08198efdbe40a4bfc731f906594dd0e80f78acff528a84bf36c8341f03 | \xec136b1e2d7fe0fd4fdb4c4c11cb1b5cc578c97e26d6073652a1d31702b12999 | 0 | 2024-11-17 01:51:36+00
bridge_db=# SELECT * FROM sync.deposit;
leaf_type | network_id | orig_net | orig_addr | amount | dest_net | dest_addr | block_id | deposit_cnt | tx_hash | metadata | id | ready_for_claim
-----------+------------+----------+--------------------------------------------+------------------------+----------+--------------------------------------------+----------+-------------+--------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----+-----------------
0 | 0 | 0 | \x3ec3d234625cde1e0f3267014e26e193610e50ac | 1000000000000000000000 | 1 | \xc2df13b6ad0753e0547a318f65f99ac62aec6e2b | 1 | 0 | \xd2db346f588d3eb9c90b0a9da2e8382d5c3275916288131f3701ae62bae56c44 | \x000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000084d4158546f6b656e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000034d41580000000000000000000000000000000000000000000000000000000000 | 1 | f
0 | 0 | 0 | \x3ec3d234625cde1e0f3267014e26e193610e50ac | 2000000000000000000000 | 1 | \xc2df13b6ad0753e0547a318f65f99ac62aec6e2b | 2 | 1 | \x13a2ac45a973524d704463fe16d4d8e3410efc09cf6d3e53b6052c8a17ece4a4 | \x000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000084d4158546f6b656e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000034d41580000000000000000000000000000000000000000000000000000000000 | 2 | f
After waiting a while, ready_for_claim becomes true:
leaf_type | network_id | orig_net | orig_addr | amount | dest_net | dest_addr | block_id | deposit_cnt | tx_hash | metadata | id | ready_for_claim
-----------+------------+----------+--------------------------------------------+------------------------+----------+--------------------------------------------+----------+-------------+--------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----+-----------------
0 | 0 | 0 | \x3ec3d234625cde1e0f3267014e26e193610e50ac | 1000000000000000000000 | 1 | \xc2df13b6ad0753e0547a318f65f99ac62aec6e2b | 1 | 0 | \xd2db346f588d3eb9c90b0a9da2e8382d5c3275916288131f3701ae62bae56c44 | \x000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000084d4158546f6b656e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000034d41580000000000000000000000000000000000000000000000000000000000 | 1 | t
0 | 0 | 0 | \x3ec3d234625cde1e0f3267014e26e193610e50ac | 2000000000000000000000 | 1 | \xc2df13b6ad0753e0547a318f65f99ac62aec6e2b | 2 | 1 | \x13a2ac45a973524d704463fe16d4d8e3410efc09cf6d3e53b6052c8a17ece4a4 | \x000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a0000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000084d4158546f6b656e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000034d41580000000000000000000000000000000000000000000000000000000000 | 2 | t
The deposits are then picked up by the claimtxman module, which calls the L2 bridge contract to transfer out the corresponding assets.
Note: on first deployment, the bridge claim address must be pre-funded with tokens in the genesis allocs, to cover fees for the first cross-chain claims.
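For reference, such pre-funding is typically expressed as an entry in the genesis file. The sketch below is purely illustrative (the account name, address, and exact schema depend on your CDK stack's genesis format; consult your deployment's genesis file):

```json
{
  "genesis": [
    {
      "accountName": "claimtxman sender (illustrative)",
      "address": "0x0000000000000000000000000000000000000000",
      "balance": "100000000000000000000"
    }
  ]
}
```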
2024-11-18T08:36:32.892Z ERROR ethtxmanager/ethtxmanager.go:270 failed to estimate gas: execution reverted (0x09bde339), data: 1489ed1000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000520000000000000000000000000000000000000000000000000000000000000000cbec6bce507ef87ba7cb1adaca9d65f2791bfc03cc77c6602c4e6f7be7eb23e100000000000000000000000047ce1498ae40c634d60314f447816a2184602a7a20227cbcef731b6cbdc0edd5850c63dc7fbc27fb58d12cd4d08298799cf66a0512c230867d3375a1f4669e7267dad2c31ebcddbaccea6abd67798ceae35ae7611c665b6069339e6812d015e239594aa71c4e217288e374448c358f6459e057c91ad2ef514570b5dea21508e214430daadabdd23433820000fe98b1c6fa81d5c512b86fbf87bd7102775f8ef1da7e8014dc7aab225503237c7927c032e589e9a01a0eab9fda82ffe834c2a4977f36cc9bcb1f2327bdac5fb48ffbeb9656efcdf70d2656c328903e9fb96e4e3f470c447b3053cc68d68cf0ad317fe10aa7f254222e47ea07f3c1c3aacb74e5926a67262f261c1ed3120576ab877b49a81fb8aac51431858662af6b1a8138a44e9d0812d032340369459ccc98b109347cc874c7202dceecc3dbb09d7f9e5658f1ca3a92d22be1fa28f9945205d853e2c866d9b649301ac9857b07b92e4865283d3d5e2b711ea5f85cb2da71965382ece050508d3d008bbe4df5458f70bd3e1bfcc50b34222b43cd28cbe39a3bab6e464664a742161df99c607638e415ced49d0cd719518539ed5f561f81d07fe40d3ce85508e0332465313e60ad9ae271d580022ffca4fbe4d72d38d18e7a6e20d020a1d1e5a8f411291ab95521386fa538ddfe6a391d4a3669cc64c40f07895f031550b32f7d73205a69c214a8ef3cdf996c495e3fd24c00873f30ea6b2bfabfd38de1c3da357d1fefe203573fdad22f675cb5cfabbec0a041b1b31274f70193da8e90cfc4d6dc054c7cd26d09c1dadd064ec52b6ddcfa0cb144d65d9e131c0c88f8004f90d363034d839aa7760167b5302c36d2c2f6714b41782070b10c51c178bd923182d28502f36e19b079b190008c46d19c399331fd60b6b6bde898bd1dd0a71ee7ec7ff7124cc3d374846614389e7b5975b77c4059bc42b810673dbb6f8b951e5b636bdf24afd2a3cbe96ce8600e8a79731b4a56c697596e0bff7b73f413bdbc75069b002b00d713f
ae8d6450428246f1b794d56717050fdb77bbe094ac2ee6af54a153e2fb8ce1d31a86c4fdd523783b910bedf7db58a46ba6ce48ac3ca194f3cf2275e {"pid": 40}
github.com/0xPolygon/zkevm-ethtx-manager/ethtxmanager.(*Client).add
/go/pkg/mod/github.com/0x!polygon/zkevm-ethtx-manager@v0.2.1/ethtxmanager/ethtxmanager.go:270
github.com/0xPolygon/zkevm-ethtx-manager/ethtxmanager.(*Client).Add
/go/pkg/mod/github.com/0x!polygon/zkevm-ethtx-manager@v0.2.1/ethtxmanager/ethtxmanager.go:177
github.com/0xPolygon/cdk/aggregator.(*Aggregator).settleDirect
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:587
github.com/0xPolygon/cdk/aggregator.(*Aggregator).sendFinalProof
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:508
2024-11-18T08:36:32.892Z ERROR aggregator/aggregator.go:589 Error Adding TX to ethTxManager: failed to estimate gas: execution reverted (0x09bde339), data: 1489ed1000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000520000000000000000000000000000000000000000000000000000000000000000cbec6bce507ef87ba7cb1adaca9d65f2791bfc03cc77c6602c4e6f7be7eb23e100000000000000000000000047ce1498ae40c634d60314f447816a2184602a7a20227cbcef731b6cbdc0edd5850c63dc7fbc27fb58d12cd4d08298799cf66a0512c230867d3375a1f4669e7267dad2c31ebcddbaccea6abd67798ceae35ae7611c665b6069339e6812d015e239594aa71c4e217288e374448c358f6459e057c91ad2ef514570b5dea21508e214430daadabdd23433820000fe98b1c6fa81d5c512b86fbf87bd7102775f8ef1da7e8014dc7aab225503237c7927c032e589e9a01a0eab9fda82ffe834c2a4977f36cc9bcb1f2327bdac5fb48ffbeb9656efcdf70d2656c328903e9fb96e4e3f470c447b3053cc68d68cf0ad317fe10aa7f254222e47ea07f3c1c3aacb74e5926a67262f261c1ed3120576ab877b49a81fb8aac51431858662af6b1a8138a44e9d0812d032340369459ccc98b109347cc874c7202dceecc3dbb09d7f9e5658f1ca3a92d22be1fa28f9945205d853e2c866d9b649301ac9857b07b92e4865283d3d5e2b711ea5f85cb2da71965382ece050508d3d008bbe4df5458f70bd3e1bfcc50b34222b43cd28cbe39a3bab6e464664a742161df99c607638e415ced49d0cd719518539ed5f561f81d07fe40d3ce85508e0332465313e60ad9ae271d580022ffca4fbe4d72d38d18e7a6e20d020a1d1e5a8f411291ab95521386fa538ddfe6a391d4a3669cc64c40f07895f031550b32f7d73205a69c214a8ef3cdf996c495e3fd24c00873f30ea6b2bfabfd38de1c3da357d1fefe203573fdad22f675cb5cfabbec0a041b1b31274f70193da8e90cfc4d6dc054c7cd26d09c1dadd064ec52b6ddcfa0cb144d65d9e131c0c88f8004f90d363034d839aa7760167b5302c36d2c2f6714b41782070b10c51c178bd923182d28502f36e19b079b190008c46d19c399331fd60b6b6bde898bd1dd0a71ee7ec7ff7124cc3d374846614389e7b5975b77c4059bc42b810673dbb6f8b951e5b636bdf24afd2a3cbe96ce8600e8a79731b4a56c697596e0bff
7b73f413bdbc75069b002b00d713fae8d6450428246f1b794d56717050fdb77bbe094ac2ee6af54a153e2fb8ce1d31a86c4fdd523783b910bedf7db58a46ba6ce48ac3ca194f3cf2275e {"pid": 40, "version": "v0.4.0-beta5", "module": "aggregator"}
github.com/0xPolygon/cdk/aggregator.(*Aggregator).settleDirect
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:589
github.com/0xPolygon/cdk/aggregator.(*Aggregator).sendFinalProof
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:508
2024-11-18T08:36:32.892Z ERROR aggregator/aggregator.go:591 Error to add batch verification tx to eth tx manager: failed to estimate gas: execution reverted (0x09bde339), data: 1489ed1000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000520000000000000000000000000000000000000000000000000000000000000000cbec6bce507ef87ba7cb1adaca9d65f2791bfc03cc77c6602c4e6f7be7eb23e100000000000000000000000047ce1498ae40c634d60314f447816a2184602a7a20227cbcef731b6cbdc0edd5850c63dc7fbc27fb58d12cd4d08298799cf66a0512c230867d3375a1f4669e7267dad2c31ebcddbaccea6abd67798ceae35ae7611c665b6069339e6812d015e239594aa71c4e217288e374448c358f6459e057c91ad2ef514570b5dea21508e214430daadabdd23433820000fe98b1c6fa81d5c512b86fbf87bd7102775f8ef1da7e8014dc7aab225503237c7927c032e589e9a01a0eab9fda82ffe834c2a4977f36cc9bcb1f2327bdac5fb48ffbeb9656efcdf70d2656c328903e9fb96e4e3f470c447b3053cc68d68cf0ad317fe10aa7f254222e47ea07f3c1c3aacb74e5926a67262f261c1ed3120576ab877b49a81fb8aac51431858662af6b1a8138a44e9d0812d032340369459ccc98b109347cc874c7202dceecc3dbb09d7f9e5658f1ca3a92d22be1fa28f9945205d853e2c866d9b649301ac9857b07b92e4865283d3d5e2b711ea5f85cb2da71965382ece050508d3d008bbe4df5458f70bd3e1bfcc50b34222b43cd28cbe39a3bab6e464664a742161df99c607638e415ced49d0cd719518539ed5f561f81d07fe40d3ce85508e0332465313e60ad9ae271d580022ffca4fbe4d72d38d18e7a6e20d020a1d1e5a8f411291ab95521386fa538ddfe6a391d4a3669cc64c40f07895f031550b32f7d73205a69c214a8ef3cdf996c495e3fd24c00873f30ea6b2bfabfd38de1c3da357d1fefe203573fdad22f675cb5cfabbec0a041b1b31274f70193da8e90cfc4d6dc054c7cd26d09c1dadd064ec52b6ddcfa0cb144d65d9e131c0c88f8004f90d363034d839aa7760167b5302c36d2c2f6714b41782070b10c51c178bd923182d28502f36e19b079b190008c46d19c399331fd60b6b6bde898bd1dd0a71ee7ec7ff7124cc3d374846614389e7b5975b77c4059bc42b810673dbb6f8b951e5b636bdf24afd2a3cbe96ce8600e8a7
9731b4a56c697596e0bff7b73f413bdbc75069b002b00d713fae8d6450428246f1b794d56717050fdb77bbe094ac2ee6af54a153e2fb8ce1d31a86c4fdd523783b910bedf7db58a46ba6ce48ac3ca194f3cf2275e {"pid": 40, "monitoredTxId": "0x0000000000000000000000000000000000000000000000000000000000000000", "from": "0x47ce1498Ae40c634D60314F447816a2184602A7a", "to": "0x12dF12067fcCf1830424b419E338101E8FC09d41"}
github.com/0xPolygon/cdk/aggregator.(*Aggregator).settleDirect
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:591
github.com/0xPolygon/cdk/aggregator.(*Aggregator).sendFinalProof
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:508
// settleDirect sends the final proof to the L1 smart contract directly.
func (a *Aggregator) settleDirect(
	ctx context.Context,
	proof *state.Proof,
	inputs ethmanTypes.FinalProofInputs) bool {
	// add batch verification to be monitored
	sender := common.HexToAddress(a.cfg.SenderAddress)
	to, data, err := a.etherman.BuildTrustedVerifyBatchesTxData(
func (etherMan *Client) BuildTrustedVerifyBatchesTxData(
	lastVerifiedBatch, newVerifiedBatch uint64, inputs *ethmanTypes.FinalProofInputs, beneficiary common.Address,
) (to *common.Address, data []byte, err error) {
	opts, err := etherMan.generateRandomAuth()
	if err != nil {
		return nil, nil, fmt.Errorf("failed to build trusted verify batches, err: %w", err)
	}
	opts.NoSend = true
	// force nonce, gas limit and gas price to avoid querying it from the chain
	opts.Nonce = big.NewInt(1)
	opts.GasLimit = uint64(1)
	opts.GasPrice = big.NewInt(1)

	var newLocalExitRoot [32]byte
	copy(newLocalExitRoot[:], inputs.NewLocalExitRoot)
	var newStateRoot [32]byte
	copy(newStateRoot[:], inputs.NewStateRoot)
	proof, err := convertProof(inputs.FinalProof.Proof)
	if err != nil {
		log.Errorf("error converting proof. Error: %v, Proof: %s", err, inputs.FinalProof.Proof)
		return nil, nil, err
	}

	const pendStateNum = 0 // TODO hardcoded for now until we implement the pending state feature
	tx, err := etherMan.Contracts.Banana.RollupManager.VerifyBatchesTrustedAggregator(
		&opts,
		etherMan.RollupID,
		pendStateNum,
		lastVerifiedBatch,
		newVerifiedBatch,
		newLocalExitRoot,
		newStateRoot,
		beneficiary,
		proof,
	)
	if err != nil {
		if parsedErr, ok := TryParseError(err); ok {
			err = parsedErr
		}
		return nil, nil, err
	}

	return tx.To(), tx.Data(), nil
}
The contract being called is Banana.RollupManager.VerifyBatchesTrustedAggregator.
Looking up the revert selector at https://chaintool.tech/querySelector:
0x09bde339 -> InvalidProof()
function verifyBatchesTrustedAggregator(
    uint32 rollupID,
    uint64 pendingStateNum,
    uint64 initNumBatch,
    uint64 finalNewBatch,
    bytes32 newLocalExitRoot,
    bytes32 newStateRoot,
    address beneficiary,
    bytes32[24] calldata proof
) external onlyRole(_TRUSTED_AGGREGATOR_ROLE) {
    RollupData storage rollup = rollupIDToRollupData[rollupID];

    _verifyAndRewardBatches(
        rollup,
        pendingStateNum,
        initNumBatch,
        finalNewBatch,
        newLocalExitRoot,
        newStateRoot,
        beneficiary,
        proof
    );

    // Consolidate state
    rollup.lastVerifiedBatch = finalNewBatch;
    rollup.batchNumToStateRoot[finalNewBatch] = newStateRoot;
    rollup.lastLocalExitRoot = newLocalExitRoot;

    // Clean pending state if any
    if (rollup.lastPendingState > 0) {
        rollup.lastPendingState = 0;
        rollup.lastPendingStateConsolidated = 0;
    }

    // Interact with globalExitRootManager
    globalExitRootManager.updateExitRoot(getRollupExitRoot());

    emit VerifyBatchesTrustedAggregator(
        rollupID,
        finalNewBatch,
        newStateRoot,
        newLocalExitRoot,
        msg.sender
    );
}
function _verifyAndRewardBatches(
    RollupData storage rollup,
    uint64 pendingStateNum,
    uint64 initNumBatch,
    uint64 finalNewBatch,
    bytes32 newLocalExitRoot,
    bytes32 newStateRoot,
    address beneficiary,
    bytes32[24] calldata proof
) internal virtual {
    bytes32 oldStateRoot;
    uint64 currentLastVerifiedBatch = _getLastVerifiedBatch(rollup);

    if (initNumBatch < rollup.lastVerifiedBatchBeforeUpgrade) {
        revert InitBatchMustMatchCurrentForkID();
    }

    // Use pending state if specified, otherwise use consolidated state
    if (pendingStateNum != 0) {
        // Check that pending state exist
        // Already consolidated pending states can be used aswell
        if (pendingStateNum > rollup.lastPendingState) {
            revert PendingStateDoesNotExist();
        }

        // Check choosen pending state
        PendingState storage currentPendingState = rollup
            .pendingStateTransitions[pendingStateNum];

        // Get oldStateRoot from pending batch
        oldStateRoot = currentPendingState.stateRoot;

        // Check initNumBatch matches the pending state
        if (initNumBatch != currentPendingState.lastVerifiedBatch) {
            revert InitNumBatchDoesNotMatchPendingState();
        }
    } else {
        // Use consolidated state
        oldStateRoot = rollup.batchNumToStateRoot[initNumBatch];
        if (oldStateRoot == bytes32(0)) {
            revert OldStateRootDoesNotExist();
        }

        // Check initNumBatch is inside the range, sanity check
        if (initNumBatch > currentLastVerifiedBatch) {
            revert InitNumBatchAboveLastVerifiedBatch();
        }
    }

    // Check final batch
    if (finalNewBatch <= currentLastVerifiedBatch) {
        revert FinalNumBatchBelowLastVerifiedBatch();
    }

    // Get snark bytes
    bytes memory snarkHashBytes = _getInputSnarkBytes(
        rollup,
        initNumBatch,
        finalNewBatch,
        newLocalExitRoot,
        oldStateRoot,
        newStateRoot
    );

    // Calulate the snark input
    uint256 inputSnark = uint256(sha256(snarkHashBytes)) % _RFIELD;

    // Verify proof
    if (!rollup.verifier.verifyProof(proof, [inputSnark])) {
        revert InvalidProof(); // reverts here
    }

    // Pay POL rewards
    uint64 newVerifiedBatches = finalNewBatch - currentLastVerifiedBatch;
    pol.safeTransfer(
        beneficiary,
        calculateRewardPerBatch() * newVerifiedBatches
    );

    // Update aggregation parameters
    totalVerifiedBatches += newVerifiedBatches;
    lastAggregationTimestamp = uint64(block.timestamp);

    // Callback to the rollup address
    rollup.rollupContract.onVerifyBatches(
        finalNewBatch,
        newStateRoot,
        msg.sender
    );
}
contracts/verifiers/FflonkVerifier_12.sol
function verifyProof(
    bytes32[24] calldata proof,
    uint256[1] calldata pubSignals
) public view returns (bool) {
    .....
This is where the contract verifies the circuit proof; the internals don't need to be unpacked here.
As for the circuit: the contracts currently ship three circuit versions (10, 11, and 12), so first rule out a circuit version mismatch.
contracts/PolygonZkEVM.sol
constructor(
    IPolygonZkEVMGlobalExitRoot _globalExitRootManager,
    IERC20Upgradeable _matic,
    IVerifierRollup _rollupVerifier, // the matching FflonkVerifier version is passed in at construction
    IPolygonZkEVMBridge _bridgeAddress,
    uint64 _chainID,
    uint64 _forkID
) {
    globalExitRootManager = _globalExitRootManager;
    matic = _matic;
    rollupVerifier = _rollupVerifier;
    bridgeAddress = _bridgeAddress;
    chainID = _chainID;
    forkID = _forkID;
}
deployment/v2/4_createRollup.ts
let verifierContract;
if (realVerifier === true) {
    let verifierName = `FflonkVerifier_${forkID}`;
    const VerifierRollup = await ethers.getContractFactory(verifierName, deployer);
    verifierContract = await VerifierRollup.deploy();
    await verifierContract.waitForDeployment();
} else
deployment/v2/create_rollup_parameters.json
{
"realVerifier": true,
...
"forkID": 12,
...
}
So the version configured in the contracts is correct: fork 12.
https://hub.docker.com/r/hermeznetwork/zkevm-prover/tags?name=v8.0.0
According to the official version matrix at https://github.com/0xPolygon/kurtosis-cdk/blob/60f6e4cdbe0c218d14eb443fb7c8618a06c485b5/CDK_VERSION_MATRIX.MD
the ZkEVM Prover corresponding to fork 12 is v8.0.0-RC14-fork.12.
Checking the prover keys at https://storage.googleapis.com/zkevm/, there is only one key for fork 12:
<Contents>
<Key>zkproverc/v8.0.0-rc.9-fork.12.tgz</Key>
<Generation>1725622722765425</Generation>
<MetaGeneration>1</MetaGeneration>
<LastModified>2024-09-06T11:38:42.770Z</LastModified>
<ETag>"93c4e4ea1755059d9e1f6325c089472c"</ETag>
<Size>45066205192</Size>
</Contents>
The suspicion at this point is that ZkEVM Prover v8.0.0-RC14-fork.12 is incompatible with the key v8.0.0-rc.9-fork.12.tgz.
Since v8.0.0-rc.9-fork.12.tgz is the only key available, downgrade the ZkEVM Prover to v8.0.0-RC9-fork.12:
docker pull hermeznetwork/zkevm-prover:v8.0.0-RC9-fork.12
After the downgrade, the problem persists, which rules out a version mismatch.
Continuing to analyze the logs:
2024-11-18T10:02:05.927Z WARN aggregator/aggregator.go:653 NewLocalExitRoot and NewStateRoot look like a mock values, using values from executor instead: LER: 000000..000000, SR: 959a88..208620 {"pid": 39, "version": "v0.4.0-beta5", "module": "aggregator", "prover": "PROVER-2", "proverId": "7f32fd37-7b07-43de-b0e8-b7a861d5f495", "proverAddr": "172.18.34.135:60064", "recursiveProofId": "b47f3cf8-42d8-43b3-a529-7c37ebd7351a", "batches": "1-368", "finalProofId": "40ac43d6-7a7b-498e-b083-30d1a0656596"}
github.com/0xPolygon/cdk/aggregator.(*Aggregator).buildFinalProof
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:653
github.com/0xPolygon/cdk/aggregator.(*Aggregator).tryBuildFinalProof
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:730
github.com/0xPolygon/cdk/aggregator.(*Aggregator).Channel
/go/src/github.com/0xPolygon/cdk/aggregator/aggregator.go:442
github.com/0xPolygon/cdk/aggregator/prover._AggregatorService_Channel_Handler
/go/src/github.com/0xPolygon/cdk/aggregator/prover/aggregator_grpc.pb.go:109
google.golang.org/grpc.(*Server).processStreamingRPC
/go/pkg/mod/google.golang.org/grpc@v1.64.0/server.go:1673
google.golang.org/grpc.(*Server).handleStream
/go/pkg/mod/google.golang.org/grpc@v1.64.0/server.go:1794
google.golang.org/grpc.(*Server).serveStreams.func2.1
/go/pkg/mod/google.golang.org/grpc@v1.64.0/server.go:1029
The "mock" keyword suggests checking whether the current prover configuration has mock mode enabled:
{
    "runAggregatorClient": false,
    "runAggregatorClientMock": true,
    ...
Mock mode was indeed enabled, which means no real proof was being generated. Change the configuration as follows:
{
    "runAggregatorClient": true,
    "runAggregatorClientMock": false,
    ...
After restarting, the following errors appear:
20241118_104820_282634 5781eab 010bc40 c12aGenericCHelpers=config/c12a/c12a.chelpers_generic.bin
20241118_104820_282636 5781eab 010bc40 recursive1GenericCHelpers=config/recursive1/recursive1.chelpers_generic.bin
20241118_104820_282638 5781eab 010bc40 recursive2GenericCHelpers=config/recursive2/recursive2.chelpers_generic.bin
20241118_104820_282639 5781eab 010bc40 recursivefGenericCHelpers=config/recursivef/recursivef.chelpers_generic.bin
20241118_104820_282641 5781eab 010bc40 publicsOutput=public.json
20241118_104820_282643 5781eab 010bc40 proofFile=proof.json
20241118_104820_282645 5781eab 010bc40 keccakScriptFile=config/scripts/keccak_script.json
20241118_104820_282647 5781eab 010bc40 sha256ScriptFile=config/scripts/sha256_script.json
20241118_104820_282649 5781eab 010bc40 keccakPolsFile=keccak_pols.json
20241118_104820_282745 5781eab 010bc40 zkError: required file config.zkevmConstPols=config/zkevm/zkevm.const does not exist
20241118_104820_282754 5781eab 010bc40 zkError: required file config.c12aConstPols=config/c12a/c12a.const does not exist
20241118_104820_282759 5781eab 010bc40 zkError: required file config.recursive1ConstPols=config/recursive1/recursive1.const does not exist
20241118_104820_282764 5781eab 010bc40 zkError: required file config.recursive2ConstPols=config/recursive2/recursive2.const does not exist
20241118_104820_282768 5781eab 010bc40 zkError: required file config.recursivefConstPols=config/recursivef/recursivef.const does not exist
20241118_104820_282773 5781eab 010bc40 zkError: required file config.zkevmVerifier=config/zkevm/zkevm.verifier.dat does not exist
20241118_104820_282776 5781eab 010bc40 zkError: required file config.zkevmVerkey=config/zkevm/zkevm.verkey.json does not exist
20241118_104820_282780 5781eab 010bc40 zkError: required file config.c12aVerkey=config/c12a/c12a.verkey.json does not exist
20241118_104820_282784 5781eab 010bc40 zkError: required file config.recursive1Verifier=config/recursive1/recursive1.verifier.dat does not exist
20241118_104820_282788 5781eab 010bc40 zkError: required file config.recursive1Verkey=config/recursive1/recursive1.verkey.json does not exist
20241118_104820_282793 5781eab 010bc40 zkError: required file config.recursive2Verifier=config/recursive2/recursive2.verifier.dat does not exist
20241118_104820_282652 5781eab 010bc40 sha256PolsFile=sha256_connections.json
20241118_104820_282653 5781eab 010bc40 keccakConnectionsFile=keccak_connections.json
20241118_104820_282656 5781eab 010bc40 storageRomFile=config/scripts/storage_sm_rom.json
20241118_104820_282658 5781eab 010bc40 zkevmStarkInfo=config/zkevm/zkevm.starkinfo.json
20241118_104820_282660 5781eab 010bc40 c12aStarkInfo=config/c12a/c12a.starkinfo.json
20241118_104820_282662 5781eab 010bc40 databaseURL=postg...
20241118_104820_282664 5781eab 010bc40 dbNodesTableName=state.nodes
20241118_104820_282796 5781eab 010bc40 zkError: required file config.recursive2Verkey=config/recursive2/recursive2.verkey.json does not exist
20241118_104820_282802 5781eab 010bc40 zkError: required file config.finalVerifier=config/final/final.verifier.dat does not exist
20241118_104820_282805 5781eab 010bc40 zkError: required file config.recursivefVerifier=config/recursivef/recursivef.verifier.dat does not exist
20241118_104820_282810 5781eab 010bc40 zkError: required file config.recursivefVerkey=config/recursivef/recursivef.verkey.json does not exist
20241118_104820_282814 5781eab 010bc40 zkError: required file config.finalStarkZkey=config/final/final.fflonk.zkey does not exist
20241118_104820_282832 5781eab 010bc40 zkError: required file config.zkevmStarkInfo=config/zkevm/zkevm.starkinfo.json does not exist
20241118_104820_282837 5781eab 010bc40 zkError: required file config.c12aStarkInfo=config/c12a/c12a.starkinfo.json does not exist
20241118_104820_282841 5781eab 010bc40 zkError: required file config.recursive1StarkInfo=config/recursive1/recursive1.starkinfo.json does not exist
20241118_104820_282846 5781eab 010bc40 zkError: required file config.recursive2StarkInfo=config/recursive2/recursive2.starkinfo.json does not exist
20241118_104820_282850 5781eab 010bc40 zkError: required file config.recursivefStarkInfo=config/recursivef/recursivef.starkinfo.json does not exist
20241118_104820_282853 5781eab 010bc40 zkError: required file config.zkevmCHelpers=config/zkevm/zkevm.chelpers.bin does not exist
20241118_104820_282857 5781eab 010bc40 zkError: required file config.c12aCHelpers=config/c12a/c12a.chelpers.bin does not exist
20241118_104820_282666 5781eab 010bc40 dbProgramTableName=state.program
20241118_104820_282861 5781eab 010bc40 zkError: required file config.recursive1CHelpers=config/recursive1/recursive1.chelpers.bin does not exist
20241118_104820_282668 5781eab 010bc40 dbMultiWrite=1
20241118_104820_282865 5781eab 010bc40 zkError: required file config.recursive2CHelpers=config/recursive2/recursive2.chelpers.bin does not exist
20241118_104820_282670 5781eab 010bc40 dbMultiWriteSingleQuerySize=20971520
20241118_104820_282869 5781eab 010bc40 zkError: required file config.recursivefCHelpers=config/recursivef/recursivef.chelpers.bin does not exist
20241118_104820_282672 5781eab 010bc40 dbConnectionsPool=1
20241118_104820_282872 5781eab 010bc40 zkError: required file config.zkevmGenericCHelpers=config/zkevm/zkevm.chelpers_generic.bin does not exist
20241118_104820_282674 5781eab 010bc40 dbNumberOfPoolConnections=30
20241118_104820_282876 5781eab 010bc40 zkError: required file config.c12aGenericCHelpers=config/c12a/c12a.chelpers_generic.bin does not exist
20241118_104820_282676 5781eab 010bc40 dbMetrics=1
20241118_104820_282880 5781eab 010bc40 zkError: required file config.recursive1GenericCHelpers=config/recursive1/recursive1.chelpers_generic.bin does not exist
20241118_104820_282677 5781eab 010bc40 dbClearCache=0
20241118_104820_282883 5781eab 010bc40 zkError: required file config.recursive2GenericCHelpers=config/recursive2/recursive2.chelpers_generic.bin does not exist
20241118_104820_282679 5781eab 010bc40 dbGetTree=1
20241118_104820_282886 5781eab 010bc40 zkError: required file config.recursivefGenericCHelpers=config/recursivef/recursivef.chelpers_generic.bin does not exist
20241118_104820_282681 5781eab 010bc40 dbReadOnly=0
20241118_104820_282890 5781eab 010bc40 zkError: required file config.c12aExec=config/c12a/c12a.exec does not exist
20241118_104820_282682 5781eab 010bc40 dbReadRetryCounter=10
20241118_104820_282894 5781eab 010bc40 zkError: required file config.recursive1Exec=config/recursive1/recursive1.exec does not exist
20241118_104820_282684 5781eab 010bc40 dbReadRetryDelay=100000
20241118_104820_282898 5781eab 010bc40 zkError: required file config.recursive2Exec=config/recursive2/recursive2.exec does not exist
20241118_104820_282687 5781eab 010bc40 stateManager=1
20241118_104820_282688 5781eab 010bc40 stateManagerPurge=1
20241118_104820_282690 5781eab 010bc40 cleanerPollingPeriod=600
20241118_104820_282902 5781eab 010bc40 zkError: required file config.recursivefExec=config/recursivef/recursivef.exec does not exist
20241118_104820_282692 5781eab 010bc40 requestsPersistence=3600
20241118_104820_282905 5781eab 010bc40 zkError: main() failed calling config.check()
20241118_104820_282693 5781eab 010bc40 maxExecutorThreads=20
20241118_104820_282696 5781eab 010bc40 maxExecutorReceiveMessageSize=1073741824
20241118_104820_282698 5781eab 010bc40 maxExecutorSendMessageSize=0
20241118_104820_282700 5781eab 010bc40 maxHashDBThreads=8
20241118_104820_282702 5781eab 010bc40 dbMTCacheSize=1024
20241118_104820_282703 5781eab 010bc40 dbProgramCacheSize=1024
20241118_104820_282705 5781eab 010bc40 fullTracerTraceReserveSize=262144
20241118_104820_282717 5781eab 010bc40 --> WHOLE_PROCESS starting...
20241118_104820_283857 5781eab 010bc40 CALL STACK
20241118_104820_283862 5781eab 010bc40 0: call=zkProver(_Z14printCallStackv+0x45) [0x55aedb76ded5]
20241118_104820_283864 5781eab 010bc40 1: call=zkProver(_Z11exitProcessv+0x16) [0x55aedb792dd6]
20241118_104820_283865 5781eab 010bc40 2: call=zkProver(main+0x16f1) [0x55aec780c661]
20241118_104820_283867 5781eab 010bc40 3: call=/lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7f7b80d09d90]
20241118_104820_283868 5781eab 010bc40 4: call=/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f7b80d09e40]
20241118_104820_283869 5781eab 010bc40 5: call=zkProver(_start+0x25) [0x55aec785e115]
20241118_104820_284026 5781eab 010bc40 MEMORY INFO
MemTotal: 1027509.960938 MB
MemFree: 765092.031250 MB
MemAvailable: 1012630.906250 MB
Buffers: 1117.746094 MB
Cached: 251721.140625 MB
SwapCached: 0.000000 MB
SwapTotal: 0.000000 MB
SwapFree: 0.000000 MB
VM: 504.105469 MB
RSS: 0.006591 MB
20241118_104820_284047 5781eab 010bc40 PROCESS INFO
Pid: 1
User time: 0.010000 s
Kernel time: 0.010000 s
Total time: 0.020000 s
Num threads: 1
Virtual mem: 504 MB
From the errors above, the current configuration is not resolving the artifact directories correctly. Add the following parameters to the prover config:
{
"outputPath": "runtime/output",
"configPath": "/usr/src/app/config"
}
With these paths in place, the prover starts correctly, begins computing the proof, and CPU utilization is fully saturated:
20241118_110311_263112 06517aa 4b83640 <-- SAVE_PUBLICS_JSON_BATCH_PROOF done: 0.000241 s
20241118_110311_263114 06517aa 4b83640 --> STARK_PROOF_BATCH_PROOF starting...
20241118_110311_263212 06517aa 4b83640 --> STARK_INITIALIZATION starting...
20241118_110311_263248 06517aa 4b83640 <-- STARK_INITIALIZATION done: 0.000035 s
20241118_110311_263249 06517aa 4b83640 --> STARK_STEP_1 starting...
20241118_110311_263251 06517aa 4b83640 --> STARK_STEP_1_LDE_AND_MERKLETREE starting...
20241118_110311_263252 06517aa 4b83640 --> STARK_STEP_1_LDE starting...
20241118_110534_424148 06517aa 4b83640 <-- STARK_STEP_1_LDE done: 143.160871 s
20241118_110534_424181 06517aa 4b83640 --> STARK_STEP_1_MERKLETREE starting...
https://sepolia.etherscan.io/address/0x47ce1498Ae40c634D60314F447816a2184602A7a
The Aggregator has the following configuration, which requires the Sequencer's private key:
[Aggregator.SequencerPrivateKey]
Path = "/etc/cdk/sequencer.keystore"
Password = "password"
Looking at the source code:
if !cfg.SyncModeOnlyEnabled && cfg.SettlementBackend == AggLayer {
	aggLayerClient = agglayer.NewAggLayerClient(cfg.AggLayerURL)

	sequencerPrivateKey, err = cdkcommon.NewKeyFromKeystore(cfg.SequencerPrivateKey)
	if err != nil {
		return nil, err
	}
}
That is, sequencerPrivateKey is only loaded when sync-only mode is disabled and the settlement backend is AggLayer.
Next, look at where SequencerPrivateKey is actually used:
func (a *Aggregator) settleWithAggLayer(
	ctx context.Context,
	proof *state.Proof,
	inputs ethmanTypes.FinalProofInputs) bool {
	proofStrNo0x := strings.TrimPrefix(inputs.FinalProof.Proof, "0x")
	proofBytes := common.Hex2Bytes(proofStrNo0x)

	tx := agglayer.Tx{
		LastVerifiedBatch: cdkTypes.ArgUint64(proof.BatchNumber - 1),
		NewVerifiedBatch:  cdkTypes.ArgUint64(proof.BatchNumberFinal),
		ZKP: agglayer.ZKP{
			NewStateRoot:     common.BytesToHash(inputs.NewStateRoot),
			NewLocalExitRoot: common.BytesToHash(inputs.NewLocalExitRoot),
			Proof:            cdkTypes.ArgBytes(proofBytes),
		},
		RollupID: a.etherman.GetRollupId(),
	}

	signedTx, err := tx.Sign(a.sequencerPrivateKey)
	if err != nil {
		a.logger.Errorf("failed to sign tx: %v", err)
		a.handleFailureToAddVerifyBatchToBeMonitored(ctx, proof)
		return false
	}
switch a.cfg.SettlementBackend {
case AggLayer:
	if success := a.settleWithAggLayer(ctx, proof, inputs); !success {
		continue
	}
default:
	if success := a.settleDirect(ctx, proof, inputs); !success {
		continue
	}
}
The SettlementBackend configuration option selects between these two paths: AggLayer settles through the AggLayer service (and signs the tx with the sequencer key), while any other value falls through to settleDirect.
Summary
SequencerPrivateKey is only needed when settling through the AggLayer, so in a standalone deployment the Aggregator does not need Aggregator.SequencerPrivateKey configured.
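A minimal sketch of the corresponding Aggregator section for a standalone deployment, assuming the non-AggLayer backend value is the string "direct" (check the config reference of your cdk version):

```toml
[Aggregator]
# Settle directly on L1 instead of through the AggLayer;
# with this backend no Aggregator.SequencerPrivateKey is required.
SettlementBackend = "direct"
```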