
zkSync block revert

Abnormal scenario

Note: a height that has already been executed cannot be reverted; otherwise the error Attempt to revert already executed blocks is reported.
In that case, force-stop the prover and keep sending transactions to create a scenario where execution lags behind.

Query the latest data

zk f cargo run --bin block_reverter --release -- print-suggested-values --json

The returned data contains the following fields:

  • committed: latest committed height
  • verified: latest verified height
  • executed: latest locally executed height
  • last_executed_l1_batch_number: suggested rollback target
  • nonce: nonce to use for the revert transaction
  • priority_fee: suggested priority fee

Sample output:

    2023-09-13T10:25:08.901399Z  INFO zksync_core::block_reverter: Last L1 batch numbers on contract: committed 1171, verified 1170, executed 1170
    {"last_executed_l1_batch_number":1170,"nonce":3867,"priority_fee":1000000000}

At this point the latest height on the contract is committed: 1171.
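The revert test parses this output programmatically; below is a minimal TypeScript sketch of such parsing. It is not the repository's actual helper; it assumes the JSON object is printed on its own line and uses the field names shown above.

// Sketch: extract the suggested values from the block_reverter stdout shown above.
interface SuggestedValues {
  lastL1BatchNumber: number;
  nonce: number;
  priorityFee: number;
}

function parseSuggestedValues(stdout: string): SuggestedValues {
  // Take the last line that looks like a JSON object, as in the sample output.
  const jsonLine = stdout
    .split('\n')
    .map((l) => l.trim())
    .filter((l) => l.startsWith('{'))
    .pop();
  if (!jsonLine) throw new Error('No JSON line found in block_reverter output');
  const parsed = JSON.parse(jsonLine);
  return {
    lastL1BatchNumber: parsed.last_executed_l1_batch_number,
    nonce: parsed.nonce,
    priorityFee: parsed.priority_fee,
  };
}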

Testing the revert

First, stop zksync_server / zksync_external_node.

Contract-side revert

Revert from 1171 to 1170:

cd $ZKSYNC_HOME && zk f cargo run --bin block_reverter --release -- send-eth-transaction --l1-batch-number 1170 --nonce 3871  --priority-fee-per-gas 1000000000

This sends a revertBlocks transaction to the contract, so there is no need to call the contract manually.
Query again:

zk f cargo run --bin block_reverter --release -- print-suggested-values --json

Now the latest height on the contract is committed: 1170.

Local database rollback

Note: zksync_server / zksync_external_node must be stopped first, otherwise the command fails with a lock error on ./db/main/tree/LOCK.

cd $ZKSYNC_HOME && zk f cargo run --bin block_reverter --release -- rollback-db --l1-batch-number 1170 --rollback-postgres --rollback-tree --rollback-sk-cache

Appendix

core/bin/zksync_core/src/bin/block_reverter.rs
Commands

  • print-suggested-values
    Display the suggested values to use.
  • send-eth-transaction
    Send the revert transaction to L1.
  • rollback-db
    Roll the internal database state back to a previous block.
  • clear-failed-transactions
    Clear failed L1 transactions.

Test case

core/tests/revert-test/tests/revert-and-restart.test.ts
const executedProcess = await utils.exec(
    'cd $ZKSYNC_HOME && ' +
        'RUST_LOG=off cargo run --bin block_reverter --release -- print-suggested-values --json'
    // ^ Switch off logs to not pollute the output JSON
);
const suggestedValuesOutput = executedProcess.stdout;
const { lastL1BatchNumber, nonce, priorityFee } = parseSuggestedValues(suggestedValuesOutput);
expect(lastL1BatchNumber < blocksCommittedBeforeRevert, 'There should be at least one block for revert').to.be
    .true;

console.log(
    `Reverting with parameters: last unreverted L1 batch number: ${lastL1BatchNumber}, nonce: ${nonce}, priorityFee: ${priorityFee}`
);

console.log('Sending ETH transaction..');
await utils.spawn(
    `cd $ZKSYNC_HOME && cargo run --bin block_reverter --release -- send-eth-transaction --l1-batch-number ${lastL1BatchNumber} --nonce ${nonce} --priority-fee-per-gas ${priorityFee}`
);

console.log('Rolling back DB..');
await utils.spawn(
    `cd $ZKSYNC_HOME && cargo run --bin block_reverter --release -- rollback-db --l1-batch-number ${lastL1BatchNumber} --rollback-postgres --rollback-tree --rollback-sk-cache`
);

let blocksCommitted = await mainContract.getTotalBlocksCommitted();
expect(blocksCommitted.eq(lastL1BatchNumber), 'Revert on contract was unsuccessful').to.be.true;

zkSync: external node fails to start after a block revert

Background

After testing the zkSync block revert described above, another node that was syncing reported the following error:

2023-09-14T12:46:52.271426230Z 2023-09-14T12:46:52.271274Z  INFO zksync_core::sync_layer::fetcher: New batch: 3205. Timestamp: 1694603829
2023-09-14T12:46:52.271447030Z 2023-09-14T12:46:52.271296Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6891 / 8176
2023-09-14T12:46:52.385460894Z 2023-09-14T12:46:52.385324Z  INFO zksync_core::consistency_checker: Batch 3195 is consistent with L1
2023-09-14T12:46:52.408615713Z 2023-09-14T12:46:52.408488Z  INFO zksync_core::consistency_checker: Checking commit tx 0xa07f…353d for batch 3196
2023-09-14T12:46:52.526929804Z 2023-09-14T12:46:52.526783Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6892 / 8176
2023-09-14T12:46:52.726526535Z thread 'tokio-runtime-worker' panicked at 'index out of bounds: the len is 1 but the index is 1', /usr/src/zksync/core/bin/zksync_core/src/consistency_checker/mod.rs:111:27
2023-09-14T12:46:52.726566126Z stack backtrace:
2023-09-14T12:46:52.744782978Z    0:     0x5637f01373ea - std::backtrace_rs::backtrace::libunwind::trace::h79937bc171ada62c
2023-09-14T12:46:52.744804819Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
2023-09-14T12:46:52.744811549Z    1:     0x5637f01373ea - std::backtrace_rs::backtrace::trace_unsynchronized::h2292bca8571cb919
2023-09-14T12:46:52.744816889Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2023-09-14T12:46:52.744822060Z    2:     0x5637f01373ea - std::sys_common::backtrace::_print_fmt::h9c461f248e4ae90d
2023-09-14T12:46:52.744827090Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/sys_common/backtrace.rs:65:5
2023-09-14T12:46:52.744832010Z    3:     0x5637f01373ea - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::he9fe6bf1a39182e1
2023-09-14T12:46:52.744837660Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/sys_common/backtrace.rs:44:22
2023-09-14T12:46:52.746411727Z    4:     0x5637f015caee - core::fmt::write::h032658c119c720d7
2023-09-14T12:46:52.746431598Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/fmt/mod.rs:1208:17
2023-09-14T12:46:52.746438008Z    5:     0x5637f0131875 - std::io::Write::write_fmt::h299fc90dfae41c0d
2023-09-14T12:46:52.746443078Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/io/mod.rs:1682:15
2023-09-14T12:46:52.746448138Z    6:     0x5637f01371b5 - std::sys_common::backtrace::_print::heb70d25df9937e3f
2023-09-14T12:46:52.746453018Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/sys_common/backtrace.rs:47:5
2023-09-14T12:46:52.746473369Z    7:     0x5637f01371b5 - std::sys_common::backtrace::print::had745c0a76b8b521
2023-09-14T12:46:52.746492989Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/sys_common/backtrace.rs:34:9
2023-09-14T12:46:52.746513250Z    8:     0x5637f0138e0f - std::panicking::default_hook::{{closure}}::h1ea782cdfa2fd097
2023-09-14T12:46:52.746530751Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:267:22
2023-09-14T12:46:52.746567952Z    9:     0x5637f0138b4b - std::panicking::default_hook::h1cc3af63455a163c
2023-09-14T12:46:52.746574652Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:286:9
2023-09-14T12:46:52.746619113Z   10:     0x5637f013951c - std::panicking::rust_panic_with_hook::h5cafdc4b3bfd5528
2023-09-14T12:46:52.746638164Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:688:13
2023-09-14T12:46:52.746644624Z   11:     0x5637f01392b9 - std::panicking::begin_panic_handler::{{closure}}::hf31c60f40775892c
2023-09-14T12:46:52.746649704Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:579:13
2023-09-14T12:46:52.746690595Z   12:     0x5637f013789c - std::sys_common::backtrace::__rust_end_short_backtrace::h28a5c7be595826cd
2023-09-14T12:46:52.746696666Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/sys_common/backtrace.rs:137:18
2023-09-14T12:46:52.746702006Z   13:     0x5637f0138fc2 - rust_begin_unwind
2023-09-14T12:46:52.746712406Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:575:5
2023-09-14T12:46:52.746735127Z   14:     0x5637ee6f3ba3 - core::panicking::panic_fmt::h8fa27a0b37dd98b7
2023-09-14T12:46:52.746740467Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:64:14
2023-09-14T12:46:52.746760237Z   15:     0x5637ee6f3cf2 - core::panicking::panic_bounds_check::hd27fa6e100ea4568
2023-09-14T12:46:52.746792558Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:147:5
2023-09-14T12:46:52.746802909Z   16:     0x5637eead1f9a - zksync_core::consistency_checker::ConsistencyChecker::run::{{closure}}::h449bcdaefdc936fd
2023-09-14T12:46:52.746839150Z   17:     0x5637eeb35af5 - tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut::h91ab53e454932f64
2023-09-14T12:46:52.746864871Z   18:     0x5637eebe0dbd - tokio::runtime::task::core::Core<T,S>::poll::h795cb0b9385c1329
2023-09-14T12:46:52.746885391Z   19:     0x5637eeb4df5e - tokio::runtime::task::harness::Harness<T,S>::poll::hc2ab6dba6d5ba4ae
2023-09-14T12:46:52.746909382Z   20:     0x5637f00eefee - tokio::runtime::scheduler::multi_thread::worker::Context::run_task::h6fb307488dc375ee
2023-09-14T12:46:52.746937683Z   21:     0x5637f00ee143 - tokio::runtime::scheduler::multi_thread::worker::Context::run::h175658cfae89590d
2023-09-14T12:46:52.746969794Z   22:     0x5637f00da9b9 - tokio::macros::scoped_tls::ScopedKey<T>::set::h2b771be14cbef94d
2023-09-14T12:46:52.746992634Z   23:     0x5637f00ede49 - tokio::runtime::scheduler::multi_thread::worker::run::ha39c9ec4dce0c89d
2023-09-14T12:46:52.747062136Z   24:     0x5637f00f8158 - tokio::runtime::task::core::Core<T,S>::poll::h6feb9b8dd5ca027e
2023-09-14T12:46:52.747090037Z   25:     0x5637f00ce6ff - tokio::runtime::task::harness::Harness<T,S>::poll::hb45689cdecd9b901
2023-09-14T12:46:52.747152079Z   26:     0x5637f00dc338 - tokio::runtime::blocking::pool::Inner::run::hc09fcd48a7633fbf
2023-09-14T12:46:52.747159409Z   27:     0x5637f00dd82a - std::sys_common::backtrace::__rust_begin_short_backtrace::hf8e4ebb56e2acb86
2023-09-14T12:46:52.747167950Z   28:     0x5637f00f050b - core::ops::function::FnOnce::call_once{{vtable.shim}}::h4834ddb4b804c368
2023-09-14T12:46:52.747235692Z   29:     0x5637f013c093 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hb77d8d72ebcf79c4
2023-09-14T12:46:52.747246432Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/alloc/src/boxed.rs:2000:9
2023-09-14T12:46:52.747251792Z   30:     0x5637f013c093 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hc08c3353e1568487
2023-09-14T12:46:52.747270633Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/alloc/src/boxed.rs:2000:9
2023-09-14T12:46:52.747282563Z   31:     0x5637f013c093 - std::sys::unix::thread::Thread::new::thread_start::h7168e596cd5e5ce6
2023-09-14T12:46:52.747294573Z                                at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/sys/unix/thread.rs:108:17
2023-09-14T12:46:52.747489009Z   32:     0x7f5cddf78fa3 - start_thread
2023-09-14T12:46:52.747671095Z   33:     0x7f5cddd2006f - clone
2023-09-14T12:46:52.747677365Z   34:                0x0 - <unknown>
2023-09-14T12:46:52.764575588Z 2023-09-14T12:46:52.764414Z  INFO zksync_core::sync_layer::fetcher: New batch: 3206. Timestamp: 1694603945
2023-09-14T12:46:52.764596038Z 2023-09-14T12:46:52.764439Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6893 / 8176
2023-09-14T12:46:53.009199118Z 2023-09-14T12:46:53.009043Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6894 / 8176
2023-09-14T12:46:53.066490673Z 2023-09-14T12:46:53.066347Z  INFO zksync_core::sync_layer::batch_status_updater: Batch 3202: committed
2023-09-14T12:46:53.066924736Z 2023-09-14T12:46:53.066789Z  INFO zksync_core::sync_layer::batch_status_updater: Commit status change: number 3202, hash 0xa07f…353d, happened at 2023-09-07 10:46:55.753857 UTC
2023-09-14T12:46:53.109595076Z 2023-09-14T12:46:53.109454Z  INFO zksync_core::sync_layer::external_io: Sealing the batch
2023-09-14T12:46:53.109620647Z 2023-09-14T12:46:53.109471Z DEBUG zksync_core::state_keeper::keeper: L1 batch #3203 should be sealed unconditionally as per sealing rules
2023-09-14T12:46:53.113694988Z 2023-09-14T12:46:53.113561Z  INFO zksync_core::state_keeper::io::seal_logic: Sealing miniblock 6888 (L1 batch 3203) with 0 (0 L2 + 0 L1) txs, 1 events, 13 reads, 4 writes
2023-09-14T12:46:53.157928775Z 2023-09-14T12:46:53.157781Z DEBUG zksync_core::state_keeper::io::seal_logic: miniblock execution stage insert_storage_logs took 42.366601ms with count Some(4)
2023-09-14T12:46:53.161347507Z 2023-09-14T12:46:53.161215Z DEBUG zksync_core::state_keeper::io::seal_logic: sealed miniblock 6888 in 47.653779ms
2023-09-14T12:46:53.161386708Z 2023-09-14T12:46:53.161237Z DEBUG zksync_core::state_keeper::io::seal_logic: L1 batch execution stage fictive_miniblock took 47.68931ms with count None
2023-09-14T12:46:53.161400299Z 2023-09-14T12:46:53.161341Z  INFO zksync_core::state_keeper::io::seal_logic: Sealing L1 batch 3203 with 1 (1 L2 + 0 L1) txs, 1 l2_l1_logs, 4 events, 90 reads (21 deduped), 18 writes (5 deduped)
2023-09-14T12:46:53.209996525Z 2023-09-14T12:46:53.209850Z DEBUG zksync_core::state_keeper::io::seal_logic: L1 batch execution stage insert_protective_reads took 44.407542ms with count Some(21)
2023-09-14T12:46:53.211452098Z 2023-09-14T12:46:53.211323Z DEBUG zksync_core::state_keeper::io::seal_logic: sealed l1 batch 3203 in 98.10967ms
2023-09-14T12:46:53.211476579Z 2023-09-14T12:46:53.211362Z  INFO zksync_core::sync_layer::external_io: Batch 3203 is sealed
2023-09-14T12:46:53.211484819Z 2023-09-14T12:46:53.211377Z DEBUG zksync_core::sync_layer::external_io: Waiting for the new batch params
2023-09-14T12:46:53.211490459Z 2023-09-14T12:46:53.211385Z  INFO zksync_core::sync_layer::external_io: Getting previous L1 batch hash
2023-09-14T12:46:53.240026509Z 2023-09-14T12:46:53.239805Z  INFO zksync_core::sync_layer::fetcher: New batch: 3207. Timestamp: 1694604066
2023-09-14T12:46:53.240053310Z 2023-09-14T12:46:53.239824Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6895 / 8176
2023-09-14T12:46:53.263598861Z 2023-09-14T12:46:53.263459Z  INFO zksync_core::metadata_calculator::updater: Loading blocks with numbers 3203..=3203 to update Merkle tree
2023-09-14T12:46:53.267278250Z 2023-09-14T12:46:53.267144Z  INFO zksync_core::metadata_calculator::updater: Processing L1 batches #3203..=3203 with 26 total logs
2023-09-14T12:46:53.267494317Z 2023-09-14T12:46:53.267361Z  INFO zksync_merkle_tree::domain: Extending Merkle tree with batch #3203 with 26 ops in full mode
2023-09-14T12:46:53.273107584Z 2023-09-14T12:46:53.272966Z  INFO zksync_merkle_tree::domain: Processed batch #3203; root hash is 0x7f78…d4a0, 9078 leaves in total, 1 initial writes, 4 repeated writes
2023-09-14T12:46:53.275036721Z 2023-09-14T12:46:53.274902Z  INFO zksync_core::metadata_calculator::updater: Saved witnesses for L1 batch #3203 to object storage at `merkel_tree_paths_3203.bin`
2023-09-14T12:46:53.276841885Z 2023-09-14T12:46:53.276649Z  INFO zksync_core::metadata_calculator::updater: Updated metadata for L1 batch #3203 in Postgres
2023-09-14T12:46:53.276882016Z 2023-09-14T12:46:53.276733Z  INFO zksync_merkle_tree::domain: Flushing L1 batches #[3203] to RocksDB
2023-09-14T12:46:53.277228036Z 2023-09-14T12:46:53.277107Z  INFO zksync_core::metadata_calculator::metrics: L1 batches #3203..=3203 processed in tree
2023-09-14T12:46:53.314666571Z 2023-09-14T12:46:53.314511Z  INFO zksync_core::sync_layer::external_io: Previous L1 batch hash: 57655874197922256061346220318188548076559386516802029495849130083054353765536
2023-09-14T12:46:53.340176950Z 2023-09-14T12:46:53.340044Z DEBUG zksync_state::rocksdb: loading storage for l1 batch number 3203
2023-09-14T12:46:53.340206721Z 2023-09-14T12:46:53.340091Z DEBUG zksync_state::rocksdb: loading state changes for l1 batch 3203
2023-09-14T12:46:53.341542481Z 2023-09-14T12:46:53.341421Z DEBUG zksync_state::rocksdb: loading factory deps for l1 batch 3203
2023-09-14T12:46:53.342740826Z 2023-09-14T12:46:53.342505Z  INFO zksync_core::state_keeper::batch_executor: Secondary storage for batch 3204 initialized, size is 9101
2023-09-14T12:46:53.342764707Z 2023-09-14T12:46:53.342543Z DEBUG zksync_core::sync_layer::external_io: Waiting for the new tx, next action is Some(Tx(Transaction(0x5f4f2a2fb69447e75db65b222e50f30a0b439c882223ce6e890a8cf5c46d55ff)))
2023-09-14T12:46:53.342772307Z 2023-09-14T12:46:53.342578Z  INFO zksync_core::state_keeper::batch_executor: Starting executing batch #3204
2023-09-14T12:46:53.352470206Z 2023-09-14T12:46:53.352342Z  INFO zksync_core::sync_layer::external_io: Sealing miniblock
2023-09-14T12:46:53.352495827Z 2023-09-14T12:46:53.352360Z DEBUG zksync_core::state_keeper::keeper: Miniblock #6889 (L1 batch #3204) should be sealed as per sealing rules
2023-09-14T12:46:53.355504267Z 2023-09-14T12:46:53.355373Z  INFO zksync_core::state_keeper::io::seal_logic: Sealing miniblock 6889 (L1 batch 3204) with 1 (1 L2 + 0 L1) txs, 3 events, 77 reads, 14 writes
2023-09-14T12:46:53.401962459Z 2023-09-14T12:46:53.401823Z DEBUG zksync_core::state_keeper::io::seal_logic: miniblock execution stage insert_storage_logs took 42.33107ms with count Some(14)
2023-09-14T12:46:53.405522575Z 2023-09-14T12:46:53.405396Z DEBUG zksync_core::state_keeper::io::seal_logic: sealed miniblock 6889 in 50.024698ms
2023-09-14T12:46:53.406345130Z 2023-09-14T12:46:53.406143Z  INFO zksync_core::sync_layer::external_io: Miniblock 6890 is sealed
2023-09-14T12:46:53.406393921Z 2023-09-14T12:46:53.406163Z DEBUG zksync_core::state_keeper::keeper: Initialized new miniblock #6890 (L1 batch #3204) with timestamp 2023-09-13 11:16:06 UTC
2023-09-14T12:46:53.406401451Z 2023-09-14T12:46:53.406180Z DEBUG zksync_core::sync_layer::external_io: Waiting for the new tx, next action is Some(SealBatch)
2023-09-14T12:46:53.472626763Z 2023-09-14T12:46:53.472431Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6896 / 8176
2023-09-14T12:46:53.708788983Z 2023-09-14T12:46:53.708623Z  INFO zksync_core::sync_layer::fetcher: New batch: 3208. Timestamp: 1694604126
2023-09-14T12:46:53.708815224Z 2023-09-14T12:46:53.708642Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6897 / 8176
2023-09-14T12:46:53.903700525Z 2023-09-14T12:46:53.903557Z  INFO zksync_external_node: initialized ETH-TxManager in 4.725550326s
2023-09-14T12:46:53.913371443Z 2023-09-14T12:46:53.913245Z  INFO zksync_external_node: initializing Opside send plug
2023-09-14T12:46:53.936266844Z 2023-09-14T12:46:53.936049Z  INFO zksync_external_node: initialized Opside send plug4.758046524s
2023-09-14T12:46:53.936291305Z 2023-09-14T12:46:53.936056Z  INFO zksync_external_node: initializing Opside manage plug
2023-09-14T12:46:53.936298395Z 2023-09-14T12:46:53.936070Z  INFO zksync_core::eth_sender::opside_send_plug::send_proof_plug: start handle_caches_event loop
2023-09-14T12:46:53.936304385Z 2023-09-14T12:46:53.936109Z  INFO zksync_core::eth_sender::opside_send_plug::send_proof_plug: start handle_monit_tx_event_loop loop
2023-09-14T12:46:53.936309966Z 2023-09-14T12:46:53.936112Z  INFO zksync_core::eth_sender::opside_send_plug::send_proof_plug: handle_chans_event: start handle_chans_event loop
2023-09-14T12:46:53.943476709Z 2023-09-14T12:46:53.943336Z  INFO zksync_core::eth_sender::opside_send_plug::manage_proof_plug: manage proof start try_fetch_proof_to_send
2023-09-14T12:46:53.943501999Z 2023-09-14T12:46:53.943373Z  INFO zksync_core::eth_sender::opside_send_plug::manage_proof_plug: manage proof: process_resend wait for start signal
2023-09-14T12:46:53.943512100Z 2023-09-14T12:46:53.943444Z  INFO zksync_core::eth_sender::opside_send_plug::send_proof_plug: start into handle history proof txs
2023-09-14T12:46:53.944207140Z 2023-09-14T12:46:53.943940Z  INFO zksync_utils::wait_for_tasks: One of the tokio actors unexpectedly finished with error: index out of bounds: the len is 1 but the index is 1
2023-09-14T12:46:53.944242241Z 2023-09-14T12:46:53.943968Z  INFO zksync_storage::db: Waiting for all the RocksDB instances to be dropped, 2 remaining
2023-09-14T12:46:53.944249652Z 2023-09-14T12:46:53.943986Z  INFO zksync_core::metadata_calculator::updater: Stop signal received, metadata_calculator is shutting down
2023-09-14T12:46:53.944290583Z 2023-09-14T12:46:53.944210Z  INFO zksync_core::api_server::web3: Stop signal received, web3 HTTP JSON RPC API is shutting down
2023-09-14T12:46:53.944302103Z 2023-09-14T12:46:53.944224Z  INFO zksync_core::api_server::web3: Stop signal received, WS JSON RPC API is shutting down
2023-09-14T12:46:53.945073626Z 2023-09-14T12:46:53.944947Z  INFO zksync_storage::db: Waiting for all the RocksDB instances to be dropped, 1 remaining
2023-09-14T12:46:53.945101567Z 2023-09-14T12:46:53.945013Z  INFO zksync_core::sync_layer::fetcher: New miniblock: 6898 / 8176
2023-09-14T12:46:53.945111027Z 2023-09-14T12:46:53.945033Z  INFO zksync_core::sync_layer::fetcher: Stop signal received, exiting the fetcher routine
2023-09-14T12:46:53.949720505Z 2023-09-14T12:46:53.949609Z  INFO zksync_core::eth_sender::opside_send_plug::send_proof_plug: finish handle history proof txs
2023-09-14T12:46:53.949748195Z 2023-09-14T12:46:53.949638Z  INFO zksync_core::eth_sender::opside_send_plug::send_proof_plug: Stop signal received, eth_tx_aggregator is shutting down
2023-09-14T12:46:54.135835514Z 2023-09-14T12:46:54.135694Z  INFO zksync_core::api_server::web3::pubsub_notifier: Stop signal received, pubsub_tx_notifier is shutting down
2023-09-14T12:46:54.136480683Z 2023-09-14T12:46:54.136390Z  INFO zksync_core::api_server::web3::pubsub_notifier: Stop signal received, pubsub_block_notifier is shutting down
2023-09-14T12:46:54.141903295Z 2023-09-14T12:46:54.141774Z  INFO zksync_core::api_server::web3::pubsub_notifier: Stop signal received, pubsub_logs_notifier is shutting down
2023-09-14T12:46:54.218539427Z 2023-09-14T12:46:54.218338Z  INFO zksync_core::eth_sender::eth_tx_manager: Stop signal received, eth_tx_manager is shutting down
2023-09-14T12:46:54.358294949Z 2023-09-14T12:46:54.358130Z  INFO zksync_core::l1_gas_price::main_node_fetcher: Stop signal received, MainNodeGasPriceFetcher is shutting down
2023-09-14T12:46:54.417717566Z 2023-09-14T12:46:54.417570Z  INFO zksync_core::state_keeper::keeper: Stop signal received, state keeper is shutting down
2023-09-14T12:46:54.417754358Z 2023-09-14T12:46:54.417607Z  INFO zksync_core::state_keeper::batch_executor: State keeper exited with an unfinished batch
2023-09-14T12:46:54.420950213Z 2023-09-14T12:46:54.420815Z  INFO zksync_storage::db: All the RocksDB instances are dropped
2023-09-14T12:46:55.500606305Z 2023-09-14T12:46:55.500432Z  INFO zksync_core::eth_sender::opside_send_plug::manage_proof_plug: get proofBatchNumFinal == 0 &&  proofHashBatchNumFinal == 0 not do commit pending proof
2023-09-14T12:46:55.500631526Z 2023-09-14T12:46:55.500460Z  INFO zksync_core::eth_sender::opside_send_plug::manage_proof_plug: manage proof start try_fetch_proof_to_

Key error message

2023-09-14T12:46:52.726526535Z thread 'tokio-runtime-worker' panicked at 'index out of bounds: the len is 1 but the index is 1', /usr/src/zksync/core/bin/zksync_core/src/consistency_checker/mod.rs:111:27

Relevant code analysis

 async fn check_commitments(&self, batch_number: L1BatchNumber) -> Result<bool, error::Error> {
        let mut storage = self.db.access_storage().await;

        let storage_block = storage
            .blocks_dal()
            .get_storage_block(batch_number)
            .await
            .unwrap_or_else(|| panic!("Block {} not found in the database", batch_number));

        let commit_tx_id = storage_block
            .eth_commit_tx_id
            .unwrap_or_else(|| panic!("Block commit tx not found for block {}", batch_number))
            as u32;

        let block_metadata = storage
            .blocks_dal()
            .get_block_with_metadata(storage_block)
            .await
            .unwrap_or_else(|| {
                panic!(
                    "Block metadata for block {} not found in the database",
                    batch_number
                )
            });

        let commit_tx_hash = storage
            .eth_sender_dal()
            .get_confirmed_tx_hash_by_eth_tx_id(commit_tx_id)
            .await
            .unwrap_or_else(|| {
                panic!(
                    "Commit tx hash not found in the database. Commit tx id: {}",
                    commit_tx_id
                )
            });

        vlog::info!(
            "Checking commit tx {} for batch {}",
            commit_tx_hash,
            batch_number.0
        );

        // we can't get tx calldata from db because it can be fake
        let commit_tx = self
            .web3
            .eth()
            .transaction(TransactionId::Hash(commit_tx_hash))
            .await?
            .expect("Commit tx not found on L1");

        let commit_tx_status = self
            .web3
            .eth()
            .transaction_receipt(commit_tx_hash)
            .await?
            .expect("Commit tx receipt not found on L1")
            .status;

        assert_eq!(
            commit_tx_status,
            Some(1.into()),
            "Main node gave us a failed commit tx"
        );

        let commitments = self
            .contract
            .function("commitBlocks")
            .unwrap()
            .decode_input(&commit_tx.input.0[4..])
            .unwrap()
            .pop()
            .unwrap()
            .into_array()
            .unwrap();

        // Commit transactions usually publish multiple commitments at once, so we need to find
        // the one that corresponds to the batch we're checking.
        let first_batch_number = match &commitments[0] {
            ethabi::Token::Tuple(tuple) => tuple[0].clone().into_uint().unwrap().as_usize(),
            _ => panic!("ABI does not match the commitBlocks() function on the zkSync contract"),
        };
        let commitment = &commitments[batch_number.0 as usize - first_batch_number];

        Ok(commitment == &block_metadata.l1_commit_data())
    }

Given an L1BatchNumber, check_commitments does the following:

  • Fetch the corresponding block from l1_batches and read its eth_commit_tx_id field -> commit_tx_id
  • Load the block metadata (from factory_deps and related data) -> block_metadata
  • Look up commit_tx_id in the eth_txs_history table -> commit_tx_hash -> 0xa07f1bfcb6826c3db3b7d6cc3d69db47e3ceb17008c15dcd3bd1e7ceb10b353d
  • Query commit_tx_hash via the eth_getTransactionByHash RPC -> commit_tx
  • Query commit_tx_hash via the eth_getTransactionReceipt RPC -> commit_tx_status
  • Check that commit_tx_status indicates success
  • ABI-decode commit_tx.input with the commitBlocks function -> commitments
  • Read commitments[0] -> first_batch_number
  • Index into commitments at position batch_number - first_batch_number

This is where the panic 'index out of bounds: the len is 1 but the index is 1' is raised: the computed offset is 1.
Since batch_number == 3196, first_batch_number is 3195, yet commitments contains only a single entry, so indexing at offset 1 goes out of bounds.
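To make the failure mode concrete, the following TypeScript sketch models the indexing logic. The Commitment shape and the values are hypothetical stand-ins for the decoded commitBlocks() argument; the point is that a single-element commitment array combined with an offset of 1 reproduces exactly the panic above.

// Sketch of the indexing logic in ConsistencyChecker::check_commitments, modelled in TypeScript.
interface Commitment {
  batchNumber: number;
  commitData: string;
}

function findCommitment(commitments: Commitment[], batchNumber: number): Commitment {
  const firstBatchNumber = commitments[0].batchNumber;
  const index = batchNumber - firstBatchNumber;
  if (index < 0 || index >= commitments.length) {
    // This is the situation from the log: the len is 1 but the index is 1.
    throw new Error(
      `index out of bounds: the len is ${commitments.length} but the index is ${index}`
    );
  }
  return commitments[index];
}

// Hypothetical data: the decoded commit tx contains one commitment starting at batch 3195,
// while the batch being checked is 3196.
const commitments: Commitment[] = [{ batchNumber: 3195, commitData: '0x...' }];
try {
  findCommitment(commitments, 3196);
} catch (err) {
  console.log((err as Error).message); // "index out of bounds: the len is 1 but the index is 1"
}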

// TODO

zkSync: optimizing L1 transaction submission and processing

Problem description

After the issue of proof data not being submitted to L1 for a long time was resolved, we found that the sequencer service sends commitBlocks far faster than it submits proof hashes and proveBlocks, so the gap between last_verified_batch and sequenced_batch keeps growing.

Diagnosis

Because submitting proof hashes and proveBlocks involves a transaction-confirmation waiting period, first review the transaction-status handling logic, look at how each stage can be sped up, and reduce the rate at which pressure data (commitBlocks) is produced, so that the system reaches a stable equilibrium.

Reduce the rate at which pressure data (commitBlocks) is produced

As a first measure, reduce how quickly the sequencer adds new commit data:

CHAIN_STATE_KEEPER_TRANSACTION_SLOTS=500
CHAIN_STATE_KEEPER_BLOCK_COMMIT_DEADLINE_MS=300000

That is, a batch is committed at most once every 5 minutes (300000 ms).

Indexing EVM-based blockchains

TrueBlocks is an open-source blockchain data indexer that can provide data services for any EVM chain.

Website: https://trueblocks.io/
GitHub: https://github.com/trueblocks/trueblocks-core
API: https://trueblocks.io/api/
Usage guide and detailed introduction: https://learnblockchain.cn/article/6467

Polygon zkEVM bridge technical documentation

1. Introduction

Blockchain interoperability is the ability of chain A and chain B to exchange data. In recent years the blockchain ecosystem has expanded rapidly and a large number of networks with different properties have appeared, so interoperability has become an important design consideration. Without it, a network risks being isolated from the wider ecosystem, which has motivated projects to research and build interoperability solutions, each with different trade-offs and underlying technology. This document describes the solution provided by the Polygon team, which gives the Polygon zkEVM L2 network native interoperability.

The bridge is the infrastructure component that allows asset migration and communication between L1 and L2. From the user's point of view, the bridge transfers an asset from network A to network B without changing its amount or functionality; the bridge can also send data payloads between networks (cross-chain messaging).

In an L2 rollup such as Polygon zkEVM, both the L2 state transitions and the data availability of transactions are guaranteed by L1 contracts. Therefore, if the L2 architecture is designed correctly, the two ends of the bridge can be kept in sync using contract logic alone, without a trusted off-chain relayer. Note that this bridge design must include corresponding mechanisms at the L2 layer.

As shown in Figure 1, the bridging interface consists of the bridge contracts deployed on the L1 and L2 networks, which users can use to:

  1. Bridge assets (1): lock an asset on the "origin" network; when it is claimed in the bridge contract of the "destination" network, a representative token of that asset is minted.
  2. The inverse operation (2): burn the representative token of an asset, then unlock the original asset on the "origin" network.
  3. Cross-chain communication channel (3): send data payloads between L1 and L2.

2. Exit Merkle trees

The bridge includes a Merkle tree called the Global Exit Merkle Tree (GEMT). Each leaf of the GEMT represents the Exit Merkle Tree (EMT) of a specific network. The GEMT has only 2 leaves: one is the L1 EMT root and the other is the L2 EMT root. The structure of the GEMT is shown in Figure 2.

The GEMT is a regular Merkle tree with a fixed number of 2 leaves, whereas each EMT is an append-only sparse Merkle tree (SMT) with a fixed depth (32 in the Polygon zkEVM design). SMTs are widely used Merkle trees that can be handled efficiently on-chain; see Appendix A for details.

Each leaf of a network's EMT represents the intent to bridge an asset (or its representative token) out of that network, or to send a message out of it. Each EMT leaf is the keccak256 hash of the abi-encoded-packed structure of the following parameters (a worked sketch follows this list):

  1. uint8 leafType: 0 means asset, 1 means message.
  2. uint32 originNetwork: the Origin Network ID the original asset belongs to.
  3. address originAddress: if leafType = 0, the origin-network token address, with "0x000…0000" reserved as the ether token address; if leafType = 1, the msg.sender of the message.
  4. uint32 destinationNetwork: the network ID of the bridging destination.
  5. address destinationAddress: the recipient address that receives the bridged asset on the destination network.
  6. uint256 amount: the amount of tokens or ether being bridged.
  7. bytes32 metadataHash: the hash of the metadata. The metadata contains the information of the transferred asset or of the transferred message payload.
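As a worked illustration of this leaf encoding, here is a minimal TypeScript sketch (assuming ethers v5; all concrete values below are made up, and the field order follows the list above):

import { BigNumber, ethers } from 'ethers';

// Sketch: compute an EMT leaf hash as keccak256 over the packed encoding of the seven fields.
function getLeafValue(
  leafType: number,          // 0 = asset, 1 = message
  originNetwork: number,
  originAddress: string,
  destinationNetwork: number,
  destinationAddress: string,
  amount: BigNumber,
  metadataHash: string       // bytes32
): string {
  return ethers.utils.solidityKeccak256(
    ['uint8', 'uint32', 'address', 'uint32', 'address', 'uint256', 'bytes32'],
    [leafType, originNetwork, originAddress, destinationNetwork, destinationAddress, amount, metadataHash]
  );
}

// Hypothetical example: bridging 1 ether from network 0 (L1) to network 1 (L2).
const leaf = getLeafValue(
  0,
  0,
  ethers.constants.AddressZero,              // ether is represented by the zero address
  1,
  '0x1111111111111111111111111111111111111111',
  ethers.utils.parseEther('1'),
  ethers.constants.HashZero                  // ether carries no metadata
);
console.log(leaf);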

Once a leaf is added to an EMT, the new EMT root and then the new GEMT root are computed. The GEMT root is synchronized across the networks, so that leaf inclusion can be proven on the other network and the bridge operation completed there.

3. Contract architecture

Most bridge architectures are implemented with smart contracts on both networks. However, to keep the GEMT synchronized on both sides, part of the bridge logic must be integrated with the L2 state-management architecture. Understanding this bridge design therefore also requires the off-chain actors involved in L2 state management, such as the Sequencer, the Aggregator, and the PolygonZkEVM.sol contract.
In addition, the bridge architecture contains the following elements:

  1. PolygonZkEVMBridge.sol contract: the bridging interface that lets users interact with the bridge and perform bridgeAsset / bridgeMessage / claimAsset / claimMessage operations. Each network has its own PolygonZkEVMBridge.sol contract, which manages that network's EMT.
  2. PolygonZkEVMGlobalExitRoot.sol contract: manages the GEMT and acts as its historical repository. Specifically, it:
    2.1 stores the GEMT,
    2.2 computes a new GEMT root every time the PolygonZkEVM.sol contract updates an EMT (in practice, the L2 EMT),
    2.3 computes a new GEMT root every time the L1 PolygonZkEVMBridge.sol contract updates an EMT (in practice, the L1 EMT).
  3. PolygonZkEVMGlobalExitRootL2.sol contract: a special contract used to synchronize the GEMT and the L2 EMT root across networks:
    3.1 it has storage slots that store the GEMT roots and the L2 EMT root.
    3.2 its special property is that the underlying zero-knowledge proving/verification system can access those storage slots directly, guaranteeing the validity of the GEMT synchronized from L1 to L2 (L1 -> L2) and of the L2 EMT synchronized from L2 to L1 (L2 -> L1).

3.1 Bridging data flow


Figure 3 shows the detailed bridge architecture and how the two networks interact to reach finality of a bridge operation. There are two kinds of bridge operations:

  1. L1 -> L2 bridge operations
  2. L2 -> L1 bridge operations

3.1.1 L1 -> L2 bridge operations


The basic flow of an L1 -> L2 bridge operation is:

  • (1) The user calls bridgeAsset/bridgeMessage on the L1 PolygonZkEVMBridge.sol contract. The internal implementation differs depending on the asset type. If the bridge request is valid, the contract fills a new L1 EMT leaf from the request's attributes, appends the leaf to the EMT, and computes the new L1 EMT root.
  • (2) In the same L1 transaction as bridgeAsset/bridgeMessage, the L1 PolygonZkEVMBridge.sol contract calls the L1 PolygonZkEVMGlobalExitRoot.sol contract to record the new L1 EMT root and compute the new GEMT root.
  • (3) The L2 Sequencer fetches the new GEMT root from the L1 PolygonZkEVMGlobalExitRoot.sol contract.
  • (4) At the start of transaction batch execution, the L2 Sequencer stores the new GEMT root in the special storage slots of the L2 PolygonZkEVMGlobalExitRootL2.sol contract, making it accessible to L2 users.
  • (5)&(6) To complete the bridging flow, the user must call claimAsset/claimMessage on the L2 PolygonZkEVMBridge.sol contract and supply the Merkle inclusion proof of the leaf previously added to the EMT. The L2 PolygonZkEVMBridge.sol contract fetches the GEMT root from the L2 PolygonZkEVMGlobalExitRootL2.sol contract and verifies the inclusion proof. If the proof is valid, the L2 PolygonZkEVMBridge.sol contract completes the bridging flow according to the bridged asset type; if it is invalid, the transaction is reverted.

3.1.2 L2 -> L1 bridge operations


The basic flow of an L2 -> L1 bridge operation is:

  • (1) The user calls bridgeAsset/bridgeMessage on the L2 PolygonZkEVMBridge.sol contract. The internal implementation differs depending on the asset type. If the bridge request is valid, the contract fills a new L2 EMT leaf from the request's attributes, appends the leaf to the EMT, and computes the new L2 EMT root.
  • (2) In the same L2 transaction as bridgeAsset/bridgeMessage, the L2 PolygonZkEVMBridge.sol contract calls the L2 PolygonZkEVMGlobalExitRootL2.sol contract to record the new L2 EMT root and compute the new GEMT root.
  • (3) The Aggregator generates a zero-knowledge proof of computational integrity for the execution of the sequence of batches that includes this L2 bridge transaction. Through that execution, the new L2 EMT root can be obtained from the L2 state.
  • (4) The Aggregator submits the new L2 EMT root, together with the corresponding ZKP, to the L1 PolygonZkEVM.sol contract.
  • (5) The L1 PolygonZkEVM.sol contract verifies the ZKP; if it is valid, it calls the L1 PolygonZkEVMGlobalExitRoot.sol contract to record the new L2 EMT root and compute the new GEMT root.
  • (6)&(7) To complete the bridging flow, the user must call claimAsset/claimMessage on the L1 PolygonZkEVMBridge.sol contract and supply the Merkle inclusion proof of the leaf previously added to the EMT. The L1 PolygonZkEVMBridge.sol contract fetches the GEMT root from the L1 PolygonZkEVMGlobalExitRoot.sol contract and verifies the inclusion proof. If the proof is valid, the L1 PolygonZkEVMBridge.sol contract completes the bridging flow according to the bridged asset type; if it is invalid, the transaction is reverted.

3.2 The L1/L2 PolygonZkEVMBridge.sol contract

PolygonZkEVMBridge.sol is the bridging interface for the users of a given network, so there is one PolygonZkEVMBridge.sol contract on each network.

The PolygonZkEVMBridge.sol contract has:

  • the storage slots needed to maintain the network's EMT
  • the functions users interact with

PolygonZkEVMBridge.sol currently has two bridge functions:

  • 1) the bridgeAsset function
  • 2) the bridgeMessage function

By default, accounts on the L2 network have no ether to pay transaction fees. When claiming an asset or message that originates from L1, the L2 claiming transaction that calls the claiming function of the L2 bridge contract can be sent without paying gas, sponsored by the Polygon zkEVM protocol.

PolygonZkEVMBridge.sol currently has two claiming functions:

  • 1) the claimAsset function
  • 2) the claimMessage function

3.2.1 The bridgeAsset function

The bridgeAsset function is used to transfer an asset to another network:

function bridgeAsset(
    uint32 destinationNetwork,
    address destinationAddress,
    uint256 amount,
    address token,
    bool forceUpdateGlobalExitRoot,
    bytes calldata permitData
)

The bridgeAsset parameters are (a call sketch follows this list):

  • token: the ERC-20 token address on the origin network; if it is "0x0000…0000", the user wants to transfer ether.
  • destinationNetwork: the network ID of the destination network; it must be different from the network ID of the network this function is called on, otherwise the transaction is reverted.
  • destinationAddress: the recipient address that receives the bridged tokens on the destination network.
  • amount: the amount of tokens to bridge.
  • forceUpdateGlobalExitRoot: flag indicating whether the new global exit root should be updated.
  • permitData: signed permit data for ERC-20 tokens with the EIP-2612 permit extension, used to change an account's ERC-20 allowance and allow the bridge contract to transfer the bridged tokens to itself.
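As an illustration only, here is a minimal TypeScript sketch of bridging ether (assuming ethers v5; the ABI fragment is derived from the signature shown above, and the RPC URL, private key, bridge address and network IDs are placeholders, not real deployments):

import { ethers } from 'ethers';

// Sketch: bridge 0.5 ether via PolygonZkEVMBridge.bridgeAsset.
const bridgeAbi = [
  'function bridgeAsset(uint32 destinationNetwork, address destinationAddress, uint256 amount, address token, bool forceUpdateGlobalExitRoot, bytes permitData) payable',
];

async function bridgeEtherToL2(): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');
  const wallet = new ethers.Wallet('0x' + '11'.repeat(32), provider);
  const bridge = new ethers.Contract('0x' + '22'.repeat(20), bridgeAbi, wallet);

  const amount = ethers.utils.parseEther('0.5');
  const tx = await bridge.bridgeAsset(
    1,                               // destinationNetwork: placeholder L2 network ID
    wallet.address,                  // destinationAddress
    amount,
    ethers.constants.AddressZero,    // token = 0x000…000 means ether
    true,                            // forceUpdateGlobalExitRoot
    '0x',                            // permitData: empty when bridging ether
    { value: amount }                // msg.value must match amount when bridging ether
  );
  await tx.wait();
}

bridgeEtherToL2().catch(console.error);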


There are three types of bridged assets, and bridgeAsset accordingly has three possible execution flows:

  • (1) The bridged asset is ether.

    • The token parameter of bridgeAsset is "0x0000…0000".
    • The transaction's msg.value must match the amount parameter.
    • To complete the bridge operation, the corresponding amount of ether is locked in the bridge contract. Note that L1 and L2 ether are treated identically and exchanged 1:1; ether is the native token of both the L1 and L2 networks and is used to pay gas.
    • Since ether originates from L1, the originNetwork parameter of the leaf is set to the L1 network ID.
  • (2) The bridged asset is a representative ERC-20 token of an ERC-20 token from another network.

    • Representative ERC-20 tokens are managed (minted and burned) by PolygonZkEVMBridge.sol.
    • The bridge contract keeps a map named wrappedTokenToTokenInfo that records the representative ERC-20 token contracts deployed on this network.
    • For each deployed representative token contract, the wrappedTokenToTokenInfo map is keyed by the representative token contract address, and the value is a TokenInformation struct:
      // Wrapped Token information struct
      struct TokenInformation {
          uint32 originNetwork;
          address originTokenAddress;
      }
    • If the token parameter of bridgeAsset is a key of the wrappedTokenToTokenInfo map, the bridged token is a representative ERC-20 token of an ERC-20 token from another network.
    • To complete the bridge operation, the number of tokens given by the amount parameter is burned; the bridge contract is entitled to burn them without the user's approval. The originAddress and originNetwork parameters of the leaf are taken from the corresponding value in the wrappedTokenToTokenInfo map.
  • (3) The bridged asset is an ERC-20 token native to this network.

    • To complete the bridge operation, the number of tokens given by the amount parameter is locked in the bridge contract.
    • For the bridge contract to be able to transfer that amount to itself, it must have an allowance of at least the amount the user is bridging.
    • If the token contract supports the EIP-2612 permit extension, the allowance can be granted in the same transaction via the signed permitData parameter.
    • The originNetwork and originAddress parameters of the leaf are, respectively, the current network ID and the ERC-20 token contract address.
    • The metadataHash parameter of the leaf is computed as
      metadataHash = keccak256(
        abi.encode(
            IERC20MetadataUpgradeable(token).name(),
            IERC20MetadataUpgradeable(token).symbol(),
            IERC20MetadataUpgradeable(token).decimals()
        )
      )

      Correspondingly, the metadata in the bridgeAsset function is built as (a TypeScript equivalent follows this list):

      // Encode metadata
      metadata = abi.encode(
        _safeName(token),
        _safeSymbol(token),
        _safeDecimals(token)
      );

      The final steps are the same regardless of the asset type. The remaining leaf parameters are:

  • leafType parameter: set to 0, meaning asset.

  • destinationNetwork and destinationAddress parameters: taken from the corresponding bridgeAsset arguments.
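For reference, a minimal TypeScript sketch of the same metadata / metadataHash computation (assuming ethers v5; the token name, symbol and decimals below are example values):

import { ethers } from 'ethers';

// Sketch: off-chain equivalent of the metadata / metadataHash computation shown above,
// for an ERC-20 token native to the current network.
const name = 'Example Token';
const symbol = 'EXT';
const decimals = 18;

// metadata = abi.encode(name, symbol, decimals)
const metadata = ethers.utils.defaultAbiCoder.encode(
  ['string', 'string', 'uint8'],
  [name, symbol, decimals]
);

// metadataHash = keccak256(metadata)
const metadataHash = ethers.utils.keccak256(metadata);

console.log({ metadata, metadataHash });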

During the bridgeAsset flow:

  • a BridgeEvent event containing all the information of the new leaf is emitted
  • the new leaf is appended to the EMT
  • the GEMT contract is called to update the new EMT root.

3.2.2 The bridgeMessage function

The bridgeMessage function is used to send a message to another network:

function bridgeMessage(
    uint32 destinationNetwork,
    address destinationAddress,
    bool forceUpdateGlobalExitRoot,
    bytes calldata metadata
)

The bridgeMessage parameters are:

  • destinationNetwork: the network ID of the destination network; it must be different from the network ID of the network this function is called on, otherwise the transaction is reverted.
  • destinationAddress: the address that receives the bridged message on the destination network.
  • forceUpdateGlobalExitRoot: flag indicating whether the new global exit root should be updated.
  • metadata: the message payload.

The bridgeMessage function will:

  • directly emit a BridgeEvent event
  • append a new leaf to the EMT
  • like bridgeAsset, call the GEMT contract to update the new EMT root

The main differences between bridgeMessage and bridgeAsset are (a call sketch follows this list):

  • the leafType parameter of the created leaf is 1
  • the originAddress and metadataHash parameters of the leaf are, respectively, the msg.sender value and the hash of the message payload.
  • the user can also bridge ether together with the message. The amount of bridged ether is added as the msg.value of the bridgeMessage call transaction; when the message is received on the destination network, the corresponding ether is delivered via destinationAddress.call{value: amount}.
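A minimal TypeScript sketch of a bridgeMessage call that also carries ether (assuming ethers v5; the ABI fragment mirrors the signature above, and all addresses, IDs and values are placeholders):

import { ethers } from 'ethers';

// Sketch: send a message plus 0.1 ether via PolygonZkEVMBridge.bridgeMessage.
const bridgeAbi = [
  'function bridgeMessage(uint32 destinationNetwork, address destinationAddress, bool forceUpdateGlobalExitRoot, bytes metadata) payable',
];

async function bridgeMessageWithEther(): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');
  const wallet = new ethers.Wallet('0x' + '11'.repeat(32), provider);
  const bridge = new ethers.Contract('0x' + '22'.repeat(20), bridgeAbi, wallet);

  // Arbitrary message payload; the receiving contract decodes it in onMessageReceived.
  const payload = ethers.utils.defaultAbiCoder.encode(['string'], ['hello from L1']);

  const tx = await bridge.bridgeMessage(
    1,                                          // destinationNetwork (placeholder)
    '0x' + '33'.repeat(20),                     // destinationAddress: receiver contract on the other network
    true,                                       // forceUpdateGlobalExitRoot
    payload,
    { value: ethers.utils.parseEther('0.1') }   // optional ether carried with the message
  );
  await tx.wait();
}

bridgeMessageWithEther().catch(console.error);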

3.2.3 The claimAsset function

The claimAsset function is used to claim an asset bridged from another network:

function claimAsset(
    bytes32[_DEPOSIT_CONTRACT_TREE_DEPTH] calldata smtProof,
    uint32 index,
    bytes32 mainnetExitRoot,
    bytes32 rollupExitRoot,
    uint32 originNetwork,
    address originTokenAddress,
    uint32 destinationNetwork,
    address destinationAddress,
    uint256 amount,
    bytes calldata metadata
)

The claimAsset parameters are:

  • smtProof: the Merkle proof, i.e. the array of sibling nodes needed to verify the leaf.
  • index: the leaf index.
  • mainnetExitRoot: the L1 EMT root that contains the leaf.
  • rollupExitRoot: the L2 EMT root that contains the leaf.
  • originNetwork: the network ID of the origin network the bridged asset belongs to.
  • originTokenAddress: the ERC-20 token address on the origin network; if it is 0x0000…0000, ether is being claimed.
  • destinationNetwork: the destination network ID, i.e. the network ID of the network claimAsset is called on.
  • destinationAddress: the recipient address that receives the bridged tokens.
  • amount: the amount of tokens being claimed.
  • metadata:
    • if the claimed token is ether, or an ERC-20 token native to the network claimAsset is called on, the metadata value is empty (the metadata parameter is set to 0x).
    • if bridgeAsset bridged an ERC-20 token native to the originNetwork, the metadata is provided so that claimAsset on the destination network can wrap the corresponding representative token.

// The parameters above are provided by the bridge service API

// Encode metadata: bridgeAsset for an ERC-20 token native to this network
metadata = abi.encode(
    _safeName(token),
    _safeSymbol(token),
    _safeDecimals(token)
);

// In claimAsset, build the corresponding wrapped token from the metadata
// Get ERC20 metadata
(
    string memory name,
    string memory symbol,
    uint8 decimals
) = abi.decode(metadata, (string, string, uint8));

// Create a new wrapped erc20 using create2
TokenWrapped newWrappedToken = (new TokenWrapped){
    salt: tokenInfoHash
}(name, symbol, decimals);

claimAsset verifies the validity of the corresponding leaf based on the parameters supplied by the user.
To prevent replay attacks, a given leaf must only be successfully verified once. The PolygonZkEVMBridge.sol contract keeps a claimedBitMap map that stores a nullifier bit for each successfully verified leaf index, as shown in Figure 5.

To optimize storage slot usage, each entry of the claimedBitMap map holds 256 nullifier bits for 256 already-verified leaves.
For the Merkle proof of a given leaf to be considered valid, the following conditions must hold (an off-chain sketch of these checks follows the list):

  • the nullifier bit for that leaf index in the claimedBitMap map must not yet be set.
  • the destinationNetwork parameter of the leaf must equal the network ID of the network claimAsset is called on.
  • the GEMT root obtained by hashing the mainnetExitRoot and rollupExitRoot parameters must already exist in the PolygonZkEVMGlobalExitRoot.sol contract.
  • the Merkle proof must be valid, i.e. it must reproduce the expected root.
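A minimal off-chain sketch of the last two checks (TypeScript with ethers v5; it assumes the GEMT root is the keccak256 of the packed mainnetExitRoot and rollupExitRoot, and a fixed-depth proof in which bit h of the leaf index selects the left/right position at level h, matching the append-only SMT of Appendix A):

import { ethers } from 'ethers';

// Sketch: compute the GEMT root from the two exit roots.
function calculateGlobalExitRoot(mainnetExitRoot: string, rollupExitRoot: string): string {
  return ethers.utils.solidityKeccak256(['bytes32', 'bytes32'], [mainnetExitRoot, rollupExitRoot]);
}

// Sketch: verify a fixed-depth Merkle proof for a leaf at the given index.
function verifyMerkleProof(leaf: string, smtProof: string[], index: number, root: string): boolean {
  let node = leaf;
  for (let height = 0; height < smtProof.length; height++) {
    const sibling = smtProof[height];
    // Bit `height` of the index tells whether the current node is a right (1) or left (0) child.
    if ((index >> height) & 1) {
      node = ethers.utils.solidityKeccak256(['bytes32', 'bytes32'], [sibling, node]);
    } else {
      node = ethers.utils.solidityKeccak256(['bytes32', 'bytes32'], [node, sibling]);
    }
  }
  return node === root;
}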

If the leaf is verified successfully, the bit in the claimedBitMap map corresponding to that leaf index is nullified, and the rest of the flow proceeds as shown in Figure 6.

As with bridgeAsset, claimAsset has three possible execution flows depending on the asset type:

  • (1) The claimed asset is ether:
    • the originTokenAddress parameter is "0x0000…0000"
    • the amount of ether given by the amount parameter is sent to the address in the destinationAddress parameter.
    • Since ether cannot be minted on demand, the L2 PolygonZkEVMBridge.sol contract is preminted with 100000000000 ether (100 billion ether) as ether bridging liquidity. Since all L2 ether is assumed to originate from L1, the L1 PolygonZkEVMBridge.sol contract does not need a pre-funded ether balance: every wei of ether on L2 has a backing wei of ether locked in the L1 contract. Note that the preminted liquidity in the L2 PolygonZkEVMBridge.sol contract has no inflationary effect on ether. [In the current genesis, however, the L2 PolygonZkEVMBridge.sol contract is preminted with 2 billion ether.]
  • (2) The claimed asset is an ERC-20 token native to this network:
    • the originNetwork parameter equals the network ID of the network claimAsset is called on.
    • this means the claimed asset originates from this network and was previously locked in the PolygonZkEVMBridge.sol contract.
    • the number of ERC-20 tokens given by the amount parameter is sent to the address in the destinationAddress parameter.
  • (3) The claimed asset is a representative ERC-20 token of an ERC-20 token from another network (an address-derivation sketch follows this list):
    • The PolygonZkEVMBridge.sol contract keeps a tokenInfoToWrappedToken map that stores the addresses of the representative ERC-20 token contracts deployed on this network. Representative ERC-20 token contracts are deployed with the create2 opcode, and the salt is the key of the tokenInfoToWrappedToken map, computed from the originNetwork and originTokenAddress parameters:
      // The tokens is not from this network
      // Create a wrapper for the token if not exist yet
      bytes32 tokenInfoHash = keccak256(
        abi.encodePacked(originNetwork, originTokenAddress)
      );
    • The bridge contract checks whether a representative ERC-20 token contract for the claimed asset already exists in the tokenInfoToWrappedToken map:
      • If it exists, the representative ERC-20 token contract has already been deployed, and the number of tokens given by the amount parameter can be minted to the account in the destinationAddress parameter.
      • If it does not exist, a new representative ERC-20 token contract is deployed with the create2 opcode and the previously computed salt. Using create2 with that salt deterministically binds the representative token's contract address to the origin token's contract address on the origin network. After deployment, the number of tokens given by the amount parameter can be minted to the account in the destinationAddress parameter, and additionally:
        • a NewWrappedToken event is emitted.
        • new entries for the representative ERC-20 token contract are added to the tokenInfoToWrappedToken and wrappedTokenToTokenInfo maps.
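The deterministic wrapped-token address can also be reproduced off-chain. Below is a minimal TypeScript sketch (assuming ethers v5; the bridge address and the TokenWrapped init-code hash are placeholders that would have to come from the real deployment, since the init code includes the constructor arguments):

import { ethers } from 'ethers';

// Sketch: derive the create2 address of a representative (wrapped) token.
// salt = keccak256(abi.encodePacked(originNetwork, originTokenAddress)), as in the snippet above.
const bridgeAddress = '0x' + '22'.repeat(20);                 // placeholder deployer (bridge) address
const tokenWrappedInitCodeHash = '0x' + '00'.repeat(32);      // placeholder: keccak256 of TokenWrapped creation code + constructor args

const originNetwork = 0;
const originTokenAddress = '0x' + '44'.repeat(20);

const salt = ethers.utils.solidityKeccak256(
  ['uint32', 'address'],
  [originNetwork, originTokenAddress]
);

const wrappedTokenAddress = ethers.utils.getCreate2Address(
  bridgeAddress,
  salt,
  tokenWrappedInitCodeHash
);

console.log({ salt, wrappedTokenAddress });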

Finally, regardless of the type of asset claimed, a ClaimEvent event is emitted.

3.2.4 The claimMessage function

The claimMessage function is used to claim a message from another network:

function claimMessage(
    bytes32[_DEPOSIT_CONTRACT_TREE_DEPTH] calldata smtProof,
    uint32 index,
    bytes32 mainnetExitRoot,
    bytes32 rollupExitRoot,
    uint32 originNetwork,
    address originAddress,
    uint32 destinationNetwork,
    address destinationAddress,
    uint256 amount,
    bytes calldata metadata
)

Like claimAsset, claimMessage verifies the leaf supplied by the user; since both leaf formats are identical, the two functions take the same parameters.
As with claimAsset, if the leaf verification passes, the leaf index is nullified by setting the corresponding bit in the claimedBitMap map.
Then, under the hood, the destinationAddress is called:

// Execute message
// Transfer ether
/* solhint-disable avoid-low-level-calls */
(bool success, ) = destinationAddress.call{value: amount}(
    abi.encodeCall(
        IBridgeMessageReceiver.onMessageReceived,
        (originAddress, originNetwork, metadata)
    )
);

As can be seen, the onMessageReceived function is called with the originAddress, originNetwork and metadata parameters, and if the message carries ether, the call value is set to the amount parameter. The metadata parameter is the message payload.
Note that the messaging service can be used to transfer ether to externally owned accounts (EOAs); however, an EOA cannot parse the message, so the message payload is not usable by it.
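To make the shape of that low-level call concrete, here is a minimal TypeScript sketch (assuming ethers v5; the interface fragment mirrors the onMessageReceived callback used above, and all values are placeholders) that encodes the same calldata:

import { ethers } from 'ethers';

// Sketch: encode the calldata of the onMessageReceived callback invoked by claimMessage.
const receiverIface = new ethers.utils.Interface([
  'function onMessageReceived(address originAddress, uint32 originNetwork, bytes data) payable',
]);

const originAddress = '0x' + '55'.repeat(20);   // msg.sender of bridgeMessage on the origin network
const originNetwork = 0;
const metadata = ethers.utils.defaultAbiCoder.encode(['string'], ['hello from L1']);

const calldata = receiverIface.encodeFunctionData('onMessageReceived', [
  originAddress,
  originNetwork,
  metadata,
]);

// destinationAddress.call{value: amount}(calldata) is what the bridge executes on-chain.
console.log(calldata);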

Finally, if the message is delivered successfully, a ClaimEvent event is emitted.

3.3 The L1 PolygonZkEVMGlobalExitRoot.sol contract

PolygonZkEVMGlobalExitRoot.sol is an L1 contract that computes and stores every new GEMT root. To make sure every added leaf can be verified in the future, every GEMT root must be stored; a map named globalExitRootMap stores all the GEMT roots computed so far.

When verifying a leaf, the L1 PolygonZkEVMBridge.sol contract reads the GEMT roots from the globalExitRootMap map.

The updateExitRoot function of the L1 PolygonZkEVMGlobalExitRoot.sol contract updates the EMT roots and computes the new GEMT root. updateExitRoot is called either by the L1 PolygonZkEVM.sol contract or by the L1 PolygonZkEVMBridge.sol contract:

  • If updateExitRoot is called by the L1 PolygonZkEVM.sol contract, the L2 EMT root is updated.
  • If updateExitRoot is called by the L1 PolygonZkEVMBridge.sol contract, the L1 EMT root is updated.

function updateExitRoot(bytes32 newRoot) external

  • Every L2 state transition is consolidated by the L1 PolygonZkEVM.sol contract (by verifying the ZKP submitted by the Aggregator); it then calls updateExitRoot on the L1 PolygonZkEVMGlobalExitRoot.sol contract to update the new L2 EMT root.
  • Whenever a new leaf is added to the L1 EMT of the L1 PolygonZkEVMBridge.sol contract, the bridge calls updateExitRoot on the L1 PolygonZkEVMGlobalExitRoot.sol contract to update the new L1 EMT root.

In both cases, an UpdateGlobalExitRoot event is emitted.

3.4 The L2 PolygonZkEVMGlobalExitRootL2.sol contract

L2 PolygonZkEVMGlobalExitRootL2.sol is a "special" contract deployed on L2.
[Note: the v1.1 bridge document differs in places from the current implementation.]
When the zkEVM node software run by the Aggregator finishes executing transaction batches, it can directly access the lastRollupExitRoot storage slot of the L2 PolygonZkEVMGlobalExitRootL2.sol contract, which stores the L2 EMT root.
That L2 EMT root is then submitted, together with the L2 state transition proof, to the L1 PolygonZkEVM.sol contract. If the proof passes verification, the new L2 EMT root is updated in the L1 PolygonZkEVMGlobalExitRoot.sol contract and a new GEMT root is computed, so that the L1 PolygonZkEVMBridge.sol contract has valid GEMT roots to verify the user claim transactions included in the aggregated batches.
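For inspection, a minimal TypeScript sketch reading that root back from L2 (assuming ethers v5, that lastRollupExitRoot is a public variable with the auto-generated getter of the same name, and a placeholder contract address and RPC URL):

import { ethers } from 'ethers';

// Sketch: read lastRollupExitRoot from PolygonZkEVMGlobalExitRootL2.sol on L2.
const globalExitRootL2Abi = ['function lastRollupExitRoot() view returns (bytes32)'];

async function readL2ExitRoot(): Promise<string> {
  const l2Provider = new ethers.providers.JsonRpcProvider('http://localhost:8123');
  const globalExitRootL2 = new ethers.Contract('0x' + '66'.repeat(20), globalExitRootL2Abi, l2Provider);
  return globalExitRootL2.lastRollupExitRoot();
}

readL2ExitRoot().then(console.log).catch(console.error);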

Appendix A: Gas-efficient append-only sparse Merkle tree

For background on sparse Merkle trees, see:
Sparse Merkle Tree
A sparse Merkle tree is a Merkle tree of intractable size that can still be handled efficiently, because it is assumed to be almost empty; in fact, it starts out empty. The tree is considered empty when all of its leaves hold the same zero (empty) value. Thanks to this assumption, the root of an empty tree can be computed with log2(n) hash operations, where n is the number of leaves, whereas a non-sparse tree needs 2n - 1 hash operations to compute its root. When computing the root of an empty tree, all nodes at a given level take the same value, so there is no need to compute every subtree at each level. The value of an empty node at a given level is called a Zero Hash, and it represents a subtree whose leaves are all zero.

Taking an empty sparse Merkle tree with 3 levels (8 leaves) as an example, the corresponding Zero Hash list is:
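Since the figure is not reproduced here, a minimal TypeScript sketch (assuming ethers v5 and keccak256 as the node hash, as in the bridge's EMT) that computes such a Zero Hash list:

import { ethers } from 'ethers';

// Sketch: compute the Zero Hash list for an empty append-only sparse Merkle tree.
// zeroHashes[h] is the root of an empty subtree of height h.
function buildZeroHashes(depth: number): string[] {
  const zeroHashes: string[] = [ethers.constants.HashZero]; // the empty leaf
  for (let h = 1; h <= depth; h++) {
    const below = zeroHashes[h - 1];
    zeroHashes.push(
      ethers.utils.solidityKeccak256(['bytes32', 'bytes32'], [below, below])
    );
  }
  return zeroHashes;
}

// 3-level (8-leaf) example from the text; the Polygon zkEVM EMTs use depth 32.
console.log(buildZeroHashes(3));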




References

[1] Polygon zkEVM Technical Document: Bridge v1.1

Original article: https://blog.csdn.net/mutourend/article/details/129986151