perf(l1): reduce BAL parallel-path overhead #6639
base: bal-devnet-7-pr
Changes from all commits:
- 7e081a6
- d7d9434
- f113d17
- 939a773
- 6fc5b11
- 51a768d
Diff in `crates/blockchain/blockchain.rs`:

```diff
@@ -859,36 +859,40 @@ impl Blockchain {
     const NUM_WORKERS: usize = 16;
     let parent_state_root = parent_header.state_root;

-    // === Stage A: Drain + accumulate all AccountUpdates ===
-    // BAL guarantees completeness, so we block until execution finishes.
-    let mut all_updates: FxHashMap<Address, AccountUpdate> = FxHashMap::default();
-    for updates in rx {
-        let current_length = queue_length.fetch_sub(1, Ordering::Acquire);
-        *max_queue_length = current_length.max(*max_queue_length);
-        for update in updates {
-            match all_updates.entry(update.address) {
-                Entry::Vacant(e) => {
-                    e.insert(update);
-                }
-                Entry::Occupied(mut e) => {
-                    e.get_mut().merge(update);
-                }
-            }
-        }
-    }
+    // === Stage A: receive the single BAL-derived batch ===
+    // execute_block_parallel calls bal_to_account_updates BEFORE the rayon tx
+    // loop and sends exactly one Vec<AccountUpdate>. Receiving once (instead of
+    // draining until channel close = exec end) lets Stage B's parallel storage
+    // roots overlap with parallel exec instead of serializing after it.
+    //
+    // BAL accounts are unique by address (one entry per touched address), so
+    // no merge step is needed — skip the FxHashMap detour entirely.
+    let updates: Vec<AccountUpdate> = match rx.recv() {
+        Ok(updates) => {
+            let current_length = queue_length.fetch_sub(1, Ordering::Acquire);
+            *max_queue_length = current_length.max(*max_queue_length);
+            updates
+        }
+        Err(_) => {
+            // Channel closed without a message — execution failed before
+            // bal_to_account_updates ran. Return empty work so the exec
+            // error surfaces in execution_result rather than being masked.
+            Vec::new()
+        }
+    };
```
Review comment on lines +870 to +882 (`crates/blockchain/blockchain.rs`):

**Single-recv invariant has no defensive check**

`handle_merkleization_bal` now consumes exactly one message and then never touches `rx` again. If `execute_block_parallel` is ever modified to send a second batch, the extra message will sit in the unbounded channel and be silently dropped — the merkleizer will proceed with only the first batch, producing a wrong state root with no error. A defensive `debug_assert!(rx.try_recv().is_err(), "expected exactly one batch from execute_block_parallel")` after the `Ok` arm would catch any accidental protocol change during development.
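The suggested check can be sketched in isolation. This is a minimal, self-contained illustration of the "exactly one batch" protocol plus the proposed `debug_assert!`; the unit `AccountUpdate` struct and the `receive_single_batch` helper are stand-ins invented for this sketch, not types or functions from the codebase:

```rust
use std::sync::mpsc;

// Stand-in for the real AccountUpdate type in the codebase.
#[derive(Clone, Debug)]
struct AccountUpdate;

// Receive exactly one batch, then assert the channel stays silent.
fn receive_single_batch(rx: &mpsc::Receiver<Vec<AccountUpdate>>) -> Vec<AccountUpdate> {
    let updates = match rx.recv() {
        // Happy path: the single BAL-derived batch.
        Ok(updates) => updates,
        // Sender dropped before producing a batch: execution failed early.
        Err(_) => Vec::new(),
    };
    // Defensive check for the single-recv invariant: try_recv() must report
    // Empty or Disconnected; a buffered second message means the sender side
    // changed its protocol without this function being updated.
    debug_assert!(
        rx.try_recv().is_err(),
        "expected exactly one batch from execute_block_parallel"
    );
    updates
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(vec![AccountUpdate, AccountUpdate]).unwrap();
    drop(tx); // channel closes after the single batch
    println!("{}", receive_single_batch(&rx).len()); // prints "2"
}
```

Because `debug_assert!` compiles away in release builds, the check costs nothing in production while still catching an accidental second send during development and testing.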
```diff
-    // Extract witness accumulator before consuming updates
+    // Witness accumulator (clone since we move `updates` into Stage B below).
     let accumulated_updates = if self.options.precompute_witnesses {
-        Some(all_updates.values().cloned().collect::<Vec<_>>())
+        Some(updates.clone())
     } else {
         None
     };

-    // Extract code updates and build work items with pre-hashed addresses
+    // Build work items with pre-hashed addresses + extract code updates.
     let mut code_updates: Vec<(H256, Code)> = Vec::new();
-    let mut accounts: Vec<(H256, AccountUpdate)> = Vec::with_capacity(all_updates.len());
-    for (addr, update) in all_updates {
-        let hashed = keccak(addr);
+    let mut accounts: Vec<(H256, AccountUpdate)> = Vec::with_capacity(updates.len());
+    for update in updates {
+        let hashed = keccak(update.address);
         if let Some(info) = &update.info
             && let Some(code) = &update.code
         {
```
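The shape of the new work-item pass can be shown standalone. This sketch assumes the batch is already unique by address, which is what lets the `FxHashMap` accumulation be dropped; the `AccountUpdate` struct, `build_work_items`, and the `DefaultHasher`-based `hash_address` are placeholders (the real code hashes 20-byte addresses with keccak256 into `H256` keys):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the real AccountUpdate; only the address matters here.
#[derive(Clone, Debug)]
struct AccountUpdate {
    address: [u8; 20],
}

// Placeholder for keccak256(address) -> H256; keeps the sketch runnable.
fn hash_address(addr: &[u8; 20]) -> u64 {
    let mut h = DefaultHasher::new();
    addr.hash(&mut h);
    h.finish()
}

// One pass over the batch, no map-based accumulation: since the BAL batch
// holds one entry per touched address, each update yields exactly one
// (hashed_address, update) work item.
fn build_work_items(updates: Vec<AccountUpdate>) -> Vec<(u64, AccountUpdate)> {
    let mut accounts = Vec::with_capacity(updates.len());
    for update in updates {
        let hashed = hash_address(&update.address);
        accounts.push((hashed, update));
    }
    accounts
}

fn main() {
    let batch = vec![
        AccountUpdate { address: [0x11; 20] },
        AccountUpdate { address: [0x22; 20] },
    ];
    println!("{}", build_work_items(batch).len()); // prints "2"
}
```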
**`Err(_)` path continues through Stage C (16 trie opens)**

When the channel is closed without a message (execution failure before `bal_to_account_updates`), the function returns `Vec::new()` and falls through to Stage C, which unconditionally spawns 16 threads to open the parent state trie even though all shards will have no items. Returning an empty `AccountUpdatesList` early in the `Err` arm would avoid this overhead without changing the visible behaviour, since the execution error surfaces via `execution_result?` regardless.
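The early-return shape the comment suggests can be sketched in isolation. The `AccountUpdate` and `AccountUpdatesList` types and the `merkleize` helper below are simplified stand-ins invented for this sketch (the real `AccountUpdatesList` carries more than a list of updates); only the control flow in the `Err` arm is the point:

```rust
use std::sync::mpsc;

// Simplified stand-ins for the types named in the review.
#[derive(Clone, Debug, PartialEq)]
struct AccountUpdate;

#[derive(Debug, Default, PartialEq)]
struct AccountUpdatesList {
    accounts: Vec<AccountUpdate>,
}

fn merkleize(rx: &mpsc::Receiver<Vec<AccountUpdate>>) -> AccountUpdatesList {
    let updates = match rx.recv() {
        Ok(updates) => updates,
        // Early return on a closed channel: skip the Stage C worker spawn
        // (16 trie opens) entirely. The caller still surfaces the execution
        // error through its own result, so visible behaviour is unchanged.
        Err(_) => return AccountUpdatesList::default(),
    };
    // ... Stages B/C (storage roots, sharded state trie work) would run here ...
    AccountUpdatesList { accounts: updates }
}

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<AccountUpdate>>();
    drop(tx); // simulate execution failing before any batch is sent
    assert_eq!(merkleize(&rx), AccountUpdatesList::default());
    println!("early-return path exercised"); // prints "early-return path exercised"
}
```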
bal_to_account_updates), the function returnsVec::new()and falls through to Stage C, which unconditionally spawns 16 threads to open the parent state trie even though all shards will have no items. Returning an emptyAccountUpdatesListearly in theErrarm would avoid this overhead without changing the visible behaviour, since the execution error surfaces viaexecution_result?regardless.Prompt To Fix With AI