runtime, graph: fix VID collisions in ipfs.map() callbacks #6416
Conversation
Each `ipfs.map()` callback created a fresh `BlockState` with `vid_seq` reset to `RESERVED_VIDS`. When multiple callbacks write to the same entity table, they all generate vids starting at `(block << 32) + 100`, producing duplicates.

Fix by threading `vid_seq` through the callback loop and updating it in `EntityCache::extend()` so the parent handler also continues from the right sequence after merging callback results.

Addresses the same underlying issue as PR #6336.
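The collision can be seen with a minimal model of VID assignment. This is not graph-node's actual `EntityCache`; the `(block << 32) | seq` layout and the `RESERVED_VIDS` starting value follow the description above, and the type and method names here are illustrative only:

```rust
// Minimal model of VID assignment: vid = (block << 32) | sequence,
// with the sequence starting at RESERVED_VIDS. If every callback
// starts its own sequence, two callbacks in the same block collide.
const RESERVED_VIDS: u32 = 100;

struct EntityCache {
    vid_seq: u32,
}

impl EntityCache {
    fn new() -> Self {
        EntityCache { vid_seq: RESERVED_VIDS }
    }

    fn next_vid(&mut self, block: u32) -> i64 {
        let vid = ((block as i64) << 32) | (self.vid_seq as i64);
        self.vid_seq += 1;
        vid
    }
}

fn main() {
    let block = 7;

    // Buggy behavior: each callback gets a fresh cache, so both
    // produce the same first vid for the same block.
    let first = EntityCache::new().next_vid(block);
    let second = EntityCache::new().next_vid(block);
    assert_eq!(first, second); // collision

    // Fixed behavior: thread the sequence through one cache, so
    // successive writes get distinct vids.
    let mut cache = EntityCache::new();
    let a = cache.next_vid(block);
    let b = cache.next_vid(block);
    assert_ne!(a, b);
}
```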
Pull request overview
Fixes internal VID (version ID) collisions when `ipfs.map()` processes multiple JSON objects in the same block by ensuring the VID sequence (`vid_seq`) is carried across callback invocations and preserved when merging entity caches.
Changes:
- Thread `vid_seq` through the `ipfs.map()` callback loop so each callback continues the sequence instead of resetting to `RESERVED_VIDS`.
- Update `EntityCache::extend()` to carry forward `vid_seq` after merges so subsequent writes continue from the correct sequence.
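A minimal sketch of what the `extend()` change might look like. The field and method names follow the PR description, but the real `EntityCache` holds much more state, and taking the maximum of the two sequences is my assumption about how "carry forward" is implemented:

```rust
// Hypothetical sketch: when merging a callback's cache into the
// parent's, keep the larger sequence so later writes in the parent
// continue numbering past the callback's entities.
struct EntityCache {
    vid_seq: u32,
    // ... entity maps and other state elided
}

impl EntityCache {
    fn extend(&mut self, other: EntityCache) {
        // ... merge entity maps here ...

        // Carry forward the sequence so post-merge writes
        // don't reuse vids already handed out by the callback.
        self.vid_seq = self.vid_seq.max(other.vid_seq);
    }
}

fn main() {
    let mut parent = EntityCache { vid_seq: 100 };
    // Suppose a callback wrote three entities, advancing its sequence.
    let callback = EntityCache { vid_seq: 103 };
    parent.extend(callback);
    // The parent now continues from the callback's sequence.
    assert_eq!(parent.vid_seq, 103);
}
```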
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `runtime/wasm/src/host_exports.rs` | Carries `vid_seq` across `ipfs.map()` callback iterations by seeding derived contexts and updating the sequence after each callback result. |
| `graph/src/components/store/entity_cache.rs` | Ensures `vid_seq` is propagated on cache merge via `extend()` to prevent post-merge VID reuse. |
```rust
let mut derived = ctx.derive_with_empty_block_state();
// Continue the vid sequence from the previous callback so
// each iteration doesn't reset to RESERVED_VIDS and
// produce duplicate VIDs.
derived.state.entity_cache.vid_seq = ctx.state.entity_cache.vid_seq;
let module = WasmInstance::from_valid_module_with_ctx_boxed(
    valid_module.clone(),
    derived,
    host_metrics.clone(),
    wasm_ctx.experimental_features,
)
.await?;
let result = module
    .handle_json_callback(&callback, &sv.value, &user_data)
    .await?;
// Carry forward vid_seq so the next iteration continues
// the sequence.
ctx.state.entity_cache.vid_seq = result.entity_cache.vid_seq;
```
This change will alter the VIDs assigned by `ipfs.map()` across multiple callback invocations (the second callback should now get `RESERVED_VIDS + 1`, etc.). The existing runtime test `runtime/test/src/test.rs::test_ipfs_map` currently hard-codes `vid: 100` for both inserted entities via `make_thing(..., 100)`; with this fix it should expect the second insert to have `vid: 101` (and any additional callbacks to be sequential), otherwise CI will fail.
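As a concrete illustration of the sequence this comment expects after the fix (assuming `RESERVED_VIDS == 100`, per the hard-coded value the test currently uses; variable names here are illustrative):

```rust
const RESERVED_VIDS: i64 = 100;

fn main() {
    let mut vid_seq = RESERVED_VIDS;

    // First ipfs.map() callback inserts an entity.
    let first = vid_seq;
    vid_seq += 1;

    // Second callback continues the sequence instead of resetting,
    // so its insert gets the next vid rather than a duplicate.
    let second = vid_seq;

    assert_eq!((first, second), (100, 101));
}
```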