5 changes: 5 additions & 0 deletions wgpu-core/src/device/life.rs
@@ -384,6 +384,11 @@ impl LifetimeTracker
for buffer in self.ready_to_map.drain(..) {
match buffer.map(snatch_guard) {
Some(cb) => pending_callbacks.push(cb),
// `None` means the mapping was cancelled between when this
// buffer was added to `ready_to_map` and now — typically
// because `unmap` was called on it. `unmap_inner` already fired
// the callback with `MapAborted` in that case, so there is
// nothing left to do here.
None => continue,
}
}
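The drain loop above relies on a common pattern: an `Option`-returning operation distinguishes "work completed, here is its callback" from "work already handled elsewhere". A minimal standalone sketch of that pattern (hypothetical names, not wgpu's actual types):

```rust
// Sketch: draining a list of ready items where some may have been
// cancelled out from under us. `try_finish` returns `None` for items
// that were already resolved through another path (mirroring
// `Buffer::map` returning `None` after `unmap_inner` aborted the
// mapping and fired the callback itself).
struct Pending {
    id: u32,
    cancelled: bool,
}

impl Pending {
    fn try_finish(&self) -> Option<u32> {
        if self.cancelled { None } else { Some(self.id) }
    }
}

fn drain_ready(ready: &mut Vec<Pending>) -> Vec<u32> {
    let mut callbacks = Vec::new();
    for item in ready.drain(..) {
        match item.try_finish() {
            Some(cb) => callbacks.push(cb),
            // Cancelled elsewhere; its callback already ran, so there
            // is nothing left to do for this item.
            None => continue,
        }
    }
    callbacks
}

fn main() {
    let mut ready = vec![
        Pending { id: 1, cancelled: false },
        Pending { id: 2, cancelled: true },
        Pending { id: 3, cancelled: false },
    ];
    let cbs = drain_ready(&mut ready);
    assert_eq!(cbs, vec![1, 3]);
    assert!(ready.is_empty());
}
```

The key property is that every item is removed from the list exactly once, whether or not it still needs a callback, so nothing is left behind for a later pass.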
6 changes: 6 additions & 0 deletions wgpu-core/src/device/queue.rs
@@ -281,6 +281,12 @@ impl Drop for Queue
self.maintain(last_successful_submission_index, &snatch_guard);
drop(snatch_guard);

// `wait_for_idle` above ensures all in-flight submissions have
// completed, so `maintain` should have drained every active
// submission. A false result here indicates that something went
// wrong — most likely the device was lost and `wait_for_idle`
// did not surface that as an error, leaving submissions
// unprocessed and their `mapped` buffer callbacks unfired.
assert!(queue_empty);

let closures = crate::device::UserClosures {
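The wait-then-drain-then-assert shape of that drop impl can be sketched in isolation. This is a toy model with hypothetical types, not wgpu's actual API; it only shows why the assert encodes an invariant rather than ordinary control flow:

```rust
// Sketch: after waiting for all outstanding work, a maintenance pass
// must retire every submission. Asserting the result catches paths
// (such as a lost device that `wait_for_idle` failed to surface) that
// would otherwise silently leave submissions unprocessed and their
// buffer-map callbacks unfired.
struct MiniQueue {
    in_flight: Vec<u64>, // pending submission indices
}

impl MiniQueue {
    fn wait_for_idle(&mut self) {
        // A real implementation blocks until the GPU finishes; in this
        // sketch all work is considered complete on return.
    }

    /// Retire completed submissions; returns true when none remain.
    fn maintain(&mut self) -> bool {
        self.in_flight.clear();
        self.in_flight.is_empty()
    }
}

fn main() {
    let mut q = MiniQueue { in_flight: vec![1, 2, 3] };
    q.wait_for_idle();
    let queue_empty = q.maintain();
    // Invariant from the drop impl above: everything was drained.
    assert!(queue_empty);
}
```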
19 changes: 17 additions & 2 deletions wgpu-core/src/resource.rs
@@ -438,7 +438,16 @@ pub type BufferAccessResult = Result<(), BufferAccessError>;
pub(crate) struct BufferPendingMapping {
pub(crate) range: Range<wgt::BufferAddress>,
pub(crate) op: BufferMapOperation,
// hold the parent alive while the mapping is active

/// A strong reference to the parent buffer, to hold it alive
/// while the mapping is pending.
///
/// This creates a temporary reference cycle: [`Buffer::map_state`] owns
/// this `BufferPendingMapping`, which in turn holds an `Arc<Buffer>` back
/// to the same buffer. The cycle is intentional — it keeps the buffer alive
/// while it sits in `LifetimeTracker::ready_to_map` with no other owner —
/// and is broken by `Buffer::map` when it moves the `BufferPendingMapping`
/// out of [`Buffer::map_state`].
Review comment (on lines +447 to +450):
Suggested change
/// to the same buffer. The cycle is intentional — it keeps the buffer alive
/// while it sits in `LifetimeTracker::ready_to_map` with no other owner —
/// and is broken by `Buffer::map` when it moves the `BufferPendingMapping`
/// out of [`Buffer::map_state`].
/// to the same buffer. The cycle is intentional, and is broken by
/// `Buffer::map` when it moves the `BufferPendingMapping` out of
/// [`Buffer::map_state`]. TODO: clarify the purpose of this

As we discussed offline, I'm not sure this fully captures what's going on here; I think we should revisit it.

pub(crate) _parent_buffer: Arc<Buffer>,
}

@@ -716,6 +725,12 @@ impl Buffer {
}

if let Some(queue) = device.get_queue().as_ref() {
// Flush pending writes to this buffer before scheduling the map.
//
// Such writes get added to `queue.life_tracker.lock().active`, so
// that `lock_life().map` below will find the buffer in that
// submission and wait for it, rather than placing it in
// `ready_to_map` prematurely.
match queue.flush_writes_for_buffer(self, snatch_guard) {
Err(err) => {
let state = mem::replace(&mut *self.map_state.lock(), BufferMapState::Idle);
@@ -725,7 +740,7 @@
return Err((op, err));
}
Ok(()) => {
// Schedule the buffer map in the lifetime tracker.
// Schedule the buffer map in the lifetime tracker.
//
// This call searches for use of the buffer by pending submissions.
// If we just flushed pending writes, that search is redundant; we
Expand Down