Skip to content

fix(langchain): preserve LangGraph control flow traces#1632

Open
hassiebp wants to merge 7 commits intomainfrom
codex/langgraph-control-flow-resume-traces
Open

fix(langchain): preserve LangGraph control flow traces#1632
hassiebp wants to merge 7 commits intomainfrom
codex/langgraph-control-flow-resume-traces

Conversation

@hassiebp
Copy link
Copy Markdown
Contributor

@hassiebp hassiebp commented Apr 20, 2026

Summary

This updates the LangChain callback handler to treat LangGraph control-flow exceptions as expected behavior instead of hard errors, and to preserve trace continuity across interrupt/resume cycles.

What changed

  • classify CONTROL_FLOW_EXCEPTION_TYPES as DEFAULT instead of ERROR in on_tool_error, on_retriever_error, on_llm_error, and on_chain_error
  • keep the interruption status message visible for those control-flow cases
  • persist a pending root trace context across an interrupt so the resumed root run is attached back to the original trace
  • add focused unit coverage for control-flow classification and resume trace reuse
  • add a few short code comments explaining why the resume trace context exists and when it is consumed

Root cause

LangGraph uses GraphBubbleUp subclasses for expected control flow such as interrupts and handoffs. The callback handler only recognized that pattern in on_chain_error, while other error callbacks always marked those paths as failures. Separately, the handler reset root callback state after a bubbled interrupt, so the next resumed root run started a fresh trace.

Impact

  • LangGraph interrupt and handoff flows stay visible without showing up as red error observations
  • resumed runs are stitched back onto the original trace instead of appearing as disconnected traces
  • the callback behavior is now consistent across chain, tool, retriever, and LLM error callbacks

Verification

  • python -m py_compile langfuse/langchain/CallbackHandler.py tests/unit/test_langchain.py
  • attempted local test execution, but this environment does not currently have pytest, langchain, or langchain_openai installed outside the project-managed environment
  • git commit required --no-verify because the local git hook references pre-commit, which is not available in this shell

Context

This addresses the behavior investigated in LFE-8001 and LFE-8520.

Disclaimer: Experimental PR review

Greptile Summary

This PR fixes two related LangGraph callback handler issues: it extends the CONTROL_FLOW_EXCEPTION_TYPES classification from on_chain_error to all four error callbacks (on_tool_error, on_retriever_error, on_llm_error), and adds _resume_trace_context to stitch a resumed graph run back onto the original interrupted trace.

  • The new _persist_resume_trace_context / _consume_root_trace_context pairing is logically sound: the resume context is consumed exactly once by the first root-level chain start that follows an interrupt, and _reset() (which only clears _child_to_parent_run_id_map) does not wipe it.
  • Minor inconsistency: _persist_resume_trace_context is invoked before observation.end() in on_chain_error but after in the three other error handlers — harmless since span IDs are assigned at creation time, but slightly confusing.
  • on_tool_error / on_retriever_error / on_llm_error do not call _exit_propagation_context() or _reset() for root-level control-flow errors (consistent with pre-existing behaviour for those callbacks, but now those paths also write _resume_trace_context).

Confidence Score: 5/5

Safe to merge — core logic is correct, findings are minor edge-case style issues that don't affect the primary LangGraph interrupt/resume path.

The resume trace-stitching logic is sound: _reset() doesn't clear _resume_trace_context, _consume_root_trace_context() is called only at root-level span creation, and the context is consumed exactly once. The two remaining comments are P2: one is a cosmetic ordering inconsistency (harmless), and the other covers an edge case (root-level tool/LLM errors) that does not arise in typical LangGraph usage. No data loss, no security issues, tests cover both new behaviours.

No files require special attention for merge.

Important Files Changed

Filename Overview
langfuse/langchain/CallbackHandler.py Adds _resume_trace_context state to stitch LangGraph resume runs back to the original trace, and introduces _get_error_level_and_status_message to consistently classify control-flow exceptions as DEFAULT across all four error callbacks. Minor inconsistency: _persist_resume_trace_context is called before observation.end() in on_chain_error but after in the other three handlers; and tool/retriever/LLM error paths don't call _exit_propagation_context/_reset() for root-level control-flow errors.
tests/unit/test_langchain.py Adds two focused unit tests: one verifying all four error callbacks classify control-flow exceptions as DEFAULT with the correct status message, and one verifying trace stitching across interrupt/resume/fresh cycles. Tests clean up CONTROL_FLOW_EXCEPTION_TYPES in finally blocks and correctly import otel_context.

Sequence Diagram

sequenceDiagram
    participant LG as LangGraph
    participant CB as CallbackHandler
    participant LF as Langfuse Trace

    Note over LG,LF: First invocation — interrupted
    LG->>CB: on_chain_start(run_id=A)
    CB->>LF: start_observation(trace_context=None) → Span A (new trace T1)
    LG->>CB: on_chain_error(GraphBubbleUp, run_id=A)
    CB->>CB: _get_error_level_and_status_message → (DEFAULT, msg)
    CB->>CB: _persist_resume_trace_context({trace_id: T1, parent_span_id: A})
    CB->>LF: observation.end(level=DEFAULT)
    CB->>CB: _exit_propagation_context() + _reset()

    Note over LG,LF: Second invocation — resume
    LG->>CB: on_chain_start(run_id=B)
    CB->>CB: _consume_root_trace_context() → {trace_id: T1, parent_span_id: A}
    CB->>LF: start_observation(trace_context={T1, A}) → Span B (child of A, same trace T1)
    LG->>CB: on_chain_end(run_id=B)
    CB->>CB: _clear_resume_trace_context()
    CB->>LF: span.end()

    Note over LG,LF: Third invocation — fresh
    LG->>CB: on_chain_start(run_id=C)
    CB->>CB: _consume_root_trace_context() → None (cleared)
    CB->>LF: start_observation(trace_context=None) → Span C (new trace T2)
    LG->>CB: on_chain_end(run_id=C)
    CB->>LF: span.end()
Loading
Prompt To Fix All With AI
This is a comment left during a code review.
Path: langfuse/langchain/CallbackHandler.py
Line: 186-193

Comment:
**`_persist_resume_trace_context` called before `observation.end()` in `on_chain_error` but after in other handlers**

In `on_chain_error`, `_persist_resume_trace_context(observation)` is called before `observation.end()`, while in `on_tool_error`, `on_retriever_error`, and `on_llm_error` it is called after. Since `observation.trace_id` and `observation.id` are assigned at span creation time (not at end time), both orderings are functionally safe. But the inconsistency is easy to misread — a future reader might assume `.end()` is required before reading those fields. Consider aligning all four handlers to call `_persist_resume_trace_context` after `.end()`.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: langfuse/langchain/CallbackHandler.py
Line: 949-952

Comment:
**Missing `_exit_propagation_context()` / `_reset()` for root-level tool/retriever/LLM control-flow errors**

For `on_tool_error`, `on_retriever_error`, and `on_llm_error`, when `parent_run_id is None` and the exception is a control-flow type, `_persist_resume_trace_context` is now called — but neither `_exit_propagation_context()` nor `_reset()` is called, unlike in `on_chain_error`. If a tool or LLM were ever the root-level run, the propagation context and `_child_to_parent_run_id_map` would not be cleaned up, which could corrupt handler state for the next invocation. This pattern exists in all three non-chain error handlers.

How can I resolve this? If you propose a fix, please make it concise.

Reviews (2): Last reviewed commit: "test(langchain): isolate resume trace as..." | Re-trigger Greptile

@hassiebp hassiebp marked this pull request as ready for review April 20, 2026 12:38
@hassiebp hassiebp changed the title [codex] fix(langchain): preserve LangGraph control flow traces fix(langchain): preserve LangGraph control flow traces Apr 20, 2026
@hassiebp hassiebp force-pushed the codex/langgraph-control-flow-resume-traces branch from e0e81a6 to 19a231e Compare April 20, 2026 12:42
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e0e81a67d5

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread langfuse/langchain/CallbackHandler.py Outdated
Comment thread langfuse/langchain/CallbackHandler.py Outdated
@github-actions
Copy link
Copy Markdown

@claude review

Copy link
Copy Markdown

@claude claude bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Additional findings (outside current diff — PR may have been updated during review):

  • 🔴 langfuse/langchain/CallbackHandler.py:666-681 — When on_chain_start fails internally (its exception is caught and swallowed), no observation is registered, so _detach_observation() returns None in both on_chain_end and on_chain_error. In that case the _clear_resume_trace_context() call — which is guarded inside if span is not None / if observation is not None — is skipped entirely, while the finally block only calls _exit_propagation_context() and _reset(). A stale _resume_trace_context from a prior LangGraph interrupt therefore persists and is silently consumed by the very next root-level invocation, stitching an unrelated run onto the interrupted trace. The fix is to call self._clear_resume_trace_context() unconditionally in the finally block of both methods (guarded only by parent_run_id is None).

    Extended reasoning...

    What the bug is and how it manifests

    The PR introduces _resume_trace_context to stitch a LangGraph-resumed root run back onto the original interrupted trace. _persist_resume_trace_context() saves the context when a control-flow error bubbles up, and _consume_root_trace_context() (called in on_chain_start, on_tool_start, etc.) reads and clears it. However, two terminal callbacks — on_chain_end and on_chain_error — only call _clear_resume_trace_context() inside their if span/observation is not None guard. The finally block in both methods calls _reset() (which clears only _child_to_parent_run_id_map) and _exit_propagation_context(), but never clears _resume_trace_context.

    The specific code path that triggers it

    1. A prior control-flow interrupt (e.g. DummyControlFlowError) causes on_chain_error to call _persist_resume_trace_context(), setting _resume_trace_context.
    2. The next invocation calls on_chain_start. Inside on_chain_start, propagate_attributes().__enter__() (or any code before start_observation) raises an exception. The outer except Exception in on_chain_start catches and swallows it — no observation is registered.
    3. LangChain's runtime (which manages its own run lifecycle independently) later fires on_chain_end or on_chain_error for that run.
    4. In on_chain_end: _detach_observation(run_id) returns Nonespan is None_clear_resume_trace_context() is skipped → finally calls only _exit_propagation_context() and _reset().
    5. _resume_trace_context is still populated and will be consumed by the subsequent fresh root-level invocation, silently stitching it onto the interrupted trace.

    Why existing code doesn't prevent it

    _reset() (called in finally) only clears _child_to_parent_run_id_map — there is no mention of _resume_trace_context there. _consume_root_trace_context() would clear it, but only if a successful on_chain_start runs to the point where start_observation is invoked. The only defensive clears are inside the if observation/span is not None guards, which are bypassed when startup failed.

    Impact

    A completely unrelated root-level invocation (e.g. a new user query) gets attached to a previous, interrupted trace instead of starting a fresh one. This produces invisible, incorrect trace stitching in the Langfuse UI — the new run appears as a child of the old interrupted trace — a silent data-correctness bug with no error signal to the user.

    How to fix it

    Add self._clear_resume_trace_context() to the finally block of both on_chain_end and on_chain_error, guarded by parent_run_id is None:

    finally:
        if parent_run_id is None:
            self._clear_resume_trace_context()   # ADD THIS
            self._exit_propagation_context()
            self._reset()

    This mirrors the same safe-guard already present in on_llm_end (line added in this PR: self._clear_resume_trace_context() before self._reset()).

    Step-by-step proof

    1. Call handler.on_chain_error(DummyControlFlowError(...), run_id=R1) for a root run (parent_run_id=None). _persist_resume_trace_context() sets handler._resume_trace_context = {"trace_id": "T1", ...}.
    2. _reset() fires in finally — clears _child_to_parent_run_id_map only. _resume_trace_context is still {"trace_id": "T1", ...}.
    3. handler.on_chain_start({"name": "LangGraph"}, ..., run_id=R2) is called. Suppose propagate_attributes().__enter__() raises inside the try block. Exception is caught; no observation is stored for R2.
    4. LangChain calls handler.on_chain_end({}, run_id=R2, parent_run_id=None). _detach_observation(R2) returns None. The if span is not None block is skipped. _clear_resume_trace_context() is never called. finally calls _reset() — still doesn't touch _resume_trace_context.
    5. handler.on_chain_start({"name": "LangGraph"}, ..., run_id=R3) fires for a completely fresh root invocation. _consume_root_trace_context() returns {"trace_id": "T1", ...} and clears it. R3's span is now created under trace T1 — incorrectly stitched onto the interrupted run.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 19a231e570

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread langfuse/langchain/CallbackHandler.py Outdated
Comment thread langfuse/langchain/CallbackHandler.py
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ad2e408fdf

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread langfuse/langchain/CallbackHandler.py Outdated
Comment thread langfuse/langchain/CallbackHandler.py
Comment thread langfuse/langchain/CallbackHandler.py
Comment thread langfuse/langchain/CallbackHandler.py Outdated
Comment thread langfuse/langchain/CallbackHandler.py
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fccbffcb3e

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread langfuse/langchain/CallbackHandler.py
Comment thread langfuse/langchain/CallbackHandler.py
Comment thread langfuse/langchain/CallbackHandler.py Outdated
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 12bc17b58b

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread langfuse/langchain/CallbackHandler.py Outdated
Comment thread langfuse/langchain/CallbackHandler.py
Copy link
Copy Markdown

@claude claude bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

All previously flagged critical issues have been addressed — the fix is correct and the new tests cover the key scenarios well.

Extended reasoning...

Overview

This PR touches two files: the LangChain callback handler (~1300 LOC) and its unit tests (~830 LOC added). It adds interrupt/resume trace stitching for LangGraph control-flow exceptions and extends the existing DEFAULT-level classification from chain errors to tool, retriever, and LLM error handlers.

Security risks

None. The change is purely observability instrumentation with no auth, crypto, or permission logic.

Level of scrutiny

The PR is moderately complex — it introduces new stateful fields, multiple interacting helper methods, and a non-trivial cleanup strategy — but the test coverage is thorough. Tests cover: multi-thread keyed resume contexts, non-resume Command passthrough, span creation failure recovery, bounded dict eviction, and concurrent root run key preservation. The two remaining inline comments are both nits (performance of the O(N²) _reset cleanup path for large traces, and a test-isolation gap if an assertion in the last test fails without a finally guard), neither of which affects correctness for typical usage.

Other factors

Every issue raised in previous review rounds — matching all Command types, missing resume key seeding in on_tool_start/on_retriever_start, missing cleanup finally blocks, span creation atomicity, unbounded dict, and coupled imports — has been addressed in the current revision. The logic is sound and the inline nit comments are already attached to the relevant lines.

Comment thread langfuse/langchain/CallbackHandler.py
Comment on lines +825 to +833
with patch("builtins.__import__", side_effect=import_without_langgraph_command):
reloaded_module = importlib.reload(callback_handler_module)
assert reloaded_module.LANGGRAPH_COMMAND_TYPE is None
assert any(
exception_type.__name__ == "GraphBubbleUp"
for exception_type in reloaded_module.CONTROL_FLOW_EXCEPTION_TYPES
)

importlib.reload(callback_handler_module)
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟡 The final importlib.reload(callback_handler_module) restore call in test_graphbubbleup_import_is_independent_from_command_import is not inside a finally block, so if either assertion fails the module remains with LANGGRAPH_COMMAND_TYPE = None. Although this test is currently last in the file under default ordering (limiting immediate impact), test randomization, --lf reruns, or future test additions after it would cause subsequent tests calling _is_langgraph_resume() to silently return False, breaking resume-stitching. Wrap the restore reload in a try/finally.

Extended reasoning...

What the bug is and how it manifests

The test test_graphbubbleup_import_is_independent_from_command_import (lines 814–833) reloads callback_handler_module inside a with patch(...) block, which causes the module to be imported without langgraph.types, setting LANGGRAPH_COMMAND_TYPE = None. The restore call importlib.reload(callback_handler_module) on the final line is outside the with block but is NOT guarded by a finally clause.

The specific code path that triggers it

If either assertion — assert reloaded_module.LANGGRAPH_COMMAND_TYPE is None or assert any(exception_type.name == 'GraphBubbleUp' ...) — raises an AssertionError, Python unwinds out of the with block (which correctly restores builtins.import), but execution never reaches the final importlib.reload. The module object pointed to by the module-level variable callback_handler_module retains LANGGRAPH_COMMAND_TYPE = None for the remainder of the process.

Why existing code does not prevent it

importlib.reload modifies the module object in-place. The with patch(...) context manager only restores builtins.import; it does not undo the reload's side effects on the module object. There is no try/finally to guarantee the restore reload runs even on assertion failure.

Addressing the refutation

The refutation correctly observes that test_graphbubbleup_import_is_independent_from_command_import is currently the last test defined in the file, which under pytest's default top-to-bottom collection order means no subsequent same-file test would be affected by a failed assertion. This limits the practical impact in the default configuration. However, three scenarios break this assumption: (1) pytest-randomly or pytest-random-order plugins shuffle test order; (2) pytest --lf reruns only the failed test first in a new session, and its broken module state contaminates all other tests collected afterward; (3) any developer adding a new test after this one inherits the silent failure. The refutation's claim of a 'duplicate of bug_004' is not borne out by the synthesis — only bug_001 appears here, and the underlying correctness issue is genuine.

What the impact would be if triggered

Every subsequent test on the same callback_handler_module that calls handler._is_langgraph_resume() would find LANGGRAPH_COMMAND_TYPE is None and always return False. Tests such as test_control_flow_resume_uses_thread_keyed_explicit_resume_context and test_control_flow_resume_ignores_non_resume_commands would fail with misleading assertion errors unrelated to the feature they are testing, making diagnosis difficult.

Step-by-step proof

  1. pytest-randomly shuffles order; test_graphbubbleup_import_is_independent_from_command_import runs before test_control_flow_resume_uses_thread_keyed_explicit_resume_context.
  2. The with patch(...) block reloads the module, setting callback_handler_module.LANGGRAPH_COMMAND_TYPE = None.
  3. An assertion fails; AssertionError propagates; the with block exits cleanly (builtins.import restored), but the final importlib.reload is skipped.
  4. callback_handler_module.LANGGRAPH_COMMAND_TYPE remains None.
  5. The next test creates CallbackHandler(), calls on_chain_start with Command(resume=...); _is_langgraph_resume returns False because LANGGRAPH_COMMAND_TYPE is None.
  6. Resume context is never consumed; the resumed span starts on a disconnected trace; the assertion resumed_span.parent.span_id == initial_span.context.span_id fails spuriously.

How to fix

Wrap the restore in a try/finally so it runs regardless of assertion outcome:

try:
    with patch('builtins.__import__', side_effect=import_without_langgraph_command):
        reloaded_module = importlib.reload(callback_handler_module)
        assert reloaded_module.LANGGRAPH_COMMAND_TYPE is None
        assert any(
            exception_type.__name__ == 'GraphBubbleUp'
            for exception_type in reloaded_module.CONTROL_FLOW_EXCEPTION_TYPES
        )
finally:
    importlib.reload(callback_handler_module)

Comment on lines 460 to +478
if observation is not None:
level, status_message = self._get_error_level_and_status_message(error)
observation.update(
level="ERROR",
status_message=str(error),
level=cast(
Optional[Literal["DEBUG", "DEFAULT", "WARNING", "ERROR"]],
level,
),
status_message=status_message,
input=kwargs.get("inputs"),
cost_details={"total": 0},
).end()

if parent_run_id is None and level == "DEFAULT":
self._persist_resume_trace_context(
run_id=run_id, observation=observation
)
elif parent_run_id is None:
self._clear_root_run_resume_key(run_id)

Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟡 In on_chain_error, _persist_resume_trace_context is correctly called before observation.end() (line 868), so the resume key is saved even if end() throws. In the three non-chain handlers (on_retriever_error, on_tool_error, on_llm_error), observation.update().end() is chained and _persist_resume_trace_context is called afterward; if end() throws, the except block is entered without saving the resume key, and the subsequent finally: _reset(run_id) destroys the entire _RootRunState including its resume_key, losing it permanently. Fix by moving _persist_resume_trace_context before the .end() call in these three handlers, mirroring on_chain_error.

Extended reasoning...

What the bug is and how it manifests

In on_chain_error, observation.update() and observation.end() are separate calls, with _persist_resume_trace_context invoked between them (line 868 before line 875). Even if observation.end() raises, the resume key has already been transferred into _pending_resume_trace_contexts. The three non-chain handlers (on_retriever_error, on_tool_error, on_llm_error) use observation.update(...).end() as a single chained expression, then call _persist_resume_trace_context afterward. If the chained .end() raises, Python immediately jumps to the except block and _persist_resume_trace_context is never reached.

The specific code path that triggers it

Three conditions must hold simultaneously: (1) the error handler is invoked for a root-level run (parent_run_id is None); (2) the exception is a LangGraph control-flow type so level == "DEFAULT"; (3) observation.update().end() raises internally. When all three hold, the resume key stored in root_run_state.resume_key is never popped into _pending_resume_trace_contexts.

Why existing code does not prevent it

The finally: if parent_run_id is None: self._reset(run_id) block then executes unconditionally. _reset calls self._root_run_states.pop(run_state.root_run_id, None), which removes the entire _RootRunState object. Since _persist_resume_trace_context was never called, root_run_state.resume_key still holds the thread's resume key — but the object is now discarded. The key never reaches _pending_resume_trace_contexts.

Step-by-step proof

  1. on_tool_start(run_id=A, parent_run_id=None, metadata={"thread_id": "t1"}) creates _RootRunState(resume_key="t1") in _root_run_states[A].
  2. Tool raises GraphBubbleUp; on_tool_error fires with run_id=A, parent_run_id=None.
  3. _get_error_level_and_status_message returns ("DEFAULT", ...); code enters the if observation is not None branch.
  4. observation.update(...).end() raises (SDK internal error, OTel resource exhaustion, etc.).
  5. except Exception catches; execution jumps to finally. _persist_resume_trace_context at line 473 was never reached; _pending_resume_trace_contexts has no entry for "t1".
  6. finally: _reset(A) pops _root_run_states[A] — the _RootRunState with resume_key="t1" is discarded.
  7. Next Command(resume=...) invocation on thread "t1": _take_root_trace_context returns None. A disconnected trace is created instead of stitching to the interrupted one.

How to fix

Split observation.update().end() into separate calls and move _persist_resume_trace_context / _clear_root_run_resume_key before the .end() call in all three non-chain handlers, mirroring on_chain_error. The observation.trace_id and observation.id fields are assigned at span creation time, not at end time, so calling persist before end() is safe.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant