
GH-126910: Add gdb support for unwinding JIT frames#146071

Open
diegorusso wants to merge 36 commits into python:main from diegorusso:add-gdb-support

Conversation

@diegorusso
Contributor

@diegorusso diegorusso commented Mar 17, 2026

This PR adds support to GDB for unwinding JIT frames by emitting EH frames.
It reuses part of the existing perf_jit infrastructure from @pablogsal.

This is part of the overall plan laid out here: #126910 (comment)

The output in GDB looks like:

Program received signal SIGINT, Interrupt.
0x0000fffff7fb50f8 in py::jit_entry:<jit> ()
(gdb) bt
#0  0x0000fffff7fb50f8 in py::jit_entry:<jit> ()
#2  0x0000aaaaaad5e314 in _PyEval_EvalFrameDefault (tstate=0xfffff7fb80f0, frame=0xfffff774bab0, throwflag=6, throwflag@entry=0)
    at ../../Python/generated_cases.c.h:5711
#3  0x0000aaaaaad61350 in _PyEval_EvalFrame (tstate=0xaaaaab1d57b0 <_PyRuntime+344632>, frame=0xfffff7fb8020, throwflag=0)
    at ../../Include/internal/pycore_ceval.h:122
...

@diegorusso diegorusso added the 🔨 test-with-buildbots Test PR w/ buildbots; report in status section label Mar 18, 2026
@bedevere-bot

🤖 New build scheduled with the buildbot fleet by @diegorusso for commit ac018d6 🤖

Results will be shown at:

https://buildbot.python.org/all/#/grid?branch=refs%2Fpull%2F146071%2Fmerge

If you want to schedule another build, you need to add the 🔨 test-with-buildbots label again.

@bedevere-bot bedevere-bot removed the 🔨 test-with-buildbots Test PR w/ buildbots; report in status section label Mar 18, 2026
@pablogsal
Member

I have some questions about the EH frame generation and how it applies to the different code regions.

Looking at jit_record_code, it's called in two places:

  1. For jit_shim (line 811): the entry shim compiled from Tools/jit/shim.c
  2. For jit_executor (line 757): the full executor code region (code_size + state.trampolines.size)

Both end up calling _PyJitUnwind_GdbRegisterCode, which builds the same EH frame via _PyJitUnwind_BuildEhFrame.

The EH frame in elf_init_ehframe describes a specific prologue/epilogue sequence. On x86_64 for example:

push %rbp          (1 byte)
mov %rsp, %rbp     (3 bytes)
call *%rcx         (2 bytes)
pop %rbp           (1 byte)
ret

I understand how this is correct for jit_shim. Looking at Tools/jit/shim.c, it's a normal C function that calls into the executor:

_Py_CODEUNIT *
_JIT_ENTRY(...) {
    jit_func_preserve_none jitted = (jit_func_preserve_none)exec->jit_code;
    return jitted(exec, frame, stack_pointer, tstate, ...);
}

The compiler will emit exactly the prologue/epilogue the EH frame describes.

But I don't understand how the same EH frame is correct for jit_executor. The executor code region is a concatenation of many stencils, each compiled from Tools/jit/template.c with __attribute__((preserve_none)), chaining together via __attribute__((musttail)) tail calls. These stencils don't have the push rbp / mov rsp,rbp prologue that the EH frame describes. They use a completely different calling convention.

The FDE covers the full code_size + trampolines.size range but the CFI instructions only describe ~7 bytes of prologue/epilogue. DWARF will apply the last rule (CFA = RSP + 8 on x86_64) to all remaining addresses in the range. I don't understand why that rule would be correct at arbitrary points within the stencil code. Is it guaranteed that preserve_none stencils never modify RSP? Or is there something else going on that makes this work?

The test (test_jit.py) sets a breakpoint at id(42) which hits in the interpreter, not in the middle of a stencil. So the test verifies that the symbols appear in GDB's backtrace, but I don't think it exercises unwinding from an arbitrary point within the executor code region. Could we add a test that triggers unwinding from inside JIT code (e.g., via a signal or Ctrl+C while executing JIT code)?

Am I missing something about how the stencils interact with the stack, or is the EH frame intentionally approximate for the executor region?

Member

@pablogsal pablogsal left a comment


A bunch of questions I have from reading the code so far

Comment thread Python/jit_unwind.c Outdated
struct jit_code_entry *first_entry;
};

static volatile struct jit_descriptor __jit_debug_descriptor = {
Member


Should these be non-static? The GDB JIT interface spec says GDB locates __jit_debug_descriptor and __jit_debug_register_code by name in the symbol table. With static linkage they would be invisible in .dynsym on stripped builds and when CPython is loaded as a shared library via dlopen. Am I missing something, or would this silently break in release/packaged builds where .symtab is stripped?

Maybe also worth adding __attribute__((used)) to prevent the linker from eliding them?

Contributor Author


Yes, you are right. Instead of removing the static, I've exported them with the Py_EXPORTED_SYMBOL macro.

Comment thread Lib/test/test_gdb/gdb_jit_sample.py Outdated
id(42)
return

warming_up = True
Member


Could this loop hang? When warming_up=True, the call passes warming_up_caller=True which returns immediately at line 8, so the recursive body never actually executes. If the JIT does not activate via some other path, would this not spin forever until the timeout kills it? Should there be a max iteration count as a safety net?

Also, line 16 uses bitwise & instead of and. Was that intentional? It means is_active() is always evaluated even when is_enabled() is False.

Contributor Author


I've simplified the test; the loop is now more controlled and deterministic.

Comment thread Python/jit.c Outdated
return;
}
_PyJitUnwind_GdbRegisterCode(
code_addr, (unsigned int)code_size, entry, filename);
Member


code_size comes in as size_t but gets cast to unsigned int here. I know JIT regions will not be 4GB, but should the API just take size_t throughout for consistency?

Contributor Author


This is now done.

@diegorusso
Contributor Author


What this change synthesises for jit_executor is one unwind description for the executor as a whole, not compiler-emitted per-stencil CFI. Because the stencils are musttail-chained, the jumps between stencils do not add extra native call frames. The unwind job here is just to recover the caller of the executor frame. We don't want to describe each stencil as its own frame.

When GDB stops at a PC inside py::jit_executor:<jit>:

  • it finds the FDE whose range covers that PC
  • takes the CFI row for that PC,
  • computes the CFA from that row
  • uses the CFA rules to recover the caller registers and return PC.

On AArch64, for most of the covered executor range, the synthetic CFI says:

  • CFA = x29 + 16
  • saved x29 at CFA - 16
  • saved x30 at CFA - 8

That is enough for GDB to recover the caller frame in py::jit_shim:<jit>, and then continue unwinding into _PyEval_*.

Good catch on the testing gap. I've now added a new test that breaks inside the JIT executor. It still breaks at builtin_id, but GDB then finishes out through the C helper frames until the selected frame is py::jit_executor:<jit> (thanks to some GDB Python scripting), single-steps twice inside the executor, and only then runs bt.
The backtrace is now taken with the current PC in executor code itself, and it unwinds through py::jit_shim:<jit> and then back into _PyEval_*.

@Fidget-Spinner
Member

Fidget-Spinner commented Mar 25, 2026

@diegorusso @pablogsal I think I may have come up with a solution that works.

EDIT: I think GDB doesn't only use backtrace, so we're still stuck. Sorry for the noise!

Background info (skip if not interested):

  1. glibc seems to call out to libgcc on linux when needing to unwind in backtrace from execinfo.h.
  2. libgcc does not seem to implement proper frame pointer backchaining for x86_64 and AArch64, only PPC.
  3. So we need eh_frames it seems for backtrace.

The current issue:

  1. In DWARF, you can specify in the eh_frame the canonical frame address (CFA). Traditionally, it's defined as rsp + offset.
  2. The problem with the current PR however, is that each stencil changes rsp. That means the CFA is usually wrong for the executor.
  3. A possible solution is to generate DWARF opcodes for each stencil that say "bump rsp", but that's slow and complicated.

The solution:

  1. Notice: we already have frame pointers in prologue and preserve them! That means you can just say it's tied to rbp at fixed offset of rbp + 16 all the time instead of tying it to an rsp that changes. I got this idea, and also checked with cranelift's Chris Fallin, who said they do it. Thanks a lot Chris!
  2. So that means, with frame pointers, eh frame creation is a lot simpler. You can have just one eh frame for the whole JIT code still, we just need the eh_frame to point to rbp + 16.
  3. This ensures correctness while also allowing for a simple implementation.

This should work with backtrace from execinfo.h without issues. It should even work with GDB step debugging/bt, with the exception that it might be broken at the function prologue. However, the most important thing is that we unbreak all C extension code that uses backtrace! Also, backtrace should be fast, as our DWARF would be tiny and simple.

TLDR: frame pointers = eh_frame is simple.

@pablogsal
Member

pablogsal commented Mar 25, 2026


@diegorusso I have to say that I am tremendously confused here.

My understanding of what this code is supposed to do is pretty simple: if GDB or backtrace() stops at an arbitrary PC inside py::jit_executor:<jit>, the unwind info for that exact PC should let the unwinder reconstruct the caller frame (py::jit_shim:<jit>) and then continue into _PyEval_*.

So the real question is not “does the FDE cover the address range?” and it is not “do the stencils form one logical frame?”. The real question is: does the CFI row that applies at that PC actually describe the machine state there?

That is the part I do not think has been explained.

I agree with the narrow musttail point: tail-chaining the stencils means you do not accumulate one native call frame per stencil. Fine. But that only tells us that we want to unwind the executor as one logical frame. It does not tell us that one fixed synthetic unwind recipe is valid everywhere inside the executor blob.

And that is exactly where I think the argument goes off the rails.

jit_executor is not one ordinary C function with one stable prologue/epilogue. It is a concatenation of many preserve_none stencils, glued together with musttail. For a single synthetic FDE to be correct across the whole region, there has to be some invariant that says “for any PC in executor code, the CFA and saved return state look like this”. I do not see that invariant stated anywhere, and the current explanation seems to jump from “musttail” straight to “the unwind is correct”, which are not the same thing.

A concrete x86_64 example of why this seems wrong to me:

with the same sort of flags used for executor stencils, a preserve_none + musttail function can compile to something as trivial as

jmp callee

or, if it needs temporary stack space / spills, something more like

subq $24, %rsp
...
addq $24, %rsp
jmp callee

In the first case there is no %rbp frame at all. In the second case the CFA is temporarily %rsp-relative and changes inside the body. So I do not understand how one synthetic %rbp-based description for the entire covered executor range is supposed to be generally correct.

For jit_shim I can at least see the intended story, because it is one ordinary non-tail C function that calls into JIT code. For jit_executor, I still do not see what makes the unwind recipe valid for arbitrary PCs inside the blob.

Also, I rebuilt the branch locally and tried the exact “finish to py::jit_executor:<jit>, step twice, then bt” flow. On x86_64 I still get:

#0  py::jit_executor:<jit> ()
#1  ?? ()
...
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

So this is not just a theoretical concern for me. I still do not understand why the model being described here is supposed to work. I am of course not objecting to the goal; I am saying I still do not see the correctness argument. If the claim is that this is actually a correct unwind description for jit_executor as a whole, then I think what is missing from the discussion is the key invariant: what exactly is guaranteed to be true about the CFA / saved FP / saved return address at an arbitrary PC inside executor code that makes this one synthetic FDE valid?

@diegorusso
Contributor Author


Thanks for the comment. I regenerated the x86_64 and AArch64 stencils after the recent frame-pointer changes. What we have today is that the shim gets a real frame-pointer prologue, but the executor stencils are still not uniformly rbp/x29-framed, so I don't think the current generated code is enough to justify a single executor-wide CFA = rbp + 16 / x29 + const rule for arbitrary PCs in the blob.
If we want to go in that direction, we would need to force frame pointers for the executor stencils too, not just for the shim (which doesn't make much sense, as we moved away from them!)

The current implementation is still one synthetic executor-wide FDE. The unwinder uses the current PC to select that FDE and apply its CFI to recover the caller frame. That works where the actual machine state matches the synthetic rule at the stop PC, but it is still approximate executor-wide unwind metadata, not exact per-stencil CFI.

Separately, once this PR lands, wiring up libgcc-backed backtrace should be fairly easy. We already synthesise .eh_frame; the remaining work is to call the appropriate __register_frame* and deregistration API for that blob so the unwinder can see it.

@diegorusso
Contributor Author


Ok, I think now I understand. After re-checking the generated stencils I agree the current explanation was too bold.

musttail only establishes the narrow point that the stencil-to-stencil transitions do not accumulate one native call frame per stencil. It does not by itself establish the stronger property needed for unwinding: that for an arbitrary PC inside jit_executor, the CFA and saved return state always have the shape described by one executor-wide FDE.

That stronger property is the missing invariant here.

After looking again at the regenerated x86_64 and AArch64 stencils, I don't think we have that invariant today:

  • only jit_shim gets a guaranteed frame-pointer-based prologue
  • executor stencils are not uniform
    • on x86_64, many executor stencils are frameless and/or adjust rsp
    • on AArch64, many executor stencils just save x30 and adjust sp, without establishing x29
    • only a small subset of executor stencils actually materialise a conventional rbp/x29 frame

I cannot justify the current synthetic executor-wide FDE as being correct for arbitrary PCs in the executor blob. The new test I added is still useful, but it proves something narrower: that the synthetic FDE works for the exercised in-executor stop. It does not prove that the same CFI is exact for every interior PC in the region (as you showed in your example).

I think the real options are:

  1. Make the invariant true in codegen/stencils by forcing all executor stencils to follow one documented frame-layout rule, so one FDE is actually justified. For example:
     • x86_64: every executor stencil establishes the same rbp-based frame shape
     • AArch64: every executor stencil establishes the same x29/x30-based frame shape, or at least guarantees that x29 is stable and the original return-to-shim state is always recoverable in one fixed way

  2. Emit finer-grained unwind metadata: keep the current mixed stencil shapes, but stop pretending one unwind recipe covers the whole executor. That means multiple FDEs or per-range metadata.

  3. Narrow the claim: keep executor symbolisation, but do not claim a correct executor-wide unwind description until we have either (1) or (2).

  4. Something else?

The current implementation does not yet have the invariant needed to justify one executor-wide FDE for jit_executor but at the same time I don't really like the suggestions above.

Let me think about it

@Fidget-Spinner
Member

the executor stencils still are not uniformly rbp/x29-framed,

The current generation reserves rbp, so all current stencils assume an rbp-based frame. Do you think it would fix things if we emitted our own prologue for the very first JIT executor uop (push %rbp; movq %rsp, %rbp) and a teardown (popq %rbp) at all rets? I have a working branch that does that. FWIW, it can be done quite easily using the assembly manipulator we have in the JIT. Would that make it appropriately rbp/x29-framed?
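
For illustration, the prologue/epilogue idea can be sketched as a byte-level rewrite (a toy model, not the actual branch; the real JIT would do this through its assembly manipulator precisely because a naive byte scan can mis-patch 0xC3 bytes occurring inside longer instructions):

```python
# Standard x86_64 encodings:
#   push %rbp        -> 55
#   mov  %rsp, %rbp  -> 48 89 e5
#   pop  %rbp        -> 5d
#   ret              -> c3
PROLOGUE = bytes([0x55, 0x48, 0x89, 0xE5])
EPILOGUE = bytes([0x5D])
RET = 0xC3

def frame_stencil(code: bytes) -> bytes:
    """Prepend a conventional rbp prologue and pair every ret with pop %rbp."""
    out = bytearray(PROLOGUE)
    for b in code:
        if b == RET:
            out += EPILOGUE  # restore the caller's rbp before returning
        out.append(b)
    return bytes(out)

# A toy "stencil": xor %eax, %eax; ret  (31 c0 c3)
framed = frame_stencil(bytes([0x31, 0xC0, RET]))
```

With every stencil framed this way, a single rbp-based unwind rule would describe all interior PCs except the few prologue/epilogue bytes themselves.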

call the appropriate __register_frame* and deregistration API for that blob so the unwinder can see it.

Unfortunately, it seems you're right here. I dug around libgcc a little more, and that's the only interface I see that intercepts _Unwind_Find_FDE. The function is public but undocumented, which is annoying. I'm also surprised that libgcc does not seem to fall back to frame pointers on x86_64 or AArch64.

@diegorusso
Contributor Author

The current generation reserves rbp, so all current stencils assume an rbp-based frame.

Not all of them. See the _SET_IP family, but you can see others as well. On AArch64, if we reserve the frame pointer, it is barely touched (only a few uops set it). If we don't reserve it, then we get the standard prologue/epilogue for the majority of the uops.

I'm not entirely sure your statement is true.

@Fidget-Spinner
Member

_SET_IP

Huh that's surprising! On x86_64, the current main produces code that doesn't touch rbp at all (from manual inspection at least). I wonder why it's different on AArch64, thanks for reporting back.

@Fidget-Spinner
Member


Oh sorry, I'm wrong: that's not main but my branch, where I had to pass the usual flags:

            "-fno-omit-frame-pointer",
            "-mno-omit-leaf-frame-pointer",

to get code shaped like that.

@pablogsal
Member

Hey, I did another pass and found something important in the DWARF/GDB unwind info. Rather than making you push again and again, I pushed some commits myself; please check them out.

The issue is that the old GDB CFI described the JIT executor like a normal function with a real prologue. On x86_64 it effectively told GDB to unwind as if the code began like this:

push %rbp
mov  %rsp, %rbp

and the equivalent DWARF rule was basically:

DW_CFA_advance_loc 1          # after push %rbp
DW_CFA_def_cfa_offset 16
DW_CFA_offset rbp, -16
DW_CFA_advance_loc 3          # after mov %rsp,%rbp
DW_CFA_def_cfa_register rbp

That is a valid description for a normal C-style entry sequence, but it is not what the executor stencils actually do. The executor code keeps the frame pointer pinned across the whole region, so there is no real prologue for GDB to “walk through” instruction by instruction. If GDB stops near the start of the executor and the DWARF says “pretend the prologue already happened”, it computes the CFA from the wrong place and can read the saved frame pointer / return address from the wrong stack slots.

The new GDB-only path fixes that by describing the frame layout that is actually true while the executor is running. On x86_64 the equivalent rule is now just:

DW_CFA_def_cfa rbp, 16
DW_CFA_offset rip, -8
DW_CFA_offset rbp, -16

with no per-PC prologue simulation in the FDE. In other words, instead of telling GDB “watch a fake prologue happen”, we now tell it “this is the frame layout for this JIT region, unwind from that”. The same idea is used on AArch64 with x29 / x30.
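
For reference, those three rules encode to just a handful of CFI bytes. A minimal sketch, assuming the usual x86_64 DWARF conventions (data alignment factor -8, register numbers rbp = 6 and return-address column 16); offsets in DW_CFA_offset are stored divided by the data alignment factor, so cfa-8 encodes as 1 and cfa-16 as 2:

```python
DW_CFA_def_cfa = 0x0C   # operands: ULEB128 register, ULEB128 offset
DW_CFA_offset  = 0x80   # high-2-bit opcode; low 6 bits hold the register

def uleb128(n: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

RBP, RA = 6, 16
cfi  = bytes([DW_CFA_def_cfa]) + uleb128(RBP) + uleb128(16)  # CFA = rbp + 16
cfi += bytes([DW_CFA_offset | RA]) + uleb128(1)              # rip at cfa-8
cfi += bytes([DW_CFA_offset | RBP]) + uleb128(2)             # rbp at cfa-16
```

Because there are no DW_CFA_advance_loc opcodes, the same rule applies at every PC in the FDE's range, which is exactly the "steady state, no fake prologue" semantics described above.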

I also validated this by hand in GDB instead of trusting only the Python test harness. I built a clean JIT-enabled tree, broke in builtin_id, finished until the selected frame was py::jit_entry:<jit>, single-stepped inside the JIT blob, and checked info frame / bt. With the new DWARF the unwind chain stayed sane and went back into _PyEval_* and PyEval_EvalCode as expected.

I also did a negative check by forcing the wrong unwind mode on purpose, and the backtrace immediately turned into garbage which is a strong sign that this change is fixing a real mismatch between the unwind metadata and the code we actually emit.

Add a shared helper that asserts exactly one py::jit_entry frame
above at least one eval frame, so regressions producing duplicate
JIT frames or JIT-below-eval can't pass the old tolerant regex.
diegorusso and others added 2 commits April 20, 2026 17:29
Remove absolute_addr from elf_init_ehframe_perf code path
Implement a hack on AArch64 to tell where the shim prologue is positioned; to be properly fixed later.
The GDB CFI hand-rolled in jit_unwind.c couldn't correctly describe the
compiled shim's prologue on AArch64 and relied on hardcoded offsets
that would silently invalidate under any compiler/flag change. Tools/jit
now compiles shim.c with -fasynchronous-unwind-tables, extracts its
.eh_frame at build time, and ships the CIE/FDE CFI bytes as a blob in
jit_stencils.h; jit_unwind.c splices those bytes into the synthetic
EH frame at runtime, so whatever prologue clang emits is described
accurately. Executor regions keep the hand-written steady-state rule,
which is our pinned-frame-pointer invariant (enforced by
Tools/jit/_optimizers.py _validate()), not a guess at compiler output.
Also: "Backtrace stopped: frame did not save the PC" is now a hard
AssertionError instead of a silent skip (it always indicates a real
unwind bug), and the three JIT tests share a get_stack_trace override
that opts out of the generic "?? ()" skip so unrelated libc-without-
debug-info frames don't mask a passing test.
Link the JIT shim into the binary and remove
the old runtime shim-specific unwind machinery.
Rename _PyJIT to _PyJIT_Entry and the synthetic
executor frame to py::jit:executor.
Make executor unwinding materialize _PyJIT_Entry
beneath py::jit:executor.
Update the GDB tests to require both the executor
frame and _PyJIT_Entry.
Accept either JIT backtrace shape in test_jit.py:
- py::jit:executor -> _PyEval_*
- py::jit:executor -> _PyJIT_Entry -> _PyEval_*

This avoids baking in architecture-specific unwind details while still
checking that GDB gets out of the JIT region and back into the eval loop.
@read-the-docs-community

read-the-docs-community Bot commented Apr 28, 2026

Documentation build overview

📚 cpython-previews | 🛠️ Build #32500094 | 📁 Comparing aa959c1 against main (0fcf2b7)


93 files changed · + 1 added · ± 92 modified


Member

@markshannon left a comment

I've just tried this out and it works nicely (on Linux AArch64). Here's an example:

Hitting ctrl-C while running this under gdb:

import math

def count(seq):
    t = 0
    for i in seq:
        t += math.sqrt(1.0)

Gives me this backtrace:

#0  PyFloat_AsDouble (op=0xfffff7840770) at Objects/floatobject.c:253
#1  0x0000fffff7779074 in math_1 (err_msg=0xfffff777bf38 "expected a nonnegative input, got %s", can_overflow=0, func=<optimized out>, arg=<optimized out>) at ./Modules/mathmodule.c:792
#2  math_sqrt (self=<optimized out>, args=<optimized out>) at ./Modules/mathmodule.c:1161
#3  0x0000fffff7fe72f0 in py::jit:executor ()
#4  0x0000aaaaaab57960 in _PyEval_EvalFrameDefault (tstate=0xaaaaab31b3c0, frame=0xfffffffffffffffd, throwflag=-1424409936, throwflag@entry=0) at Python/generated_cases.c.h:5941
#5  0x0000aaaaaad3a96c in _PyEval_EvalFrame (throwflag=0, frame=0xfffff7fea020, tstate=0xaaaaab1c7de8 <_PyRuntime+344816>) at ./Include/internal/pycore_ceval.h:122
...

and I can use the up, down, step and continue commands as I would expect.

Stepping into the jitted code even works (sort of):

Single stepping until exit from function py::jit:executor,
which has no line number information.

Comment thread Python/jit.c
code_addr, code_size, entry, filename);
return NULL;
}
#endif
Member

Unless I am missing something, I think that _PyPerfJit_WriteNamedCode() is now invoked from JIT compilation, but perf_map_jit_write_entry_with_name() still updates the global perf_jit_map_state and emits a multi-record PerfUnwindingInfo/PerfLoad sequence without holding map_lock. If so, two threads can interleave records and race on code_id, producing a corrupted jitdump or duplicate IDs. Or is it guaranteed that we will never have concurrent JIT compilation?
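
The hazard can be modeled with a toy sketch (the names below are illustrative, not CPython's): a jitdump entry is a multi-record sequence plus a code_id bump, and only a lock held across the whole sequence keeps the records contiguous and the IDs unique:

```python
import threading

lock = threading.Lock()
records = []
next_code_id = 0

def write_entry(name):
    """Model of one jitdump entry: two records that must stay adjacent,
    sharing a freshly allocated code_id."""
    global next_code_id
    with lock:  # one lock spans the whole multi-record write
        code_id = next_code_id
        next_code_id += 1
        records.append(("unwind", code_id, name))
        records.append(("load", code_id, name))

threads = [threading.Thread(target=write_entry, args=(f"fn{i}",))
           for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every unwind record is immediately followed by its matching load record,
# and every entry got a distinct code_id.
pairs_ok = all(records[i][1:] == records[i + 1][1:]
               for i in range(0, len(records), 2))
ids = {r[1] for r in records}
```

Moving the append/increment outside the `with lock:` block would let two threads interleave their record pairs, which is the corruption described above.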

Contributor Author

ok, good catch. I will add a lock in perf_map_jit_write_entry_with_name when updating the perf_jit_map_state

Contributor Author

This has been addressed here: 37287ba

Comment thread Python/jit_unwind.c Outdated
Comment on lines +651 to +666
/* Executor steady-state rule (our invariant, not the compiler's). */
#ifdef __x86_64__
DWRF_U8(DWRF_CFA_def_cfa); // CFA = %rbp + 16
DWRF_UV(DWRF_REG_BP);
DWRF_UV(16);
DWRF_U8(DWRF_CFA_offset | DWRF_REG_RA);
DWRF_UV(9); // return-to-_PyJIT_Entry at cfa-72
#elif defined(__aarch64__) && defined(__AARCH64EL__) && !defined(__ILP32__)
DWRF_U8(DWRF_CFA_def_cfa); // CFA = x29 + 96
DWRF_UV(DWRF_REG_FP);
DWRF_UV(96);
DWRF_U8(DWRF_CFA_offset | DWRF_REG_FP);
DWRF_UV(12); // caller x29 at cfa-96
DWRF_U8(DWRF_CFA_offset | DWRF_REG_RA);
DWRF_UV(11); // caller x30 at cfa-88
#else
Member

@pablogsal Apr 30, 2026

I still feel quite worried about this: it hard-codes constants (9, 12, 11, 16, 96, 72) that describe _PyJIT_Entry's frame layout, specifically the slot where the call to jitted(...) spills its return address. Nothing guarantees that layout won't change when we change the clang version. Indeed, I can imagine a bunch of reasons it could change, like changing the PyStackRef_ZERO_BITS width or the number of preserve_none args in Tools/jit/shim.c, as none of these are pinned. If any of those change, -72 silently becomes wrong and bt produces a readable-looking but bogus frame; the shape assertions in test_jit.py won't catch a misaligned-by-one-slot RA.

Is there anything that guarantees this won't ever change?

Contributor Author

As discussed on Discord, for now it will be enough to have a mechanism that fails at build time if the shape of the shim differs from what we expect.
The correct solution would be to read the shim's eh_frame at build time and parametrise the above rules. We agreed to leave this for 3.16.

Contributor Author

I tried to add such a check, but I didn't like it and it introduced some complexity. Actually, going from there to relying on the shim's eh_frame is not much work, and I did it here: 13db9a9

Now the eh_frame is the source of truth for generating the CIE of the DWARF that we emit to unwind the executor.

Comment thread Python/jit_unwind.c Outdated
offset += shstrtab_size;
const size_t str_off = offset;
offset += strtab_size;
offset = _Py_SIZE_ROUND_UP(offset, sizeof(Elf64_Sym));
Member

sizeof(Elf64_Sym) is 24, which is not a power of two. _Py_SIZE_ROUND_UP(n, a) is (((n) + ((a)-1)) & ~((a)-1)), which only works for power-of-2 alignments. I checked, and indeed: round_up(1, 24) == 8, round_up(17, 24) == 40, round_up(25, 24) == 32.

Today the code happens to work because Elf64_Sym only requires 8-byte alignment for its uint64_t fields and the macro coincidentally produces 8-aligned offsets. But anyone reading this might assume sym_off is a multiple of 24, and future code could come to rely on that invariant. We can easily fix it with:

 /* Elf64_Sym requires 8-byte alignment for st_value/st_size. */
 offset = _Py_SIZE_ROUND_UP(offset, 8);

matching sh_addralign = 8 for SH_SYMTAB.
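
A quick sketch confirming the macro's behavior for non-power-of-two alignments (a direct Python transcription of the C expression):

```python
def size_round_up(n, a):
    # Python transcription of _Py_SIZE_ROUND_UP(n, a):
    # (((n) + ((a)-1)) & ~((a)-1))
    return (n + a - 1) & ~(a - 1)

# With a power of two it behaves as expected:
assert size_round_up(1, 8) == 8
assert size_round_up(17, 8) == 24

# With sizeof(Elf64_Sym) == 24 (not a power of two) it is simply wrong:
assert size_round_up(1, 24) == 8    # not a multiple of 24, and below 24
assert size_round_up(17, 24) == 40  # not a multiple of 24
assert size_round_up(25, 24) == 32  # the next multiple of 24 would be 48
```

The masking trick only works when a-1 is an all-ones bit pattern, i.e. when a is a power of two; for a = 24 the mask ~23 clears bits that a correct rounding would need to keep.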

Contributor Author

This has been addressed here: 6e3a2ee

Comment thread Python/optimizer.c
res->trace = (_PyUOpInstruction *)(res->exits + exit_count);
res->code_size = length;
res->exit_count = exit_count;
res->jit_gdb_handle = NULL;
Member

Nit: executor->jit_gdb_handle = NULL is initialized at three sites: allocate_executor(), make_executor_from_uops() and make_cold_executor() . The latter two call allocate_executor() first, which already zeros the field. The two extra assignments are redundant

Contributor Author

This has been addressed here: c063ff9

Comment thread Python/jit.c
PyErr_Format(PyExc_RuntimeWarning, "JIT %s (%d)", message, hint);
}

static void *
Member

jit_record_code returns void *, but the perf-trampoline branch returns NULL on success. The caller assigns the result into executor->jit_gdb_handle; _PyJIT_Free later only unregisters when it is non-NULL. The "NULL = no registration handle to release" semantics make the perf path and the failure path of the GDB path indistinguishable to the free side. This is correct, but I had to trace through two backends to figure out that NULL is not always a failure. I think we should either add a comment that failures are intentionally swallowed or propagate errors.

Contributor Author

ok, fair point. Comment added here: 74e65ca

Comment thread Python/jit_unwind.c Outdated
struct jit_code_entry *first_entry;
};

static PyMutex jit_debug_mutex = {0};
Member

jit_debug_mutex is a process-global PyMutex. After fork(), the child inherits the mutex state; if the parent was inside gdb_jit_register_code() at fork time (mutex held by the dying parent thread), the child deadlocks the next time it tries to register a JIT entry. CPython's _PyOS_AfterFork_Child reset path doesn't know about jit_debug_mutex because it's static.

Contributor Author

I'm a bit lost here. In a previous comment you told me to create it a separate static PyMutex #146071 (comment)

What do you suggest to do?

Member

Since the child process won't be in the middle of gdb_jit_register_code, you can simply set jit_debug_mutex = {0}; in _PyOS_AfterFork_Child.

You'll need to make jit_debug_mutex non-static, of course, and it should be renamed to _Py_jit_debug_mutex.

Contributor Author

This has been addressed here: fb0c17e


7 participants