Update AI transpiler introduction tutorial to new template #4968

Open

henryzou50 wants to merge 7 commits into Qiskit:main from henryzou50:update-QATSI

Conversation

@henryzou50
Collaborator

Summary

Revised ai-transpiler-introduction.ipynb to follow the Tutorial_Template structure, simplifying the tutorial to focus on comparing the default transpiler (SABRE) vs the AI transpiler using random circuits.

Key changes from the old notebook:

  • Simplified circuit selection: Uses only random circuits with 2-qubit gates (instead of efficient_su2, QFT, BV, Clifford, and permutation circuits)
  • Added execution and fidelity testing: The old notebook skipped Steps 3-4 entirely ("no experiments will be executed"). The revised version uses mirror circuits to measure fidelity on both an Aer simulator (with a depolarizing noise model) and real hardware
  • Removed Part II benchmarking and Part III permutation synthesis: Consolidated into a focused Default vs AI comparison
  • Added quantitative analysis: Summary tables with mean/stdev for 2Q depth, gate count, and transpilation time, plus percentage improvement plots
  • Added commentary: Markdown cells discussing the depth vs gate count trade-off (AI optimizes depth, SABRE minimizes gate count), transpilation time scaling, and mirror circuit fidelity limitations at large scale
  • Template compliance: Follows the 4-step Qiskit patterns structure (Map, Optimize, Execute, Post-process) for both small-scale simulator and large-scale hardware examples
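The mirror-circuit fidelity test described above can be sketched in plain Python: the transpiled circuit is followed by its inverse, so an ideal (noise-free) run returns the all-zeros state, and the observed all-zeros fraction serves as a fidelity proxy. The helper below is an illustrative assumption, not the notebook's actual code; the counts dictionary follows the bitstring-to-shots format that Qiskit results use.

```python
def mirror_fidelity(counts: dict[str, int]) -> float:
    """Fraction of shots returning the all-zeros bitstring.

    A mirror circuit applies U followed by U-inverse, so an ideal run
    returns |0...0> with probability 1; the shortfall from 1 measures
    the noise accumulated by the transpiled circuit.
    """
    total = sum(counts.values())
    if total == 0:
        raise ValueError("empty counts")
    width = len(next(iter(counts)))  # infer bitstring width from the keys
    return counts.get("0" * width, 0) / total

# Example: a hypothetical 10-qubit mirror run with 1000 shots.
counts = {"0" * 10: 930, "0000000001": 50, "1000000000": 20}
print(round(mirror_fidelity(counts), 3))  # 0.93
```

At large scale this proxy degrades, as the commentary bullet above notes: deep mirror circuits accumulate so much noise that the all-zeros signal vanishes into the uniform background.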

Tutorial structure:

  • Small-scale simulator example (6-25 qubits, depth 4): Full 4-step walkthrough with Aer simulator fidelity test on the 10-qubit case
  • Large-scale hardware example (26-50 qubits, depth 8): Compressed workflow with real hardware submission on the 26-qubit case
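The summary tables and percentage-improvement metrics mentioned above reduce to simple statistics per metric. A minimal stdlib sketch, using made-up placeholder numbers rather than results from the notebook:

```python
from statistics import mean, stdev

# Hypothetical 2Q-depth results per random-circuit trial (placeholder data).
default_2q_depth = [48, 52, 50, 47, 53]  # default transpiler (SABRE)
ai_2q_depth = [40, 43, 41, 39, 42]       # AI transpiler

def summarize(name: str, values: list[int]) -> None:
    # One summary-table row: mean and sample standard deviation.
    print(f"{name:>8}: mean={mean(values):.1f}, stdev={stdev(values):.2f}")

summarize("default", default_2q_depth)
summarize("ai", ai_2q_depth)

# Percentage improvement of the AI transpiler relative to the default.
improvement = 100 * (mean(default_2q_depth) - mean(ai_2q_depth)) / mean(default_2q_depth)
print(f"AI improves mean 2Q depth by {improvement:.1f}%")
```

The same pattern applies per metric (2Q depth, gate count, transpilation time) to build the full summary table and the improvement plots.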

Revised ai-transpiler-introduction.ipynb following the Tutorial_Template
structure.

- Compare the default (SABRE) vs AI transpiler using random circuits with
  only 2-qubit gates across small-scale (6-25 qubits) and large-scale
  (26-50 qubits) examples
- Use mirror circuits to evaluate transpilation fidelity on both Aer
  simulator (with depolarizing noise model) and real hardware
- Add summary tables with mean/stdev and percentage improvement metrics
- Add percentage improvement plots comparing AI vs default transpiler
- Include commentary on depth vs gate count trade-offs between the two
  strategies
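The "random circuits with only 2-qubit gates" benchmark above can be sketched structurally without Qiskit: each of `depth` layers randomly pairs up qubits, each pair standing in for one two-qubit gate. The function name and layer representation are illustrative assumptions, not the tutorial's code (the notebook presumably builds real `QuantumCircuit` objects).

```python
import random

def random_2q_layers(num_qubits: int, depth: int, seed: int = 0) -> list:
    """Return `depth` layers, each a list of disjoint (q0, q1) pairs
    standing in for two-qubit gates applied in parallel."""
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    layers = []
    for _ in range(depth):
        qubits = list(range(num_qubits))
        rng.shuffle(qubits)
        # Pair adjacent entries of the shuffled order; the leftover
        # qubit (when num_qubits is odd) idles this layer.
        layers.append([(qubits[i], qubits[i + 1])
                       for i in range(0, num_qubits - 1, 2)])
    return layers

layers = random_2q_layers(num_qubits=6, depth=4)
print(len(layers), len(layers[0]))  # 4 3
```

Because every gate is a two-qubit gate, the circuit's 2Q depth equals its total depth, which keeps the depth and gate-count comparisons between the two transpilers clean.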
@henryzou50 henryzou50 requested a review from a team on April 10, 2026 at 09:20
@qiskit-bot
Contributor

One or more of the following people are relevant to this code:

  • @henryzou50
  • @nathanearnestnoble

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



@henryzou50 henryzou50 changed the title Revise AI transpiler introduction tutorial Update AI transpiler introduction tutorial to new template Apr 10, 2026
Collaborator

@nathanearnestnoble nathanearnestnoble left a comment


The large-scale section states "step 1-4 compress into single code block," but the code is actually in multiple blocks. I would suggest removing "compress into single code block".

@github-project-automation github-project-automation Bot moved this to In Review in Docs Planning Apr 13, 2026
@henryzou50
Collaborator Author

The large-scale section states "step 1-4 compress into single code block," but the code is actually in multiple blocks. I would suggest removing "compress into single code block".

Thanks @nathanearnestnoble, I just pushed a change for that and added the reasoning for why we don't compress the large-scale example into a single code block in this tutorial.

