95 changes: 80 additions & 15 deletions crates/quantum_info/src/clifford.rs
@@ -14,6 +14,8 @@ use std::fmt;
use fixedbitset::FixedBitSet;
use ndarray::{Array2, ArrayView2};

use crate::dense_pauli::DensePauli;

/// Symplectic matrix.
pub struct SymplecticMatrix {
/// Number of qubits.
@@ -288,15 +290,16 @@ impl Clifford {
);
}

/// Evolving the single-qubit Pauli-Z with Z on qubit qbit.
/// Returns the evolved Pauli in the a sparse ZX format: (sign, z, x, indices).
/// Evolving a Pauli with a single non-identity Z-term on qubit `qbit` by the given Clifford.
///
/// Return the evolved Pauli in a sparse ZX format: (sign, z, x, indices).
pub fn get_inverse_z(&self, qbit: usize) -> (bool, Vec<bool>, Vec<bool>, Vec<u32>) {
Collaborator:

Do we still need this implementation, given that we now have `evolve_single_qubit_pauli_dense`?

It would be good to update the existing code paths to use the new functionality, also from a testing POV (since the new function is not used anywhere yet, right?).

Member Author:

This is the "sparse" variant of the same function. From my local experiments and an offline discussion with you, we might need both variants: applying Litinski to a single-qubit RZ-rotation is faster with the sparse format, while applying it to a multi-qubit PPR is faster with the dense format.
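The sparse-versus-dense trade-off above comes down to what each return shape stores. A minimal standalone sketch of the two shapes, with hypothetical stand-in types (not the crate's actual `DensePauli` or the exact tuple layout of `get_inverse_z`), assuming the sparse `z`/`x` vectors are indexed parallel to `indices`:

```rust
// Hypothetical stand-ins for the two Pauli representations; not the PR's types.

/// Sparse ZX format: z/x bits are stored only for the qubits in `indices`.
struct SparsePauli {
    sign: bool,
    z: Vec<bool>,
    x: Vec<bool>,
    indices: Vec<u32>,
}

/// Expand the sparse form into dense per-qubit z/x bit vectors.
fn to_dense(p: &SparsePauli, num_qubits: usize) -> (bool, Vec<bool>, Vec<bool>) {
    let mut z = vec![false; num_qubits];
    let mut x = vec![false; num_qubits];
    for (k, &q) in p.indices.iter().enumerate() {
        z[q as usize] = p.z[k];
        x[q as usize] = p.x[k];
    }
    (p.sign, z, x)
}

fn main() {
    // Z on qubit 0 and Y (both z and x set) on qubit 2, out of 3 qubits.
    let p = SparsePauli {
        sign: false,
        z: vec![true, true],
        x: vec![false, true],
        indices: vec![0, 2],
    };
    let (sign, z, x) = to_dense(&p, 3);
    assert!(!sign);
    assert_eq!(z, vec![true, false, true]);
    assert_eq!(x, vec![false, false, true]);
}
```

The sparse form wins when the evolved Pauli touches few qubits (single-qubit rotations); the dense form wins when most qubits are non-identity (multi-qubit PPRs), matching the author's observation.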

let mut z = Vec::with_capacity(self.num_qubits);
let mut x = Vec::with_capacity(self.num_qubits);
let mut indices = Vec::with_capacity(self.num_qubits);
let mut pauli_indices = Vec::<usize>::with_capacity(2 * self.num_qubits);
// Compute the y-count to avoid recomputing it later
let mut pauli_y_count: u32 = 0;
let mut pauli_y_count: u8 = 0;
for i in 0..self.num_qubits {
let z_bit = self.tableau[qbit][i];
let x_bit = self.tableau[qbit][i + self.num_qubits];
@@ -310,34 +313,97 @@
if z_bit {
pauli_indices.push(i + self.num_qubits);
}
pauli_y_count += (x_bit && z_bit) as u32;
pauli_y_count += (x_bit && z_bit) as u8;
}
}
let phase = compute_phase_product_pauli(self, &pauli_indices, pauli_y_count);

(phase, z, x, indices)
}

/// Evolve a Pauli with a single non-identity term (either X, Y, or Z on qubit `qbit`)
/// by the given Clifford.
/// The non-identity Pauli term is represented as a pair `(pauli_z, pauli_x)` of boolean values.
///
/// Return the evolved Pauli as (dense) Pauli.
pub fn evolve_single_qubit_pauli_dense(
&self,
pauli_z: bool,
pauli_x: bool,
qbit: usize,
) -> DensePauli {
let num_qubits = self.num_qubits;
let mut z = FixedBitSet::with_capacity(num_qubits);
let mut x = FixedBitSet::with_capacity(num_qubits);
let mut pauli_indices = Vec::<usize>::with_capacity(2 * num_qubits);
// Compute the y-count to avoid recomputing it later
let mut pauli_y_count: u8 = 0;
for i in 0..num_qubits {
let (z_bit, x_bit) = match (pauli_z, pauli_x) {
(true, false) => (
// pauli Z
self.tableau[qbit][i],
self.tableau[qbit][i + num_qubits],
),
(false, true) => (
// pauli X
self.tableau[qbit + num_qubits][i],
self.tableau[qbit + num_qubits][i + num_qubits],
),
(true, true) => (
// pauli Y
self.tableau[qbit + num_qubits][i] ^ self.tableau[qbit][i],
self.tableau[qbit + num_qubits][i + num_qubits]
^ self.tableau[qbit][i + num_qubits],
),
_ => unreachable!("This is only called for RX/RZ/RY gates."),
Collaborator:

This is not unreachable in the current form. It's a pub function, and calling it with pauli_z=false, pauli_x=false is a valid input.
};
z.set(i, z_bit);
x.set(i, x_bit);

if x_bit {
pauli_indices.push(i);
}
if z_bit {
pauli_indices.push(i + num_qubits);
}
pauli_y_count += (x_bit & z_bit) as u8;
}

let phase_sign = compute_phase_product_pauli(self, &pauli_indices, pauli_y_count);
let evolved_pauli_phase =
(pauli_y_count + 2 * (phase_sign as u8) + 3 * ((pauli_x & pauli_z) as u8)) & 3;
DensePauli {
pauli_x: x,
pauli_z: z,
xz_phase: evolved_pauli_phase,
}
}
}

/// Computes the sign (either +1 or -1) when conjugating a Pauli by a Clifford
/// Compute the sign (either +1 or -1) when conjugating a Pauli by a Clifford.
/// The Pauli is represented using a sparse vector of indices.
/// For efficiency, the number of Y-terms in the Pauli is already available.
fn compute_phase_product_pauli(
clifford: &Clifford,
pauli_indices: &[usize],
pauli_y_count: u32,
pauli_y_count: u8,
) -> bool {
Member Author (on lines 387 to 391):

This question is independent of this PR, but I might as well ask it here. This function is the hot spot for evolving Paulis by Cliffords, and in particular for the LitinskiTransformation pass. I have locally tried multiple ways to reimplement it.

The following version replaces the match statement with a static table lookup:

static PHASE_TABLE: [u8; 16] = [0, 0, 0, 0, 0, 0, 3, 1, 0, 1, 0, 3, 0, 3, 1, 0];
...
let idx = (x1 as u8) | ((z1 as u8) << 1) | ((x as u8) << 2) | ((z as u8) << 3);
ifact += PHASE_TABLE[idx as usize];

It works very well for large, densely populated Cliffords (resulting in about a 5x performance improvement) but is apparently slightly worse than the current implementation on the existing ASV benchmarks.

I have also tried replacing the match or the table lookup with an explicit arithmetic computation, but this was consistently worse than the table lookup on all of the benchmarks.

I am wondering if there is a clever way to vectorize this computation, or any additional ideas.
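The proposed table can be cross-checked against explicit 2x2 Pauli matrix products. The standalone sketch below (all names besides `PHASE_TABLE` are illustrative) verifies one consistent reading of the table: with the index bits taken as `(x1, z1, x, z)`, each entry is the power phi in P(x, z) * P(x1, z1) = i^phi * P(x ^ x1, z ^ z1) for Hermitian Paulis:

```rust
// Standalone cross-check of the proposed PHASE_TABLE against explicit
// 2x2 Pauli matrix products; all names besides PHASE_TABLE are illustrative.

type C = (i32, i32); // Gaussian integer (re, im): Pauli entries lie in {0, ±1, ±i}
type Mat = [[C; 2]; 2];

static PHASE_TABLE: [u8; 16] = [0, 0, 0, 0, 0, 0, 3, 1, 0, 1, 0, 3, 0, 3, 1, 0];

fn cmul(a: C, b: C) -> C {
    (a.0 * b.0 - a.1 * b.1, a.0 * b.1 + a.1 * b.0)
}

/// Hermitian Pauli selected by its (x, z) bits.
fn pauli_matrix(x: bool, z: bool) -> Mat {
    match (x, z) {
        (false, false) => [[(1, 0), (0, 0)], [(0, 0), (1, 0)]],  // I
        (true, false) => [[(0, 0), (1, 0)], [(1, 0), (0, 0)]],   // X
        (false, true) => [[(1, 0), (0, 0)], [(0, 0), (-1, 0)]],  // Z
        (true, true) => [[(0, 0), (0, -1)], [(0, 1), (0, 0)]],   // Y
    }
}

fn mat_mul(a: Mat, b: Mat) -> Mat {
    let mut out = [[(0, 0); 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                let p = cmul(a[i][k], b[k][j]);
                out[i][j] = (out[i][j].0 + p.0, out[i][j].1 + p.1);
            }
        }
    }
    out
}

/// Multiply every entry of `m` by i^k.
fn scale_i(mut m: Mat, k: u8) -> Mat {
    for _ in 0..k {
        for row in m.iter_mut() {
            for e in row.iter_mut() {
                *e = (-e.1, e.0);
            }
        }
    }
    m
}

/// phi such that P(x, z) * P(x1, z1) == i^phi * P(x ^ x1, z ^ z1).
fn phase_via_matrices(x1: bool, z1: bool, x: bool, z: bool) -> u8 {
    let lhs = mat_mul(pauli_matrix(x, z), pauli_matrix(x1, z1));
    let target = pauli_matrix(x ^ x1, z ^ z1);
    (0u8..4).find(|&k| lhs == scale_i(target, k)).unwrap()
}

fn main() {
    for idx in 0..16usize {
        let (x1, z1) = (idx & 1 != 0, idx & 2 != 0);
        let (x, z) = (idx & 4 != 0, idx & 8 != 0);
        assert_eq!(PHASE_TABLE[idx], phase_via_matrices(x1, z1, x, z));
    }
    println!("PHASE_TABLE agrees with explicit Pauli products");
}
```

For instance, index 6 (new term Z multiplied into an accumulated X) encodes X * Z = -i * Y, i.e. phi = 3, matching the table. The `pauli_y_count` bookkeeping in the actual code exists because the tableau's (z, x) bits drop the i in Y = i * X * Z, so that factor is reinstated once per Y-term rather than inside this loop.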

let num_qubits = clifford.num_qubits;

let phase = pauli_indices.iter().fold(false, |acc, &pauli_index| {
acc ^ (clifford.tableau[2 * clifford.num_qubits][pauli_index])
acc ^ (clifford.tableau[2 * num_qubits][pauli_index])
});

let mut ifact: u8 = pauli_y_count as u8 % 4;

for j in 0..clifford.num_qubits {
let mut ifact: u8 = pauli_y_count;
for j in 0..num_qubits {
let mut x = false;
let mut z = false;
let x1_column = &clifford.tableau[j];
let z1_column = &clifford.tableau[j + num_qubits];
for &pauli_index in pauli_indices.iter() {
let x1: bool = clifford.tableau[j][pauli_index];
let z1: bool = clifford.tableau[j + clifford.num_qubits][pauli_index];

let x1: bool = x1_column[pauli_index];
let z1: bool = z1_column[pauli_index];
match (x1, z1, x, z) {
(false, true, true, true)
| (true, false, false, true)
@@ -353,10 +419,9 @@ fn compute_phase_product_pauli(
};
x ^= x1;
z ^= z1;
ifact %= 4;
}
}
(((ifact % 4) >> 1) != 0) ^ phase
(((ifact & 3) >> 1) != 0) ^ phase
}

impl fmt::Debug for Clifford {