Translated by AI
Playing with Qiskit (16) — Quantum Machine Learning Part 2
Purpose
In Playing with Qiskit (15) — Quantum Machine Learning, we handled the classification problem of the Iris dataset using exact expectation value calculations with a statevector simulator.
This time, I would like to handle a case with noise. Since running on an actual device would take too long, I will instead set a noise model on the Aer Estimator. I will also consider error mitigation and try applying T-REx on IBM Quantum.
Simulation with Noise
I will use the content from VQE with Qiskit Aer Primitives.
Imports needed in addition to the previous article's
I will obtain a noise model from FakeManilaV2, which is a fake version of ibmq_manila, and set it in the Aer Estimator.
Preparing an Estimator with Noise
from qiskit import transpile
from qiskit.utils import algorithm_globals
from qiskit.providers.fake_provider import FakeManilaV2
from qiskit_aer.noise import NoiseModel
from qiskit_aer.primitives import Estimator as AerEstimator
I will mimic the settings found in VQE with Qiskit Aer Primitives. I will also check the circuit after transpilation.
seed = 1234
algorithm_globals.random_seed = seed
device = FakeManilaV2()
coupling_map = device.coupling_map
noise_model = NoiseModel.from_backend(device)
noisy_estimator = AerEstimator(
    backend_options={
        'method': 'density_matrix',
        'coupling_map': coupling_map,
        'noise_model': noise_model,
    },
    run_options={'seed': seed, 'shots': 1024},
    transpile_options={'seed_transpiler': seed},
)
transpile(placeholder_circuit, backend=device).draw()

Since the gates have been replaced with native gates, the circuit is deeper than the previous one.

However, no SWAPs have been inserted; it appears to be a straightforward decomposition. That is expected, since I chose ibmq_manila with exactly this in mind.
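To get a feel for why avoiding SWAPs matters, here is a back-of-the-envelope sketch. The CNOT error rate and gate count below are illustrative assumptions, not ibmq_manila's actual calibration: on linear-connectivity hardware each SWAP decomposes into 3 CNOTs, and circuit fidelity decays multiplicatively with every two-qubit gate.

```python
# Rough fidelity estimate: a SWAP on linear hardware costs 3 CNOTs.
# The error rate below is illustrative, not ibmq_manila's calibration.
cx_error = 0.01   # assumed two-qubit (CNOT) error rate
swap_cnots = 3    # one SWAP decomposes into 3 CNOTs

def approx_fidelity(n_cnots: int, p: float = cx_error) -> float:
    """Crude success probability: each CNOT assumed to fail independently."""
    return (1 - p) ** n_cnots

base = approx_fidelity(6)                     # e.g. a 6-CNOT ansatz, no SWAPs
with_swap = approx_fidelity(6 + swap_cnots)   # same ansatz plus one SWAP

print(f'{base=:.4f} {with_swap=:.4f}')
```

Under these assumptions, a single extra SWAP already shaves a few percent off the estimated circuit fidelity, which is why a layout that avoids routing is worth aiming for.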
ibmq_manila Layout
The layout of ibmq_manila is as follows:

Using Qubit 0 through Qubit 3 is expected to give good results: the readout assignment error is largest on Qubit 4, so Qubit 0 through Qubit 3 are relatively better. In addition, the CNOT error is somewhat lower than on other real devices. Given this, one might expect reasonable performance even without error mitigation.
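The qubit-selection reasoning above can be sketched as a small computation. The readout-error values here are made up for the example; the real numbers come from the backend's calibration data.

```python
import numpy as np

# Hypothetical readout assignment errors per qubit (illustrative values;
# the real figures come from the backend's calibration data).
readout_error = np.array([0.02, 0.03, 0.02, 0.03, 0.08])  # Qubit 4 worst

n_needed = 4
best = np.argsort(readout_error)[:n_needed]  # pick the n lowest-error qubits
print(sorted(best.tolist()))
```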
Checking the Accuracy of Expectation Value Calculations
Let's try calculating the expectation value with some appropriate initial values.
length = make_ansatz(n_qubits, dry_run=True)
np.random.seed(10)
init = np.random.random(length) * 2*math.pi
qc = placeholder_circuit.bind_parameters(X_train[0].tolist() + init.tolist())
estimator = Estimator()  # qiskit.primitives Estimator: exact, noiseless
ideal_expval = estimator.run([qc], [hamiltonian]).result().values[0]
print(f'{ideal_expval=}')
noisy_expval = noisy_estimator.run([qc], [hamiltonian]).result().values[0]
print(f'{noisy_expval=}')
ideal_expval=0.41827236579867355
noisy_expval=0.361328125
It is hard to say whether this is good or bad, but the agreement is probably not that great.
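As a rough sanity check, we can compare the ideal-noisy gap with the pure shot-noise standard error at 1024 shots. This assumes, for simplicity, an observable with ±1 eigenvalues, which the actual Hamiltonian here may not be, so treat it as an order-of-magnitude estimate only.

```python
import math

shots = 1024
ideal_expval = 0.41827236579867355
noisy_expval = 0.361328125

# For an observable with eigenvalues +/-1, the single-shot variance is
# 1 - <O>^2, so the standard error over N shots is sqrt((1 - <O>^2) / N).
std_err = math.sqrt((1 - noisy_expval**2) / shots)
gap = abs(ideal_expval - noisy_expval)
print(f'{std_err=:.4f} {gap=:.4f} gap/std_err={gap / std_err:.1f}')
```

Under this crude model the gap is roughly two standard errors, so part of the deviation could be plain statistical fluctuation rather than noise-model bias.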
Experiment
I will perform the experiment by training without expecting too much. I will reuse the implementation from Playing with Qiskit (15) — Quantum Machine Learning and only replace the estimator with the noisy one defined this time.
%%time
length = make_ansatz(n_qubits, dry_run=True)
placeholder_circuit = make_placeholder_circuit(n_qubits)
np.random.seed(10)
init = np.random.random(length) * 2*math.pi
opt_params, loss_list = RunPQCTrain(
    trainset, 32,
    placeholder_circuit, hamiltonian, init=init, estimator=noisy_estimator,
    epochs=100, interval=500)
Looking at the cost values, they are as follows, which is not too bad. It almost makes me feel like there might be a mistake in the implementation.

Also, the test accuracy was 0.9. Since the problem setting is very simple, it might be expected, but it resulted in an accuracy comparable to the ideal simulation.
Note that the experiment took about 8 minutes. This is about four times longer than the ideal simulation.
Applying Error Mitigation on IBM Quantum
Since I have the opportunity, I also want to know the results when error mitigation is applied. My expectations were as follows:
- Ideal test accuracy: 0.9
- Test accuracy with noise simulation (no error mitigation): 0.9
Given this, I anticipate a rather unexciting result, but I expect the test accuracy to be around 0.9 even when error mitigation is applied on IBM Quantum.
Comparing Time per Error Mitigation
I will look at the values and the time taken when calculating the expectation value with a single circuit using each error mitigation method.
| Method | Expectation Value | Calculation Time |
|---|---|---|
| Statevector (Ideal) | 0.418 | 17.6 ms |
| T-REx | 0.400 | 3.11 s |
| ZNE | 0.392 | 4.38 s |
| PEC | 0.427 | 3min 38s |
While I would love to use PEC, it takes far too long. This time, I've decided to use T-REx.
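The slowdown factors implicit in the table above can be computed directly:

```python
# Slowdown of each error mitigation method relative to the ideal
# statevector run, using the timings from the table above.
timings = {
    'Statevector (Ideal)': 0.0176,  # 17.6 ms
    'T-REx': 3.11,
    'ZNE': 4.38,
    'PEC': 3 * 60 + 38,             # 3 min 38 s
}

base = timings['Statevector (Ideal)']
slowdown = {name: t / base for name, t in timings.items()}
for name, factor in slowdown.items():
    print(f'{name}: x{factor:.0f}')
```

T-REx comes out to roughly 177× the ideal run, while PEC is over 12,000× — hence the choice of T-REx.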
Importing Necessary Modules
When you create a Jupyter notebook on IBM Quantum, it fortunately starts with some relevant boilerplate code, so I'll gratefully reuse it[1].
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
# qiskit-ibmq-provider has been deprecated.
# Please see the Migration Guides in https://ibm.biz/provider_migration_guide for more detail.
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler, Estimator, Session, Options
# Loading your IBM Quantum account(s)
service = QiskitRuntimeService(channel="ibm_quantum")
# Invoke a primitive. For more details see https://qiskit.org/documentation/partners/qiskit_ibm_runtime/tutorials.html
# result = Sampler("ibmq_qasm_simulator").run(circuits).result()
The important point here is that the Estimator class is imported from qiskit_ibm_runtime, not from Qiskit Terra or Aer. The identical names are confusing, but it means we will be using the Qiskit Runtime service.
A detailed explanation of what Qiskit Runtime is can be found in Learning Qiskit Runtime from Scratch [IBM Quantum Challenge]. It is likely a service comparable to Amazon Braket Hybrid Jobs, which is explained in Making quantum computers easier to use. Breaking down what the new "Amazon Braket Hybrid Jobs" service does as much as possible.
Performing Expectation Value Calculations with Qiskit Runtime Estimator Primitive
Since calculating on an actual device would likely take an enormous amount of time, I will use ibmq_qasm_simulator to perform simulations with a noise model derived from ibmq_manila.
Referring to Error suppression and error mitigation with Qiskit Runtime, I will apply error mitigation with T-REx.
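T-REx itself mitigates readout errors via Pauli twirling. As a much simpler classical analogue (not the actual T-REx algorithm), readout error can be modeled as a confusion matrix acting on the outcome probabilities and undone by inverting it. The error rates below are illustrative.

```python
import numpy as np

# Toy model of readout-error mitigation (NOT the actual T-REx algorithm,
# which uses Pauli twirling): model readout as a confusion matrix A with
# A[i, j] = P(measure i | prepared j), then invert it.
p01, p10 = 0.02, 0.05   # illustrative assignment error rates
A = np.array([[1 - p10, p01],
              [p10, 1 - p01]])

true_probs = np.array([0.7, 0.3])
measured = A @ true_probs                  # what the noisy readout reports
mitigated = np.linalg.solve(A, measured)   # undo the confusion matrix

print(measured, mitigated)
```

Inverting the full confusion matrix scales exponentially with qubit count; twirling-based schemes like T-REx avoid that cost, which is one reason they are practical.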
I'm not quite sure how much should be wrapped in the session, but for now, I've wrapped the area around the expectation value calculation.
backend_simulator = 'ibmq_qasm_simulator'
noisy_backend = service.get_backend('ibmq_manila')
backend_noise_model = NoiseModel.from_backend(noisy_backend)
options = Options()
options.resilience_level = 1 # T-REx
options.optimization_level = 0 # no optimization
options.simulator = {
    'noise_model': backend_noise_model
}
import pickle
import sys
from collections.abc import Sequence

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

from qiskit import QuantumCircuit
from qiskit.quantum_info.operators.base_operator import BaseOperator
from qiskit_machine_learning.neural_networks import EstimatorQNN

class PQCTrainerEstimatorQnn:
    def __init__(self,
                 qc: QuantumCircuit,
                 initial_point: Sequence[float],
                 optimizer: optimizers.Optimizer
    ):
        self.qc_pl = qc  # placeholder circuit
        self.initial_point = np.array(initial_point)
        self.optimizer = optimizer

    def fit(self,
            dataset: Dataset,
            batch_size: int,
            operator: BaseOperator,
            callbacks: list | None = None,
            epochs: int = 1
    ):
        dataloader = DataLoader(dataset, batch_size, shuffle=True, drop_last=True)
        callbacks = callbacks if callbacks is not None else []

        opt_loss = sys.maxsize
        opt_params = None
        params = self.initial_point.copy()
        n_qubits = self.qc_pl.num_qubits

        for epoch in range(epochs):
            for batch, label in dataloader:
                batch, label = self._preprocess_batch(batch, label)
                label = label.reshape(label.shape[0], -1)

                with Session(service=service, backend=backend_simulator) as session:
                    estimator = Estimator(session=session, options=options)
                    qnn = EstimatorQNN(
                        circuit=self.qc_pl, estimator=estimator,
                        observables=operator,
                        input_params=self.qc_pl.parameters[:n_qubits],
                        weight_params=self.qc_pl.parameters[n_qubits:]
                    )
                    expvals = qnn.forward(input_data=batch, weights=params)
                    _, grads = qnn.backward(input_data=batch, weights=params)
                    grads = np.squeeze(grads, axis=1)

                total_loss = np.mean((expvals - label)**2)
                total_grads = np.mean((expvals - label) * grads, axis=0)

                if total_loss < opt_loss:
                    opt_params = params.copy()
                    opt_loss = total_loss
                    with open('opt_params_iris.pkl', 'wb') as fout:
                        pickle.dump(opt_params, fout)

                # update params
                self.optimizer.update(params, total_grads)

                for callback in callbacks:
                    callback(total_loss, params)

        return opt_params

    def _preprocess_batch(self,
                          batch: torch.Tensor,
                          label: torch.Tensor
    ) -> tuple[np.ndarray, np.ndarray]:
        batch = batch.detach().numpy()
        label = label.detach().numpy()
        return batch, label
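One detail worth checking in fit() above: total_grads = mean((expvals - label) * grads) is half the exact gradient of the MSE loss, the factor of 2 presumably being absorbed into the learning rate. A quick finite-difference check with a classical stand-in for the QNN (a toy cosine model, not the actual circuit) confirms the relationship:

```python
import numpy as np

# Sanity check of the MSE gradient used in fit(), with a classical
# stand-in f(x, theta) = cos(theta * x) playing the role of the QNN
# expectation value (toy model, not the actual circuit).
def f(x, theta):
    return np.cos(theta * x)

def df(x, theta):
    return -x * np.sin(theta * x)

x = np.array([0.1, 0.5, 0.9])
y = np.array([1.0, 0.0, -1.0])
theta = 0.7

# Gradient as computed in the training loop (factor of 2 dropped):
half_grad = np.mean((f(x, theta) - y) * df(x, theta))

# Finite-difference gradient of L = mean((f - y)^2):
eps = 1e-6
loss = lambda t: np.mean((f(x, t) - y) ** 2)
fd_grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)

print(fd_grad, 2 * half_grad)  # these should agree
```

Dropping a constant factor is harmless for gradient descent, but it matters if the learning rate was tuned against the exact gradient.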
The remaining code is reused from Playing with Qiskit (15) — Quantum Machine Learning.
Experiment
...
opt_params, loss_list = RunPQCTrain(
    trainset, 32,
    placeholder_circuit, hamiltonian, init=init,
    epochs=100, interval=500)
print(f'final loss={loss_list[-1]}')
print(f'{opt_params=}')
loss=1.7326727277662126
loss=1.5162304085470835
...
loss=0.2487620141326955
loss=0.33536750883215183
final loss=0.33536750883215183
opt_params=array([ 3.85653708, -0.66181524, 4.78169959, 5.41047038, 2.17274121,
2.17786779, 1.08354446])
CPU times: user 3min 36s, sys: 3.11 s, total: 3min 39s
Wall time: 2h 9min 35s
In the end, it took a very long time. Since the ideal run takes about 2 minutes with Terra's statevector simulator, this is about 65 times longer. Based on the time for a single error-mitigated run, I initially feared a 176× slowdown, but it came out at about half that, which was a relief.
The test accuracy was 0.9 as expected, and the cost value progression was as follows. It is not particularly surprising as it is similar to other cases... but I was glad that I obtained results exactly as expected, even if it took a tremendous amount of time.

Summary
The problem setup was so simple that there was not much difference between the results of the noisy simulation and the simulation with error mitigation.
Even so, I realized that experiments of this scale take quite a bit of time.
The results suggest that the real ibmq_manila would likely yield similar results, but with a queue of about 50 jobs today, I don't feel like trying it on the actual device.
[1] I see the unpleasant message "qiskit-ibmq-provider has been deprecated.", but I'll ignore it for now. For now... well, I'll have to migrate again eventually. ↩︎