
sri-international / qc-app-oriented-benchmarks


QED-C: The Quantum Economic Development Consortium provides these computer programs and software for use in the fields of quantum science and engineering.

License: Apache License 2.0

Python 27.75% Dockerfile 0.18% Shell 0.02% Jupyter Notebook 72.05%

qc-app-oriented-benchmarks's Introduction

Application-Oriented Performance Benchmarks for Quantum Computing

This repository contains a collection of prototypical application- or algorithm-centric benchmark programs designed for the purpose of characterizing the end-user perception of the performance of current-generation Quantum Computers.

The repository is maintained by members of the Quantum Economic Development Consortium (QED-C) Technical Advisory Committee on Standards and Performance Metrics (Standards TAC).

Important Note -- The examples maintained in this repository are not intended to be viewed as "performance standards". Rather, they are offered as simple "prototypes", designed to make it as easy as possible for users to execute simple "reference applications" across multiple quantum computing APIs and platforms. The application / algorithmic examples are structured using a uniform pattern for defining circuits, executing across different platforms, collecting results, and measuring performance and fidelity in useful ways.

A variety of "reference applications" are provided. At the current stage in the evolution of quantum computing hardware, some applications will perform better on one hardware target, while a completely different set may execute better on another target. These examples are designed to provide users with a quantum "jump start", so to speak, eliminating the need for each user to develop their own uniform code patterns that facilitate quick development, deployment, and experimentation.

The QED-C committee that developed these benchmarks released a paper (Oct 2021) describing the theory and methodology supporting this work at

    Application-Oriented Performance Benchmarks for Quantum Computing

The QED-C committee released a second paper (Feb 2023) describing the addition of combinatorial optimization problems as advanced application-oriented benchmarks at

    Optimization Applications as Quantum Performance Benchmarks

More recently, the group released another paper (Feb 2024) with additional benchmark programs and improvements to the framework at

    Quantum Algorithm Exploration using Application-Oriented Performance Benchmarks

See the Implementation Status section below for the latest report on benchmarks implemented to date.

Notes on Repository Organization

The repository is organized at the highest level by specific reference application names. There is a directory for each application or algorithmic example, e.g. quantum-fourier-transform, which contains the bulk of code for that application.

Within each application directory, there is a second-level directory, one for each of the target programming environments that are supported. The repository is organized in this way to emphasize the application first and the target environment second, to encourage full support across platforms.

The directory names and the currently supported environments are:

    qiskit      -- IBM Qiskit
    cirq        -- Google Cirq
    braket      -- Amazon Braket
    ocean       -- D-Wave Ocean

The goal has been to make the implementation of each algorithm identical across the different target environments, with the processing and reporting of results as similar as possible. Each application directory includes a README file with information specific to that application or algorithm. Below we list the benchmarks we have implemented in a suggested order of approach; the benchmarks in levels 1 and 2 are simpler and a good place to start for beginners, while levels 3 and 4 are more complicated and build on intuition and reasoning developed in the earlier algorithms. Level 5 includes newly released benchmarks based on iterative execution within hybrid algorithms.

Complexity of Benchmark Algorithms (Increasing Difficulty)

    1: Deutsch-Jozsa, Bernstein-Vazirani, Hidden Shift
    2: Quantum Fourier Transform, Grover's Search
    3: Phase Estimation, Amplitude Estimation, HHL Linear Solver
    4: Monte Carlo, Hamiltonian Simulation, Variational Quantum Eigensolver, Shor's Order Finding
    5: MaxCut, Hydrogen-Lattice

In addition to the application directories at the highest level, there are several other directories or files with specific purposes:

    _common                      -- collection of shared routines, used by all the application examples
    _doc                         -- detailed DESIGN_NOTES, and other reference materials
    _containerbuildfiles         -- build files and instructions for creating Docker images (optional)
    _setup                       -- information on setting up all environments
    
    benchmarks-*.ipynb           -- Jupyter Notebooks convenient for executing the benchmarks

Setup and Configuration

The prototype benchmark applications are easy to run and contain few dependencies. The primary dependency is on the Python packages needed for the target environment in which you would like to execute the examples.

In the Preparing to Run Benchmarks section (the _setup folder) you will find a subdirectory for each of the target environments, each containing a README with everything you need to know to install and configure the specific environment in which you would like to run.

Important Note:

The suite of application benchmarks is configured by default to run on the simulators
that are typically included with the quantum programming environments.
Certain program parameters, such as the maximum number of qubits, the number of circuits
to execute for each qubit width, and the number of shots, default to values that
run easily on the simulators.

However, when running on hardware, it is important to reduce these values to account
for the capabilities of the machine on which you are executing. This is especially
important for systems on which one could incur high billing costs when running large circuits.
See the above link to the _setup folder for more information about each programming environment.

Executing the Application Benchmark Programs from a Shell Window

The benchmark programs may be run manually in a command shell. In a command window or shell, change the directory to the application you would like to execute. Then, simply execute a line similar to the following, to begin the execution of the main program for the application:

    cd bernstein-vazirani/qiskit
    python bv_benchmark.py

This will run the program, construct and execute multiple circuits, analyze results, and produce a set of bar charts to report on the results. The program executes random circuits constructed for a specific number of qubits, in a loop that ranges from min_qubits to max_qubits (with default values that can be passed as parameters). The number of random circuits generated for each qubit size can be controlled by the max_circuits parameter.
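The same parameters can also be passed programmatically. As a minimal sketch (the parameter names are taken from the run() signatures that appear in the issues later in this document, and are assumed to apply to this benchmark as well):

    # Sketch: invoke a benchmark from Python instead of the shell.
    import bv_benchmark
    bv_benchmark.run(min_qubits=3, max_qubits=8, max_circuits=3, num_shots=100)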

As each benchmark program is executed, you should see output that looks like the following, showing the average circuit creation and execution time along with a measure of the quality of the result, for each circuit width executed by the benchmark program:

Sample Output

Executing the Application Benchmark Programs in a Jupyter Notebook

Alternatively, you may use the Jupyter Notebook templates that are provided in this repository. There is one template file provided for each of the API environments supported.

In the top level of this repository, start your jupyter-notebook process. When the browser listing appears, select the desired notebook .ipynb file to launch the notebook. There you will have access to a cell for each of the benchmarks in the repository, and may "Run" any one of them independently and see the results presented there.

Some benchmarks, such as Max-Cut and Hydrogen-Lattice, include a notebook for running advanced tests, specifically the iterative execution of interleaved classical/quantum code for a hybrid algorithm. See the instructions in the README for those benchmarks for procedures and options that are available.

Executing the Application Benchmark Programs via the Qiskit Runner (Qiskit Environment only)

It is also possible to run the benchmarks from the top-level directory in a generalized way on the command line, using the Qiskit Runner (see Qiskit_Runner).

Enabling Compiler Optimizations

There is support provided within the Jupyter Notebook for the Qiskit versions of the benchmarks to enable certain compiler optimizations. In the first cell of the notebook, there is a variable called exec_options where several of the built-in Qiskit compiler optimizations may be specified.

The second cell of the Jupyter Notebook contains commented code with references to custom-coded Qiskit compiler optimizations as well as some third-party optimization tools. Simply uncomment the desired optimizations and rerun the notebook to enable the optimization method. The custom code for these optimizations is located in the _common/transformers directory. Users may define their own custom optimizations within this directory and reference them from the notebook.
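As a purely hypothetical illustration (the exact keys accepted by exec_options are defined in _common/qiskit/execute.py; optimization_level is a standard Qiskit transpiler setting and is assumed here to be one of the supported keys):

    # Hypothetical illustration only -- consult _common/qiskit/execute.py for
    # the keys that exec_options actually accepts.
    exec_options = { "optimization_level": 3 }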

Container Deployment of the Application Benchmark Programs

Applications are often deployed into Container Management Frameworks such as Docker, Kubernetes, and the like.

The Prototype Benchmarks repository includes support for the creation of a unique 'container image' for each of the supported API environments. You can find the instructions and all the necessary build files in a folder at the top level named _containerbuildfiles. The benchmark program image can be deployed into a container management framework and executed as any other application in that framework.

Once built, deployed, and launched, the container process invokes a Jupyter Notebook from which you can run all the available benchmarks.

Interpreting Metrics

  • Creation Time: time spent on the classical machine creating the circuit and transpiling it.
  • Execution Time: time spent on the quantum simulator or hardware backend running the circuit. This includes only the time the algorithm is actually running and excludes any time spent waiting in a queue on Qiskit and Cirq. Braket does not currently report execution time separately, so its value does include queue time.
  • Fidelity: a measure of how well the simulator or hardware runs a particular benchmark, on a scale from 0 to 1, with 0 being a completely useless result and 1 being perfect execution of the algorithm. The math behind the fidelity calculation is outlined in the file _doc/POLARIZATION_FIDELITY.md; a brief sketch follows this list.
  • Circuit/Transpiled Depth: number of layers of gates needed to apply a particular algorithm. The circuit depth is the depth if all of the gates used for the algorithm were native, while the transpiled depth is the depth when only certain gates are allowed; we default to ['rx', 'ry', 'rz', 'cx']. Note: this gate set is used only to provide a normalized transpiled depth across all hardware and simulator platforms; we separately transpile to the native gate set of the hardware. The depth can help explain why one algorithm is harder to run than another at the same circuit width. This metric is currently available only in the Qiskit implementations of the algorithms.
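As a brief sketch of the fidelity rescaling (see _doc/POLARIZATION_FIDELITY.md for the authoritative derivation; the form below is consistent with the lower bound quoted in the "polarization fidelity is not a valid comparator" issue later in this document):

    # Sketch only -- see _doc/POLARIZATION_FIDELITY.md for the full math.
    # f_raw is a raw fidelity in [0, 1] comparing the measured counts to the
    # ideal distribution; the rescaling removes the credit that a uniformly
    # random result would receive on n_qubits qubits.
    def polarization_fidelity(f_raw, n_qubits):
        d = 2 ** n_qubits
        return (f_raw - 1.0 / d) / (1.0 - 1.0 / d)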

Implementation Status

Below is a table showing the degree to which the benchmarks have been implemented in each of the target platforms (as of the last update to this branch):

Prototype Benchmarks - Implementation Status

qc-app-oriented-benchmarks's People

Contributors

ccoffrin, hammamq, jake321southall, japanavi, jerrygamble1, jjgoings, karl-mayer, khmayer01, ninjab3381, nithingovindugari, pratiksathe, probvar, qci-amos, rht, rryoung98, rtvuser1, smaity-qctrl, sonikaj, themehtaphysical, toddytharmonkey, toddythemonkey, vprusso


qc-app-oriented-benchmarks's Issues

Add QCi Qatalyst support and maxcut benchmark

QCi offers access to its devices through Qatalyst. The API supports minimization of QUBOs and Ising Hamiltonians. The maxcut benchmark would be a good one to implement.
Starting with the code for ocean provides a quick path to implementing this. The package qci-client on PyPI is a wrapper for the Qatalyst REST API.

Bug: Max Qubit number limit Shor (2)

When running Shor method 2 with >17Q, I get the error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[18], line 4
      2 sys.path.insert(1, "shors/qiskit")
      3 import shors_benchmark
----> 4 shors_benchmark.run(min_qubits=min_qubits, max_qubits=max_qubits, max_circuits=1, num_shots=num_shots,
      5                 method=2,
      6                 backend_id=backend_id, provider_backend=provider_backend,
      7                 hub=hub, group=group, project=project, exec_options=exec_options)

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\shors/qiskit\shors_benchmark.py:425, in run(min_qubits, max_circuits, max_qubits, num_shots, method, verbose, backend_id, provider_backend, hub, group, project, exec_options, context)
    423 # create the circuit for given qubit size and order, store time metric
    424 ts = time.time()
--> 425 qc = ShorsAlgorithm(number, base, method=method, verbose=verbose)
    426 metrics.store_metric(num_qubits, number_order, 'create_time', time.time()-ts)
    428 # collapse the 4 sub-circuit levels used in this benchmark (for qiskit)

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\shors/qiskit\shors_benchmark.py:249, in ShorsAlgorithm(number, base, method, verbose)
    246 qc.x(qr_counting).c_if(cr_aux,1)
    247 qc.h(qr_counting)
--> 249 cUa_gate = controlled_Ua(n, base,2**(2*n-1-k), number)
    251 # Create relevant temporary qubit list
    252 qubits = [qr_counting[0]]; qubits.extend([i for i in qr_mult]);qubits.extend([i for i in qr_aux])

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\shors/qiskit\shors_benchmark.py:153, in controlled_Ua(n, a, exponent, N)
    151 qr_main = QuantumRegister(n)
    152 qr_ancilla = QuantumRegister(2)
--> 153 qc = QuantumCircuit(qr_ctl, qr_x, qr_main,qr_ancilla, name = f"C-U^{a**exponent}")
    155 # Generate Gates
    156 a_inv = modinv(a**exponent,N)

ValueError: Exceeds the limit (4300) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit

hamiltonian-simulation is throwing an error

Hi, I tried running the hamiltonian-simulation code in PyCharm. I am getting the error shown below:
(screenshot of the error not reproduced here)
The directory structure of my program looks as below:
(screenshot of the directory structure not reproduced here)
Please suggest how to resolve this error.

Bug: benchmarks-ocean-add-maxcut.ipynb notebook

Hi,

I've set up an ocean environment as specified by the ocean setup readme, with python version 3.9.7. In the first code cell of benchmarks-ocean-add-maxcut.ipynb, I comment out the D-Wave provider code but leave the simulated annealing sampler code uncommented. I can then run this cell fine. However, when I run the code block under Maxcut - Method 2, I get a value error coming from calling np.max(sizes), where sizes is an empty array. Do you know how I can fix this, please? I know 'from neal import SimulatedAnnealingSampler' is now deprecated, but I get the same error with 'from dwave.samplers import SimulatedAnnealingSampler' too.

[Bug] Incorrect constant for `s_int` in `hhl_benchmark.py`

Hi, while I was trying out the Qiskit HHL benchmark, I encountered a minor bug:
(I was executing benchmarks-qiskit-add-hhl.ipynb, without modification.)

Bug

At line 785 of hhl_benchmark.py, there is a variable s_int defined as s_int = 1000 * (i+1) + (2**off_diag_index)*(3**b); at line 603 (same file), s_int is recovered as s_int = s_int - 1000 * int(s_int/1000). However, when (2**off_diag_index)*(3**b) becomes greater than 1000, the calculated b for true_distr() would be wrong (since anything over a thousand is truncated). See Fig. 1. Therefore the resulting fidelity is not correct either. See Fig. 2.

Fig. 1: (screenshot not reproduced here)

Fig. 2: (screenshot not reproduced here)

A possible solution

An easy but temporary solution is to replace 1000 with a larger integer, say int(1e10).
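A small standalone illustration of the truncation (the constants are copied from the lines quoted above; the variable values are made up for demonstration):

off_diag_index, b, i = 4, 5, 0                 # (2**4)*(3**5) = 3888 > 1000
low = (2**off_diag_index) * (3**b)

s_int = 1000 * (i + 1) + low                   # encoding (line 785)
recovered = s_int - 1000 * int(s_int / 1000)   # decoding (line 603)
print(low, recovered)                          # 3888 vs. 888 -- b is lost

BIG = int(1e10)                                # proposed larger constant
s_int2 = BIG * (i + 1) + low
print(s_int2 - BIG * int(s_int2 / BIG))        # 3888 -- round trip is exact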

Question

If this isn't a bug, is there any reason why 1000 is set? ((2**off_diag_index)*(3**b) can become greater than 1000 easily.)

Thank you very much!

Support for Quantinuum within QED-C benchmarking suite

As of a recent fix based on this issue on the qiskit-quantinuum-provider project, one can now invoke a backend instance of a Quantinuum device via the Qiskit provider interface (as is done for the other supported backends within QED-C).

As a proof of concept (assuming one has access to a Quantinuum device), doing the following:

    import os
    from qiskit import QuantumCircuit, execute
    from qiskit_quantinuum import Quantinuum

    Quantinuum.save_account(os.environ.get("QUANTINUUM_USERNAME"))

    backends = Quantinuum.backends()
    backend = Quantinuum.get_backend("H1-2E")

    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])
    result = execute(qc, backend).result()
    print(result.get_counts())

yields the following output:

Your id token is expired. Refreshing...
{'11': 502, '0': 512, '1': 4, '10': 6}

indicating a successful run on the device.

Now, presumably, one can run one of the QED-C benchmarking algorithms via the newly obtained provider functionality. Taking the quantum-fourier-transform Qiskit benchmark as an example, adding the following snippet to the bottom of this file:

if __name__ == "__main__":
    import os
    from qiskit_quantinuum import Quantinuum
    Quantinuum.save_account(os.environ.get("QUANTINUUM_USERNAME"))

    backends = Quantinuum.backends()
    backend = Quantinuum.get_backend("H1-2E")
    run(provider_backend=backend)

supplies the QFT benchmark with the custom Quantinuum Qiskit provider. However, running:

python qft_benchmark.py

with this addition appears to fail:

Quantum Fourier Transform Benchmark Program - Qiskit
... using circuit method 1
... execution starting at Jun 07, 2023 17:22:56 UTC
************
Executing [3] circuits with num_qubits = 2
... number of gates, depth = 11, 8
Traceback (most recent call last):
  File "/Users/vincent.russo/Projects/research/unitary_fund/metriq-api/benchmark/benchmark/QC-App-Oriented-Benchmarks/quantum-fourier-transform/qiskit/qft_benchmark.py", line 410, in <module>
    print(run(provider_backend=backend, min_qubits=min_qubits, max_qubits=max_qubits, num_shots=num_shots))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/vincent.russo/Projects/research/unitary_fund/metriq-api/benchmark/benchmark/QC-App-Oriented-Benchmarks/quantum-fourier-transform/qiskit/qft_benchmark.py", line 330, in run
    ex.throttle_execution(metrics.finalize_group)
  File "/Users/vincent.russo/Projects/research/unitary_fund/metriq-api/benchmark/benchmark/QC-App-Oriented-Benchmarks/quantum-fourier-transform/qiskit/../../_common/qiskit/execute.py", line 922, in throttle_execution
    check_jobs(completion_handler)
  File "/Users/vincent.russo/Projects/research/unitary_fund/metriq-api/benchmark/benchmark/QC-App-Oriented-Benchmarks/quantum-fourier-transform/qiskit/../../_common/qiskit/execute.py", line 1039, in check_jobs
    job_complete(job)
  File "/Users/vincent.russo/Projects/research/unitary_fund/metriq-api/benchmark/benchmark/QC-App-Oriented-Benchmarks/quantum-fourier-transform/qiskit/../../_common/qiskit/execute.py", line 741, in job_complete
    if job.status() == JobStatus.DONE:
       ^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/qiskit_quantinuum/quantinuumjob.py", line 313, in status
    self._result = self._process_results()
                   ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/qiskit_quantinuum/quantinuumjob.py", line 337, in _process_results
    status = res_resp.get('status', 'failed')
             ^^^^^^^^^^^^
AttributeError: 'JobStatus' object has no attribute 'get'

Am I making use of the provider_backend argument correctly? Based on the initial code example we know that we can successfully connect to and run a simple circuit on the Quantinuum device, so presumably this is failing because either I am using this incorrectly or because the QFT circuit is more involved and requires some type of transpilation for the Quantinuum device, or something else.

Any obvious reason as to why this is presently failing?

@rtvuser1 @WrathfulSpatula @nathanshammah @ninjab3381

(AE/MC) Controlled Circuits in Braket

To implement AE/MC, we need to implement a controlled-A operation, where A is a potentially complicated quantum circuit. Currently, I am unaware of a circ.controlled() operation like there is in qiskit and cirq. If it exists, it is quite hidden and wasn't used by the braket grover's or qpe braket example notebooks.

There are two potential paths I've thought of to implement this:

  1. Following the method outlined in the braket qpe notebook, which creates the general controlled operation by transforming the sequence of gates to be controlled into a unitary matrix. Tom suggested that this may not work on hardware.
  2. Duplicating in braket the qiskit or cirq functions used for creating general controlled gates/circuits. This functionality is primarily defined in qiskit here

After discussing this with Tom, we are putting this issue on the back burner, and hope to be able to address it either with Braket's continued development or when we have more time to implement these more complicated methods. We found out from Sashwat, who is interning with QSecure, that Braket will potentially be adding this feature "soon," so we might be able to address this once Braket adds it. (We talked with Sashwat around July 21st, 2021.)

Generating invalid expected distribution

In this PR, it's found that it's possible for the maxcut benchmark to produce expected distributions with norm=0. At line 139 of get_expectation, there is a step:

# scale to number of shots
for k, v in counts.items():
    counts[k] = round(v * num_shots)

Correct me if I'm wrong, but this is used to compare the results against a discrete approximation to the theoretical distribution, possibly so as not to penalize a result list that contains no counts for bitstrings with very small probability mass (where you would expect 0 appearances at the given shot count), and also simply to peg an integer number of results for each bitstring as ideal. This means that, for the case you describe, the only way to run the problem instance without throwing the error is to increase the number of shots until the discretized expected distribution has at least one nonzero element. I worry this has its own issues, because a significant distortion between the original distribution and the discretized distribution distorts the fidelity calculation itself. You could conceivably be comparing the results to a discrete distribution with wacky finite-size effects that make it look very different from the distribution an ideal quantum computer would sample from. This should only happen when the theoretical distribution is very wide and mostly, but not perfectly, flat (I think), but it's worth considering.

Just wanted to share these thoughts... I don't think we use this kind of step in other benchmarks? It seems odd that we'd calculate fidelity against discretized distributions for some benchmarks and continuous exact distributions for others. Maybe we should perform a check that something like this doesn't happen in maxcut_benchmark.py, or pass the exact distribution even if the fidelity moderately underperforms at low shot counts? (A sketch of such a check follows.)
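One possible form of that check (a sketch only, not the repository's actual fix): fall back to the exact distribution whenever the rounding zeroes out every entry.

# Sketch of a guard against an all-zero discretized distribution.
scaled = {k: round(v * num_shots) for k, v in counts.items()}
if sum(scaled.values()) == 0:
    scaled = dict(counts)   # keep the exact probabilities instead
counts = scaled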

Deutsch-Jozsa Benchmarking Test is throwing an Error

Hi, I tried running the below code from your Deutsch-Jozsa algorithm:

"""
Deutsch-Jozsa Benchmark Program - Qiskit
"""

import sys
import time

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

sys.path[1:1] = ["_common", "_common/qiskit"]
sys.path[1:1] = ["../../_common", "../../_common/qiskit"]
import execute as ex
import metrics as metrics

np.random.seed(0)

verbose = False

# saved circuits for display
QC_ = None
C_ORACLE_ = None
B_ORACLE_ = None


############### Circuit Definition

# Create a constant oracle, appending gates to given circuit
def constant_oracle(input_size, num_qubits):
    # Initialize first n qubits and single ancilla qubit
    qc = QuantumCircuit(num_qubits, name=f"Uf")

    output = np.random.randint(2)
    if output == 1:
        qc.x(input_size)

    global C_ORACLE_
    if C_ORACLE_ == None or num_qubits <= 6:
        if num_qubits < 9: C_ORACLE_ = qc

    return qc


# Create a balanced oracle.
# Perform CNOTs with each input qubit as a control and the output bit as the target.
# Vary the input states that give 0 or 1 by wrapping some of the controls in X-gates.
def balanced_oracle(input_size, num_qubits):
    # Initialize first n qubits and single ancilla qubit
    qc = QuantumCircuit(num_qubits, name=f"Uf")

    b_str = "10101010101010101010"  # permit input_string up to 20 chars
    for qubit in range(input_size):
        if b_str[qubit] == '1':
            qc.x(qubit)

    qc.barrier()

    for qubit in range(input_size):
        qc.cx(qubit, input_size)

    qc.barrier()

    for qubit in range(input_size):
        if b_str[qubit] == '1':
            qc.x(qubit)

    global B_ORACLE_
    if B_ORACLE_ == None or num_qubits <= 6:
        if num_qubits < 9: B_ORACLE_ = qc

    return qc


# Create benchmark circuit
def DeutschJozsa(num_qubits, type):
    # Size of input is one less than available qubits
    input_size = num_qubits - 1

    # allocate qubits
    qr = QuantumRegister(num_qubits);
    cr = ClassicalRegister(input_size);
    qc = QuantumCircuit(qr, cr, name="main")

    for qubit in range(input_size):
        qc.h(qubit)
    qc.x(input_size)
    qc.h(input_size)

    qc.barrier()

    # Add a constant or balanced oracle function
    if type == 0:
        Uf = constant_oracle(input_size, num_qubits)
    else:
        Uf = balanced_oracle(input_size, num_qubits)
    qc.append(Uf, qr)

    qc.barrier()

    for qubit in range(num_qubits):
        qc.h(qubit)

    # uncompute ancilla qubit, not necessary for algorithm
    qc.x(input_size)

    qc.barrier()

    for i in range(input_size):
        qc.measure(i, i)

    # save smaller circuit and oracle subcircuit example for display
    global QC_
    if QC_ == None or num_qubits <= 6:
        if num_qubits < 9: QC_ = qc

    # return a handle to the circuit
    return qc


############### Result Data Analysis

# Analyze and print measured results
# Expected result is always the type, so fidelity calc is simple
def analyze_and_print_result(qc, result, num_qubits, type, num_shots):
    # Size of input is one less than available qubits
    input_size = num_qubits - 1

    # obtain counts from the result object
    counts = result.get_counts(qc)
    if verbose: print(f"For type {type} measured: {counts}")

    # create the key that is expected to have all the measurements (for this circuit)
    if type == 0:
        key = '0' * input_size
    else:
        key = '1' * input_size

    # correct distribution is measuring the key 100% of the time
    correct_dist = {key: 1.0}

    # use our polarization fidelity rescaling
    fidelity = metrics.polarization_fidelity(counts, correct_dist)

    return counts, fidelity


################ Benchmark Loop

# Execute program with default parameters
def run(min_qubits=3, max_qubits=8, max_circuits=3, num_shots=100,
        backend_id='qasm_simulator', provider_backend=None,
        hub="ibm-q", group="open", project="main", exec_options=None):
    print("Deutsch-Jozsa Benchmark Program - Qiskit")

    # validate parameters (smallest circuit is 3 qubits)
    max_qubits = max(3, max_qubits)
    min_qubits = min(max(3, min_qubits), max_qubits)
    # print(f"min, max qubits = {min_qubits} {max_qubits}")

    # Initialize metrics module
    metrics.init_metrics()

    # Define custom result handler
    def execution_handler(qc, result, num_qubits, type, num_shots):

        # determine fidelity of result set
        num_qubits = int(num_qubits)
        counts, fidelity = analyze_and_print_result(qc, result, num_qubits, int(type), num_shots)
        metrics.store_metric(num_qubits, type, 'fidelity', fidelity)

    # Initialize execution module using the execution result handler above and specified backend_id
    ex.init_execution(execution_handler)
    ex.set_execution_target(backend_id, provider_backend=provider_backend,
                            hub=hub, group=group, project=project, exec_options=exec_options)

    # Execute Benchmark Program N times for multiple circuit sizes
    # Accumulate metrics asynchronously as circuits complete
    for num_qubits in range(min_qubits, max_qubits + 1):

        input_size = num_qubits - 1

        # determine number of circuits to execute for this group
        num_circuits = min(2, max_circuits)

        print(f"************\nExecuting [{num_circuits}] circuits with num_qubits = {num_qubits}")

        # loop over only 2 circuits
        for type in range(num_circuits):
            # create the circuit for given qubit size and secret string, store time metric
            ts = time.time()
            qc = DeutschJozsa(num_qubits, type)
            metrics.store_metric(num_qubits, type, 'create_time', time.time() - ts)

            # collapse the sub-circuit levels used in this benchmark (for qiskit)
            qc2 = qc.decompose()

            # submit circuit for execution on target (simulator, cloud simulator, or hardware)
            ex.submit_circuit(qc2, num_qubits, type, num_shots)

        # Wait for some active circuits to complete; report metrics when groups complete
        ex.throttle_execution(metrics.finalize_group)

    # Wait for all active circuits to complete; report metrics when groups complete
    ex.finalize_execution(metrics.finalize_group)

    # print a sample circuit
    print("Sample Circuit:");
    print(QC_ if QC_ != None else "  ... too large!")
    print("\nConstant Oracle 'Uf' =");
    print(C_ORACLE_ if C_ORACLE_ != None else " ... too large or not used!")
    print("\nBalanced Oracle 'Uf' =");
    print(B_ORACLE_ if B_ORACLE_ != None else " ... too large or not used!")

    # Plot metrics for all circuit sizes
    metrics.plot_metrics("Benchmark Results - Deutsch-Jozsa - Qiskit")


# if main, execute method
if __name__ == '__main__': run()

I am getting the below error:

C:\Users\manuc\Documents\Pytorch_Study_Workspace\Benchmark_dj\venv\Scripts\python.exe C:/Users/manuc/Documents/Pytorch_Study_Workspace/Benchmark_dj/main.py
Traceback (most recent call last):
  File "C:\Users\manuc\Documents\Pytorch_Study_Workspace\Benchmark_dj\main.py", line 219, in <module>
    if __name__ == '__main__': run()
  File "C:\Users\manuc\Documents\Pytorch_Study_Workspace\Benchmark_dj\main.py", line 161, in run
    metrics.init_metrics()
AttributeError: module 'metrics' has no attribute 'init_metrics'
Deutsch-Jozsa Benchmark Program - Qiskit

Process finished with exit code 1

Please guide me on how to correct this error.

"Default Noise Model in execute.py incorrectly uses depolarizing_error for amplitude_damping_error

# Add amplitude damping error to all single qubit gates with error rate 0.0%
#                         and to all two qubit gates with error rate 0.0%
amp_damp_one_qb_error = 0.0
amp_damp_two_qb_error = 0.0
noise.add_all_qubit_quantum_error(depolarizing_error(amp_damp_one_qb_error, 1), ['rx', 'ry', 'rz'])    #line number = 193
noise.add_all_qubit_quantum_error(depolarizing_error(amp_damp_two_qb_error, 2), ['cx'])                 #line number = 194
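If the intent really was amplitude damping, one possible correction is sketched below (qiskit_aer's amplitude_damping_error is a single-qubit error, so the two-qubit version here is built by tensoring two copies; this is a sketch, not the repository's code):

from qiskit_aer.noise import NoiseModel, amplitude_damping_error

amp_damp_one_qb_error = 0.0
amp_damp_two_qb_error = 0.0

noise = NoiseModel()
noise.add_all_qubit_quantum_error(
    amplitude_damping_error(amp_damp_one_qb_error), ['rx', 'ry', 'rz'])
noise.add_all_qubit_quantum_error(
    amplitude_damping_error(amp_damp_two_qb_error).tensor(
        amplitude_damping_error(amp_damp_two_qb_error)), ['cx'])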

Bug: Max Qubit number limit Deutsch-Jozsa

Running the current version of the Deutsch-Jozsa algorithm with >21Q gives an error:

----> 1 cir = dj_benchmark.DeutschJozsa(22, 'constant_circuit')
      2 transpile(cir, basis_gates=['x', 'sx', 'rz', 'cx'], optimization_level=3).draw()

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\deutsch-jozsa/qiskit\dj_benchmark.py:93, in DeutschJozsa(num_qubits, type)
     91 # Add a constant or balanced oracle function
     92 if type == 0: Uf = constant_oracle(input_size, num_qubits)
---> 93 else: Uf = balanced_oracle(input_size, num_qubits)
     94 qc.append(Uf, qr)
     96 qc.barrier()

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\deutsch-jozsa/qiskit\dj_benchmark.py:54, in balanced_oracle(input_size, num_qubits)
     52 b_str = "10101010101010101010"              # permit input_string up to 20 chars
     53 for qubit in range(input_size):
---> 54     if b_str[qubit] == '1':
     55         qc.x(qubit)
     57 qc.barrier()

IndexError: string index out of range

Possible Fixes:

  • Expand the bit string b_str in dj_benchmark.py by a few more characters
  • Create a dynamic bit string b_str depending on the qubit number / max qubit number (a sketch follows this list)
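A sketch of the second option (replacing the fixed 20-character constant with a string sized to the input register):

# Dynamic alternating bit string, sized to the input register
b_str = ("10" * input_size)[:input_size]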

qiskit v1 broke functionality

qiskit 1.0 does not have execute():

https://docs.quantum.ibm.com/api/qiskit/release-notes/1.0

Qiskit's execute() function is removed. This function served as a high-level wrapper around transpiling a circuit with some transpile options and running it on a backend with some run options. To do the same thing, you can explicitly use the transpile() function (https://docs.quantum.ibm.com/api/qiskit/compiler#qiskit.compiler.transpile) with appropriate transpile options, followed by backend.run() with appropriate run options.

However, _common/qiskit/execute.py deliberately uses execute() instead of run():

 706                 ''' some circuits, like Grover's behave incorrectly if we use run()
 707                 job = backend.run(simulation_circuits, shots=shots,
 708                     noise_model=this_noise, basis_gates=this_noise.basis_gates,
 709                     **backend_exec_options_copy)
 710                 '''   
 711                 job = execute(simulation_circuits, backend, shots=shots,
 712                     noise_model=this_noise, basis_gates=this_noise.basis_gates,
 713                     **backend_exec_options_copy)

I wonder what the way forward with qiskit 1.0 is?
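One possible migration, following the release note (a sketch only; whether Grover's still misbehaves when run() is used would need to be re-verified):

from qiskit import transpile

# explicit transpile + backend.run(), replacing the removed execute()
circuits = transpile(simulation_circuits, backend)
job = backend.run(circuits, shots=shots,
    noise_model=this_noise, basis_gates=this_noise.basis_gates,
    **backend_exec_options_copy)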

Qiskit transpilation for circuit depth determination is slow; cache data for performance

When executing benchmarks in Qiskit, the normalized circuit depth is calculated using transpile with a default set of basis gates. This can take a long time for circuits of larger qubit widths.

A solution may be to write the data to a JSON file. Load the JSON file at the beginning of each benchmark in the 'set_execution_target()' method. If the required depth info exists in the file, use it and avoid the transpilation.

The downside of this is that the data can be out of date if the circuit definition changes.
To eliminate this issue, create a hash key of the circuit definition and store it with the data file.
That way if the circuit definition changes, the data will be seen as 'old' and will be replaced.

Note: the data file should be saved with a generic name; the data contained within may be used in the Cirq, Braket, and Q# benchmarks ... but should always be recomputed in Qiskit, so the hash remains the same.

NOTE: before implementing, we need to confirm that this is a performance bottleneck. Some questions remain as to whether this is actually the cause.
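A sketch of the proposed cache (the file name and helper names here are hypothetical):

import hashlib, json, os

DEPTH_CACHE_FILE = "transpiled_depths.json"   # hypothetical generic file name

def circuit_hash(qc):
    # hash a canonical text form of the circuit definition
    return hashlib.sha256(str(qc).encode()).hexdigest()

def load_cached_depth(qc):
    # return the cached depth, or None if the circuit changed or was never seen
    if not os.path.exists(DEPTH_CACHE_FILE):
        return None
    with open(DEPTH_CACHE_FILE) as f:
        cache = json.load(f)
    return cache.get(circuit_hash(qc))

def store_cached_depth(qc, depth):
    cache = {}
    if os.path.exists(DEPTH_CACHE_FILE):
        with open(DEPTH_CACHE_FILE) as f:
            cache = json.load(f)
    cache[circuit_hash(qc)] = depth
    with open(DEPTH_CACHE_FILE, "w") as f:
        json.dump(cache, f)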

Add High Level Intuition README sections

Similar to Scott Aaronson's description of Shor's algorithm, add an intuition section to each algorithm README describing at a high level how the algorithm works (ideally not more than a quick paragraph). It probably makes the most sense to add this just after the Problem Outline section. We could then let people know that the first sections (problem outline, benchmarking, intuition) are good introductions to the algorithms, and the other sections can get a bit more involved.

  • Deutsch-Josza
  • Bernstein-Vazirani
  • Hidden-Shift
  • Quantum-Fourier-Transform
  • Grover's Search
  • Phase Estimation
  • Amplitude Estimation
  • Monte Carlo Sampling
  • Hamiltonian Simulation
  • VQE
  • Shor's Factoring Algorithm

Ideally this section will have a visualization from the Quantum Circuit Composer describing via Bloch spheres what the algorithm is doing. Must include a section somewhere in the repo describing the Bloch sphere.

  • Include Bloch Sphere documentation

Visualization is only really possible/helpful for DJ, BV, and QFT. The other algorithms have too much entanglement, meaning that a visualization like the one we include in the DJ gif is impossible. How do we feel about that idea? One suggestion would be to add these visualizations only to those benchmarks; we would probably want this to be a separate section, distinct from intuition.

Figure 9 from paper does not use proper depth

In the paper https://arxiv.org/abs/2110.03137 it is said that the depth calculation for circuits is done using the basis set ['rx', 'ry', 'rz', 'cx']. However, when looking at Fig. 9 of the paper, the reported depth does not match the depth obtained when the circuit is decomposed to the indicated basis. Namely, the routine just does a decompose()

before passing the circuit on to execute, which computes the depth of this decomposition:

This does not decompose to the correct basis, and the circuits are much shorter than they should be. At 7 qubits, the decomposed depth is 31, which matches Fig. 9, but the actual depth in the correct basis is 78. The same is true for the circuits that are swap-mapped to the Casablanca system, where I get an average depth of 117.
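The discrepancy can be reproduced with a sketch like the following, where qc is assumed to be one of the benchmark circuits:

from qiskit import transpile

depth_decomposed = qc.decompose().depth()
depth_normalized = transpile(
    qc, basis_gates=['rx', 'ry', 'rz', 'cx'], optimization_level=0).depth()
print(depth_decomposed, depth_normalized)   # e.g. 31 vs. 78 at 7 qubits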

Bug: Max Qubit number limit Hamiltonian Simulation

Running the current Hamiltonian Simulation with >20Q gives:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[15], line 4
      2 sys.path.insert(1, "hamiltonian-simulation/qiskit")
      3 import hamiltonian_simulation_benchmark
----> 4 hamiltonian_simulation_benchmark.run(min_qubits=min_qubits, max_qubits=max_qubits, skip_qubits=skip_qubits,
      5                 max_circuits=max_circuits, num_shots=num_shots,
      6                 backend_id=backend_id, provider_backend=provider_backend,
      7                 hub=hub, group=group, project=project, exec_options=exec_options)

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\hamiltonian-simulation/qiskit\hamiltonian_simulation_benchmark.py:286, in run(min_qubits, max_qubits, max_circuits, skip_qubits, num_shots, use_XX_YY_ZZ_gates, backend_id, provider_backend, hub, group, project, exec_options, context)
    284 h_x = precalculated_data['h_x'][:num_qubits] # precalculated random numbers between [-1, 1]
    285 h_z = precalculated_data['h_z'][:num_qubits]
--> 286 qc = HamiltonianSimulation(num_qubits, K=k, t=t, w=w, h_x= h_x, h_z=h_z)
    287 metrics.store_metric(num_qubits, circuit_id, 'create_time', time.time() - ts)
    289 # collapse the sub-circuits used in this benchmark (for qiskit)

File c:\Users\2J7579897\Notebooks_general\QEDC-benchmark\QC-App-Oriented-Benchmarks\hamiltonian-simulation/qiskit\hamiltonian_simulation_benchmark.py:70, in HamiltonianSimulation(n_spins, K, t, w, h_x, h_z)
     66 # loop over each trotter step, adding gates to the circuit defining the hamiltonian
     67 for k in range(K):
     68 
     69     # the Pauli spin vector product
---> 70     [qc.rx(2 * tau * w * h_x[i], qr[i]) for i in range(n_spins)]
     71     [qc.rz(2 * tau * w * h_z[i], qr[i]) for i in range(n_spins)]
     72     qc.barrier()
...
---> 70     [qc.rx(2 * tau * w * h_x[i], qr[i]) for i in range(n_spins)]
     71     [qc.rz(2 * tau * w * h_z[i], qr[i]) for i in range(n_spins)]
     72     qc.barrier()

Possible Fix:
In precalculated_data.ipynb, increase the hardcoded limits:

  • line 19: precalculated_data['h_x'] = list(2 * np.random.random(39) - 1)
  • line 21: precalculated_data['h_z'] = list(2 * np.random.random(29) - 1)
  • line 25: for n_spins in range(2,40)

or make the limits dynamic based on max_qubit_number; however, this would require a recalculation with every initialization (a sketch follows).
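A sketch of the dynamic alternative (max_qubits here is an assumed configuration value, not existing code):

import numpy as np

max_qubits = 40
precalculated_data = {}
precalculated_data['h_x'] = list(2 * np.random.random(max_qubits) - 1)
precalculated_data['h_z'] = list(2 * np.random.random(max_qubits) - 1)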

Problem in executing the vqe code

Hello Sir,
I am having a problem executing the VQE code.
The error I am getting is shown below:
(screenshot of the error not reproduced here)
I tried installing these packages, but the error is still not resolved.
I need your help.

VQE method 2 with qiskit raises exception

Hello, I was trying to run VQE method 2 with qiskit, but it raises the following exceptions:

Executing [17] circuits with num_qubits = 4
ERROR: failed to execute result_handler for circuit 4 ZIII
... exception = unsupported operand type(s) for *: 'dict' and 'float'
ERROR: failed to execute result_handler for circuit 4 XXII
... exception = unsupported operand type(s) for *: 'dict' and 'float'
ERROR: failed to execute result_handler for circuit 4 YYII
... exception = unsupported operand type(s) for *: 'dict' and 'float'

The same exception pops up for all the Pauli strings. VQE method 1 works fine.

Ignore "subtitle" in circuit_metrics

A plot subtitle is added to the circuit_metrics dictionary, then is treated like a circuit group in metrics.py. This key should be ignored when collecting metrics, or the subtitle should be passed in a different way.

No ability to specify which qubits used in Qiskit transpiler

The benchmarking suite has no way to specify which qubits are used in the execution of a given circuit, i.e. one cannot define an initial_layout here:

This is nice to have because, for example, in Fig. 11 of https://arxiv.org/abs/2110.03137 you look at dynamic Bernstein-Vazirani on the Lagos system, but the 0-1 edge of the coupling map is actually not the best (it is also not the worst). On that machine the 3-5 edge is the best in terms of fidelity:

[3, 5] 0.7861328125
[2, 1] 0.77197265625
[5, 4] 0.7678222656250001
[5, 3] 0.7481689453125
[3, 1] 0.7360839843750001
[4, 5] 0.7303466796875
[0, 1] 0.7076416015625001
[1, 2] 0.665771484375
[6, 5] 0.64208984375
[1, 0] 0.6323242187500001
[1, 3] 0.6124267578125001
[5, 6] 0.5809326171875001
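For illustration, Qiskit's transpile() accepts such a layout directly; a sketch of what the suite could expose (qc and backend are assumed to be the benchmark circuit and target device):

from qiskit import transpile

# pin the 2-qubit circuit to physical qubits 3 and 5, the best edge above
transpiled_qc = transpile(qc, backend=backend, initial_layout=[3, 5])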

MCX Shim in braket grover's doesn't work

In trying to implement the new Grover's fidelity calculation in braket, I noticed that the fidelity wasn't 100% in the Grover's braket benchmark when running a noiseless simulation, as shown in the first screenshot (Grover's Search metrics; not reproduced here).

These results are also reproducible with the current qiskit Grover's benchmark by setting use_mcx_shim=True, as shown in the second screenshot (not reproduced here).

I am not sure exactly what happened, because I ran the initial braket code, as in the commit which added the mcx shim, and the results were the same. I would think that an update may have broken the functionality, but it seems odd that the results are the same in both braket and qiskit when use_mcx_shim=True.

We could follow something similar to this reference to implement the mcx iteratively using Gray codes. Then again, this is the same issue pretty much as #10, which we decided we weren't going to fix until braket is updated to include these features.

polarization fidelity is not a valid comparator

The QED-C benchmarks, and paper, use the polarization fidelity as the comparison metric amongst applications and differing numbers of qubits. This is given by Eq. (2) of the paper (https://arxiv.org/abs/2110.03137). In the plots this fidelity is reported on the interval [0,1] (see, e.g., the screenshot omitted here).

However, the polarization fidelity is not defined over the interval [0,1]. Instead, the lower bound is negative and is given by

    -(1/2**N) / (1 - 1/2**N)

where N is the number of qubits. Therefore the range [0,1] is only valid in the large N limit. More importantly, this lower bound is qubit number specific. Therefore this fidelity cannot be used as a comparator across differing numbers of qubits, as done in the tests and paper.

The polarization fidelity should, for example, be shifted and rescaled so that the range [0,1] is valid across all numbers of qubits.
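One such shift-and-rescale is sketched below (a sketch of the suggestion only, not an agreed fix): map the qubit-dependent lower bound back to 0 so that values are comparable across qubit counts.

# Sketch: map [lower_bound, 1] onto [0, 1] for any number of qubits.
def rescaled_polarization_fidelity(f_pol, num_qubits):
    lower_bound = -(1 / 2**num_qubits) / (1 - 1 / 2**num_qubits)
    return (f_pol - lower_bound) / (1 - lower_bound)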

`execute` performance vs simple qiskit execute calls on simulator

Using our execution code introduces a lot of overhead when using the qiskit simulator. This can be seen as a large increase in runtime, such as 3.5x slower compared to using qiskit.execute(). I believe a lot of this is due to the sleep functionality in the qiskit execute file, needed when running on hardware:

# delay a bit, increasing the delay periodically
sleeptime = 0.25
if pollcount > 6: sleeptime = 0.5
if pollcount > 60: sleeptime = 1.0
time.sleep(sleeptime)

A potential solution is to avoid this sleeping when the module detects that we are using the simulator. However, we don't want to break this functionality while we are working on finishing phase 1 of the project, so we can defer this issue until later.
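A sketch of that idea (the backend_id check is hypothetical; the real detection logic would live in _common/qiskit/execute.py):

# skip the polling delay entirely when targeting a local simulator
if "simulator" not in backend_id:
    sleeptime = 0.25
    if pollcount > 6: sleeptime = 0.5
    if pollcount > 60: sleeptime = 1.0
    time.sleep(sleeptime)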

Allowing for one-click access and minimal environment setup with qBraid

Hello, I'd like to suggest adding a feature similar to Binder, where users will be able to launch the benchmarking notebooks with pre-made python environments for running them. It only requires adding the launch-on-qBraid button in the following PR: #446, and users can select the QED-C benchmarking environments for the respective software frameworks, such as cirq, qiskit, etc., that are freely available on qBraid.
