OpenSuperQPlus partners contribute to EU Quantum Flagship’s Key Performance Indicators for Quantum Computing

Shortly before Christmas, OpenSuperQPlus researchers published a new paper on scalable quantum benchmarks together with research teams from the MILLENION and SPINUS projects; all three projects are funded by the European Union.

The rapid development of quantum computers, in terms of size as well as complexity, requires a set of well-defined, reproducible, and technology-agnostic benchmarks. This is both the motivation and the starting point of this recent paper on benchmarks covering complementary aspects of quantum computing. These benchmarks are volumetric, meaning they measure the effective size of a quantum computer, and they use different methods to identify only those qubits that can actually be used in a quantum computation.

According to the authors, currently available benchmarks are often only meaningful for quantum processors with fewer than 50 qubits and are difficult to compare across platforms. They therefore identify scalability and platform independence as the core requirements for future quantum benchmarks.

In order to foster a holistic improvement of quantum systems, the paper presents four Key Performance Indicator Benchmarks, ranging from assessing the overall computing capacity via demonstrations of scalability and fault-tolerant performance to algorithmic primitives. The so-called Clifford-Volume Benchmark evaluates the quantum computing volume, that is, the largest computational task a device can solve within acceptable error limits. The Multi-Partite Entanglement Benchmark measures the size of the largest genuinely entangled state a device can produce by creating and verifying Greenberger-Horne-Zeilinger (GHZ) states, a class of quantum states exhibiting genuine multipartite entanglement. The third benchmark is algorithmic and builds on the period-finding component of the well-known Shor algorithm. The fourth and final benchmark focuses on the quantum hardware itself and is designed to quantify the performance of quantum error correction during a computation.
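To make the GHZ-based benchmark more concrete, the following is a minimal sketch of its core primitive, preparing an n-qubit GHZ state, written in Python against the Qiskit library. The linear CNOT chain, the function name, and the example size are illustrative assumptions; the paper defines its own full protocol for state preparation, fidelity estimation, and scoring.

```python
# Minimal sketch of the primitive behind a GHZ-based entanglement
# benchmark (illustrative, not the paper's prescribed protocol).
from qiskit import QuantumCircuit

def ghz_circuit(n: int) -> QuantumCircuit:
    """Prepare the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)."""
    qc = QuantumCircuit(n)
    qc.h(0)                  # superposition on the first qubit
    for i in range(n - 1):
        qc.cx(i, i + 1)      # chain of CNOTs spreads the entanglement
    qc.measure_all()         # measure in the computational basis
    return qc

# Example: a 5-qubit GHZ circuit. A benchmark run of this kind would
# increase n until the measured GHZ fidelity drops below 0.5, the
# standard witness threshold for genuine multipartite entanglement.
print(ghz_circuit(5))
```

The linear CNOT chain keeps the circuit simple but grows in depth with n; on hardware with better connectivity, a logarithmic-depth fan-out can prepare the same state with less exposure to decoherence.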

The article provides detailed background information for each benchmark, a step-by-step description of each benchmark protocol, and a definition of the benchmark scores and reporting methodology. In particular, it outlines which types of approaches may be used to improve benchmark results and which count as cheating because they do not scale. However, the paper is intended not as a fixed set of recommendations, but rather as a starting point for further discussion. Hence, the article concludes by encouraging the scientific community to contribute to the further development of the key performance indicators for quantum computing. This community-driven approach stands in contrast to benchmarks published by smaller teams or commercial entities, which sometimes tend towards “benchmarketing” to promote their own technology.

The full article is available as a preprint here.