Method for assessing quantum computer hardware capabilities
Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates these error rates by implementing long sequences of randomly sampled quantum gate operations.[1]
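In the standard analysis, the survival probability after a random sequence of m gates is fit to an exponential decay, from which an average error rate per gate is extracted. The following is a minimal synthetic sketch in Python (NumPy/SciPy), assuming the common zeroth-order fit model P(m) = A·pᵐ + B for a single qubit; the parameter values, sequence lengths, and shot count are illustrative, not prescribed by any particular hardware platform.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Illustrative ground-truth decay parameters: depolarizing parameter p,
# and SPAM-dependent amplitude A and offset B (assumed values).
p_true, A_true, B_true = 0.98, 0.5, 0.5

# Sequence lengths m and number of measurement shots per length.
lengths = np.arange(1, 201, 10)
shots = 1000

# Simulate measured survival probabilities with binomial shot noise,
# standing in for data from real randomized gate sequences.
ideal = A_true * p_true**lengths + B_true
measured = rng.binomial(shots, ideal) / shots

def decay_model(m, A, p, B):
    """Zeroth-order RB fit model: P(m) = A * p**m + B."""
    return A * p**m + B

# Fit the decay and convert p to an average error rate per Clifford,
# r = (1 - p)(d - 1)/d with d = 2 for a single qubit.
(popt, _) = curve_fit(decay_model, lengths, measured, p0=[0.5, 0.9, 0.5])
A_fit, p_fit, B_fit = popt
r_avg = (1 - p_fit) * (2 - 1) / 2

print(f"fitted p = {p_fit:.4f}, average error per gate ≈ {r_avg:.5f}")
```

Because p is estimated from the shape of the decay rather than from absolute probabilities, the result is insensitive to the state-preparation and measurement errors absorbed into A and B, which is a key motivation for the protocol.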
Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM[2] and Google[3] to test the performance of their quantum operations.
The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators in 2005,[1] considered the implementation of sequences of Haar-random operations, but this had several practical limitations. The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al.[4] as an application of the theory of unitary t-designs. In current usage, randomized benchmarking sometimes refers to the broader family of generalizations of the 2005 protocol involving different random gate sets[5][6][7][8][9][10][11][12][13][14] that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures.[15]