What’s the Best Quantum Computer Today?


“What’s the best quantum computer today?” is a question people learning about quantum computing often ask. It’s an excellent question, and, like many other excellent questions, the answer is: “it’s complicated.”  


First, you have to define “best,” much as you would if the question were instead, “What’s the best classical computer?” For now, let’s focus on the raw power of the computer. This is fairly simple to quantify for classical computers: we measure how many floating-point operations per second (FLOPS) a machine can perform, which lets us rank the most powerful supercomputers in the world.


Then the question becomes, “What is the equivalent of FLOPS for a quantum computer?” This question is hard to answer.


There are multiple proposals for methods to give a single number to benchmark the performance of a quantum computer. Researchers at IBM have proposed Quantum Volume, IonQ has created Algorithmic Qubits, and we at Zapata have proposed application benchmarks. All of these benchmark methods measure different aspects of how “powerful” a quantum computer is.  


“Well, if there are so many different methods, why not choose one of those? What makes this one so special?” 


As mentioned before, each benchmark measures a different aspect of a quantum computer, and this new benchmark method recently allowed us to characterize yet another one. Specifically, we looked at the trade-offs between quantum and classical resources in hybrid quantum-classical workflows, and we measured how many quantum resources the quantum computer could use before it was “overwhelmed” by noise. As it turned out, there was another “knob” we could tune: the amount of classical pre-processing done before the quantum algorithm takes over.


Currently, it’s always best to use the maximum amount of classical pre-processing, because classical computing handles that task far better than a current noisy intermediate-scale quantum (NISQ) processor can. As quantum hardware improves, however, it may become advantageous to devote more quantum resources to this step.


The type of benchmark in this paper looks at how a quantum computer performs a specific task, so it isn’t entirely analogous to measuring a base value like FLOPS on a classical computer. Rather, it is more like a benchmark where your classical computer performs a known task, such as Cinebench, if Cinebench had some unknown setting (say, the resolution of the output) that gave you the best result with the least noise.

[Chart: expected vs. real device output performance]


As we increase the amount of quantum resources we ask the device to utilize, under ideal circumstances we should see performance increase, much as a program runs faster when given more and more CPU power. With a quantum computer, however, the more resources you ask for, the more you are exposed to noise and error, which degrades performance. There is therefore a “maximum effective hardware resource” threshold: using resources beyond this point is counter-productive, so knowing where it lies is important for getting the most out of a noisy quantum computer.
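This trade-off can be sketched with a toy model (not from the paper): assume ideal performance grows linearly with the amount of quantum resources, while each added unit of resource compounds the noise, so fidelity decays exponentially. The 2% error rate and the linear-gain assumption are invented purely for illustration.

```python
def effective_performance(r, error_rate=0.02):
    """Toy model: ideal gain grows linearly with quantum resources r
    (e.g., circuit depth), while fidelity decays exponentially as
    noise compounds with every added unit of resource."""
    return r * (1 - error_rate) ** r

# Scan resource levels to locate the "maximum effective hardware
# resource" threshold, beyond which more resources hurt performance.
levels = range(1, 201)
best_r = max(levels, key=effective_performance)
print(best_r)  # the peak sits near 1 / error_rate, i.e. around 50 here
```

In this simple model the curve rises, peaks, and then falls, which is exactly the shape of the trade-off described above; a real device’s curve would have to be measured, not derived.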


Imagine a shaky table with some dominoes on it. This shaky table will be our quantum computer, and the dominoes represent “units of computation.” The more dominoes we can put on the table standing upright, the more computation we can do. But there’s a catch: as we put more and more dominoes on the table, it becomes more and more likely that one will fall down and begin to knock over others and “cascade” through the rest of the dominoes.  

Here is a video demonstrating what is described above: 


In the video, you can see that once the dominoes are close enough to “rely” on each other, one falling knocks over others. As we increase the number of dominoes, the number of upright or “usable” dominoes increases, until we reach a point where more fall over than are added. We can see that in the graph showing the “device output performance” above, as well as in this data from the dominoes:
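The domino analogy itself can be turned into a quick Monte Carlo sketch. Every number below (the crowding factor, the trial count) is made up for illustration; the only assumption is that each domino’s chance of falling grows linearly with how crowded the table is.

```python
import random

random.seed(7)  # make runs reproducible

def upright_dominoes(n, crowding=0.004, trials=500):
    """Toy Monte Carlo model of the shaky table: each of n dominoes
    falls with probability crowding * n, so cascades become likelier
    as the table gets more crowded."""
    p_fall = min(1.0, crowding * n)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n) if random.random() >= p_fall)
    return total / trials

# The average number of upright ("usable") dominoes rises with n,
# then collapses once falls dominate -- the same shape as the
# device output performance curve.
for n in (25, 125, 250):
    print(n, upright_dominoes(n))
```

Just as with the quantum device, there is a sweet spot: past it, adding dominoes (resources) leaves you with fewer usable ones than before.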


Because no two quantum computers are identical (errors and noise differ from device to device), having a benchmark like this is important for discovering that point of “maximum effectiveness” for each device. In our work we used Variational Quantum Factoring (VQF) to do the benchmarking. The quantum part of VQF uses the Quantum Approximate Optimization Algorithm (QAOA), so the point of maximum effectiveness we found can also inform the use of other QAOA-based algorithms. Additionally, running these algorithms on both simulators and real hardware, as we did in the paper, let us examine how different error models affected the algorithm’s outputs compared to the true errors on the device.


The major takeaway here is that this new method of benchmarking quantum devices gives a better idea of how a quantum computer will perform when real problems are run on it. This sets the method in our paper apart: it uses a much more “realistically useful” problem and algorithm than many of the other proposed benchmarking methods.


Doing this before running an experiment or deploying the quantum computer on a real-world use case can save the hassle of running a small-scale version of the problem, finding that it doesn’t give the best results, and iterating again and again before finally landing on them. This benchmark gives a clear way of finding the threshold ahead of time, saving time and effort. It’s also likely that different quantum processors will be better suited to different applications.


Imagine having several different application benchmarks (e.g., VQF, VQE, QCBM) for a set of quantum hardware providers. With this information, you would have a better idea of which hardware vendor would give the best results for your use case.
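As a sketch of how such a comparison might look, here is a toy score table; the vendors, applications, and numbers are all invented, and only the selection logic is the point.

```python
# Hypothetical benchmark scores (higher is better); every name and
# number here is made up purely for illustration.
scores = {
    "vendor_A": {"VQF": 0.72, "VQE": 0.65, "QCBM": 0.40},
    "vendor_B": {"VQF": 0.58, "VQE": 0.80, "QCBM": 0.55},
}

def best_vendor_for(application):
    """Return the provider with the top score for a given application."""
    return max(scores, key=lambda vendor: scores[vendor][application])

print(best_vendor_for("VQF"))   # vendor_A wins on factoring here
print(best_vendor_for("QCBM"))  # vendor_B wins on generative modeling
```

With real benchmark data in place of these made-up scores, the same lookup would tell you which hardware to target for each workload.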


You can read the full paper here for more information: https://www.zapatacomputing.com/publications/analyzing-the-performance-of-variational-quantum-factoring-on-a-superconducting-quantum-processor/

Ethan Hansen
Marketing & Product Intern, Zapata